Performance Testing With Puppeteer Cluster

By Sam Banks on April 20, 2020

Ever wished you could perform load testing against your infrastructure with an army of instances running actual web browsers?

Traditional load testing tools fall short at emulating real-world traffic, so we created a solution to do just that.

Rather than writing something from scratch, we assembled a rag-tag crew of existing tools, each designed for other tasks, into a rough-around-the-edges but extremely useful solution.

Performance Stack Diagram

The advantages of this approach:

  • Holistically test and tune the entire platform, not just the parts you write tests for
  • Accurately simulate real browser based traffic, downloading the whole app and all remote assets
  • Hybrid functional/load testing capability, with the ability to log in, navigate, and input data, regardless of the underlying technology stack

Let's look at how we put this together.

Puppeteer

The core tool we are using is Puppeteer, a Node library which provides a high-level API to control Chrome or Chromium over the DevTools Protocol.

Puppeteer runs headless by default, but can be configured to run full (non-headless) Chrome or Chromium. Any JavaScript library can be imported, making it incredibly powerful for interacting with cloud providers via their SDKs.

You can even use a browser plugin such as Puppeteer Recorder to record website interactions and save them as a Puppeteer script! Functional testing doesn't get much easier.

Here is an example of a simple Puppeteer script, which goes to https://stackchat.com/, has a conversation with the bot, then takes a screenshot:

Create stackChat.js Puppeteer script

const puppeteer = require("puppeteer");

(async () => {
  const browser = await puppeteer.launch({ headless: false, slowMo: 50 });
  const page = await browser.newPage();

  const url = "https://stackchat.com/";
  const messengerFrameSelector = "#web-messenger-container";
  const messengerBubbleSelector = "#messenger-button";
  const messengerInputSelector = "#footer > form > div > textarea";
  const clickDelay = 3000;
  const waitTimeout = 5000;

  // Poll the frame until the bot's reply text appears in the page body
  const waitForResponse = async (frame, response) => {
    console.log(`Waiting for ${response}`);
    await frame.waitForFunction(
      `document.querySelector("body").innerText.includes("${response}")`
    );
  };

  const sendMessage = async (input, message) => {
    console.log(`Type message ${message}`);
    await input.type(message);
    await input.press("Enter");
  };

  // Click the nth quick-reply button in the conversation
  const clickButton = async (frame, button) => {
    console.log(`Clicking button ${button}`);
    const buttonSelector = await frame.waitForSelector(
      `#conversation > div.messages-container > div > div.reply-container > button:nth-child(${button}) > span`
    );
    await buttonSelector.click();
  };

  async function timeout(ms) {
    console.log(`Pause for ${ms / 1000} seconds`);
    return new Promise(resolve => setTimeout(resolve, ms));
  }

  await page.goto(url);
  await page.setViewport({ width: 1024, height: 768 });
  console.log("get frame selector");
  await page.waitForSelector(messengerFrameSelector, {
    timeout: waitTimeout
  });
  const messengerFrameContainer = await page.$(messengerFrameSelector);
  const messengerFrame = await messengerFrameContainer.contentFrame();
  const messengerBubble = await messengerFrame.waitForSelector(
    messengerBubbleSelector,
    { timeout: waitTimeout }
  );
  console.log("Page and Messenger ready");
  await messengerBubble.click();
  console.log("Wait for messenger input");
  const messageInput = await messengerFrame.waitForSelector(
    messengerInputSelector,
    { timeout: waitTimeout }
  );

  await sendMessage(messageInput, "Hello");
  await waitForResponse(messengerFrame, "What do you think?");
  await timeout(clickDelay);
  await clickButton(messengerFrame, 1);
  await waitForResponse(messengerFrame, "What should l call you?");
  await sendMessage(messageInput, "Tony Harrison");
  await timeout(clickDelay * 2);
  await clickButton(messengerFrame, 2);

  // An array of known messenger responses to react to
  const responses = [
    "hear about the features?",
    "Sounds pretty cool, huh?",
    "Still following?",
    "integrating social channels",
    "AI capabilities?",
    "following along alright?",
    "Sound good?",
    "a sneaky feature?",
    "competitors basic bots!"
  ];

  for (const response of responses) {
    await waitForResponse(messengerFrame, response);
    await timeout(clickDelay);
    await clickButton(messengerFrame, 1);
  }

  await waitForResponse(messengerFrame, "to do next?");
  await timeout(2000);
  console.log("Taking screenshot and exiting");
  await page.screenshot({ path: "screenshot.png" });
  await browser.close();
})();

Install Puppeteer and run the script:

npm i puppeteer
node stackChat.js
Magic Gif

Click "Neither of these" at the end before it times out, and the script will take a screenshot and exit.

Pretty cool huh?

Puppeteer Cluster

The next layer of the stack is Puppeteer Cluster, a library that spawns a pool of Chromium instances via Puppeteer and helps to keep track of jobs and errors. This is helpful if you want to crawl multiple pages or run tests in parallel.

Let's take our previous script and update it to run on a cluster of 10 Chrome instances.

We have removed the pauses and logging, as we won't be watching the individual progress of nodes.

const { Cluster } = require("puppeteer-cluster");

(async () => {
  // Create cluster with 10 workers
  const cluster = await Cluster.launch({
    concurrency: Cluster.CONCURRENCY_CONTEXT,
    maxConcurrency: 10,
    monitor: true,
    timeout: 500000
  });

  // Print errors to console
  cluster.on("taskerror", (err, data) => {
    console.log(`Error crawling ${data}: ${err.message}`);
  });

  await cluster.task(async ({ page, data: url, worker }) => {
    const messengerFrameSelector = "#web-messenger-container";
    const messengerBubbleSelector = "#messenger-button";
    const messengerInputSelector = "#footer > form > div > textarea";
    const waitTimeout = 5000;

    // Poll the frame until the bot's reply text appears in the page body
    const waitForResponse = async (frame, response) => {
      await frame.waitForFunction(
        `document.querySelector("body").innerText.includes("${response}")`
      );
    };

    const sendMessage = async (input, message) => {
      await input.type(message);
      await input.press("Enter");
    };

    // Click the nth quick-reply button in the conversation
    const clickButton = async (frame, button) => {
      const buttonSelector = await frame.waitForSelector(
        `#conversation > div.messages-container > div > div.reply-container > button:nth-child(${button}) > span`
      );
      await buttonSelector.click();
    };

    await page.goto(url);
    await page.setViewport({ width: 1024, height: 768 });
    await page.waitForSelector(messengerFrameSelector, {
      timeout: waitTimeout
    });
    const messengerFrameContainer = await page.$(messengerFrameSelector);
    const messengerFrame = await messengerFrameContainer.contentFrame();
    const messengerBubble = await messengerFrame.waitForSelector(
      messengerBubbleSelector,
      { timeout: waitTimeout }
    );
    await messengerBubble.click();
    const messageInput = await messengerFrame.waitForSelector(
      messengerInputSelector,
      { timeout: waitTimeout }
    );

    await sendMessage(messageInput, "Hello");
    await waitForResponse(messengerFrame, "What do you think?");
    await clickButton(messengerFrame, 1);
    await waitForResponse(messengerFrame, "What should l call you?");
    await sendMessage(messageInput, "Tony Harrison");
    await clickButton(messengerFrame, 2);

    // An array of known messenger responses to react to
    const responses = [
      "hear about the features?",
      "Sounds pretty cool, huh?",
      "Still following?",
      "integrating social channels",
      "AI capabilities?",
      "following along alright?",
      "Sound good?",
      "a sneaky feature?",
      "competitors basic bots!"
    ];

    for (const response of responses) {
      await waitForResponse(messengerFrame, response);
      await clickButton(messengerFrame, 1);
    }

    await page.screenshot({ path: `screenshot${worker.id}.png` });
  });

  for (let i = 1; i <= 10; i++) {
    cluster.queue("https://stackchat.com/");
  }
  await cluster.idle();
  await cluster.close();
})();
npm i puppeteer-cluster
node stackChat.js

You should now have ten screenshots. Very cool!

You will notice we were able to use language features such as arrays, loops, and functions. This full language support makes for a very powerful solution: we can pull JavaScript libraries such as the AWS SDK into our tests, which makes this far more flexible than traditional DSL-based tools. It is also more flexible and powerful in many ways than modern tools such as k6, which supports tests written in JavaScript but not the full Node/NPM ecosystem.

As the next three layers require more setup than would make sense to outline here, we will look at them as a group and provide code examples.

Docker

Docker Engine enables containerized applications to run anywhere consistently on any infrastructure, solving “dependency hell” for developers and operations teams, and eliminating the “it works on my laptop!” problem.

Ansible

Ansible is the simplest way to automate apps and IT infrastructure. Application Deployment + Configuration Management + Continuous Delivery.

Buildkite

Buildkite is a platform for running fast, secure, and scalable continuous integration pipelines on your own infrastructure.

Below is the Buildkite pipeline, which takes several input variables and triggers Ansible, passing the values in as extra vars:

steps:
  - block: Test?
    prompt: Please enter the test suite to run
    fields:
      - select: Tests?
        key: test_file
        options:
          - label: Chat Bot Conversation
            value: chatbot
          - label: Website Load test
            value: website

  - block: Agents?
    prompt: Please enter the amount of agents
    fields:
      - text: Agents?
        key: testing_agents

  - block: Instances?
    prompt: Please enter the amount of chrome instances per agent
    fields:
      - text: Instances?
        key: testing_concurrency

  - block: Length?
    prompt: Please enter the amount of messages in the conversation, or the amount of sequential hits
    fields:
      - text: Conversation Length?
        key: testing_conversation_length

  - block: Environment?
    prompt: Please select the desired environment (Integration, Prd, etc.)
    fields:
      - text: Environment?
        key: env

  - block: Account?
    prompt: Please enter the AWS account (dev, prd, etc.)
    fields:
      - select: Account?
        key: acc
        options:
          - label: prd_au
            value: prd_au
          - label: prd_cn
            value: prd_cn
          - label: stg_au
            value: stg_au
          - label: dev_au
            value: dev_au

  - label: Download and run automation code
    command: cd ansible/ &&
      ansible-playbook
      -e acc=$(buildkite-agent meta-data get "acc")
      -e env=$(buildkite-agent meta-data get "env")
      -e test_file=$(buildkite-agent meta-data get "test_file")
      -e testing_agents=$(buildkite-agent meta-data get "testing_agents")
      -e testing_concurrency=$(buildkite-agent meta-data get "testing_concurrency")
      -e testing_conversation_length=$(buildkite-agent meta-data get "testing_conversation_length")
      -e parent_job=${BUILDKITE_JOB_ID}
      -e product=test
      meta_testing.yml
    retry:
      automatic:
        - exit_status: -1 # Agent was lost
          limit: 2
        - exit_status: 255 # Forced agent shutdown
          limit: 2
        - exit_status: 2 # Docker connection aborted
          limit: 2

This triggers a wrapper playbook, which we call a meta playbook, as it dynamically creates new Buildkite jobs based on the testing_agents value entered in Buildkite.

The Ansible meta_testing.yml playbook

---
- hosts: all
  gather_facts: true
  roles:
    - role: add_groups
      inventories:
        - "{{acc}}"
        - puppeteer_results
    - role: dynamic_testing_pipelines
      playbook: testing_puppeteer
      testing_agents: "{{testing_agents}}"
      extra_vars:
        - key: test_file
          value: "{{test_file}}"
        - key: acc
          value: "{{acc}}"
        - key: env
          value: "{{env}}"
        - key: testing_concurrency
          value: "{{testing_concurrency}}"
        - key: testing_agents
          value: "{{testing_agents}}"
        - key: testing_conversation_length
          value: "{{testing_conversation_length}}"
        - key: product
          value: test
        - key: parent_job
          value: "{{parent_job}}"

The dynamic_testing_pipelines role, which creates a pipeline file for each job and uploads it to the currently running Buildkite pipeline

- name: Create {{playbook}} pipeline
  copy:
    dest: "{{playbook}}-pipeline.yml"
    content: |
      steps:
        - label: ':ansible: Playbook {{playbook}}'
          retry:
            automatic:
              - exit_status: -1 # Agent was lost
                limit: 2
              - exit_status: 255 # Forced agent shutdown
                limit: 2
          command:
            cd ansible &&
            ansible-playbook
            -e bk_root=$(cd ../../ && pwd)
            {% for var in extra_vars %}
            -e {{var.key}}={{var.value}}
            {% endfor %}
            {{playbook}}.yml

- name: Upload template as artifact
  shell: buildkite-agent artifact upload {{playbook}}.yml

- name: Upload {{playbook}} pipeline
  shell: buildkite-agent pipeline upload
    {{playbook}}-pipeline.yml
  loop: "{{range(0, testing_agents|int)|list}}"

In this example we have chosen to run 10 instances of Chrome on each of 10 Buildkite agents.

Performance Stack Diagram

Each agent runs the following playbook:

---
- hosts: all
  gather_facts: true
  roles:
    - add_groups
    - aws_sts_assume_role
    - get_facts
    - puppeteer_test

The puppeteer_test role creates a Docker container based on the Buildkite Puppeteer public image on Docker Hub. It then installs puppeteer-cluster and runs the selected JavaScript test file.

After cleaning up the container, it uses the Buildkite CLI to upload the results to the parent Buildkite pipeline.

The puppeteer_test role

---
- name: Register facts
  set_fact:
    "{{item.key}}": "{{item.value}}"
  with_items:
    - key: working_dir
      value: "{{ansible_env.BUILDKITE_BUILD_CHECKOUT_PATH}}"
    - key: container_name
      value: "{{repo|lower+'-'+branch+'-'+ansible_env.BUILDKITE_BUILD_NUMBER}}"

- name: Template test files
  template:
    src: "{{item}}"
    dest: "{{item|regex_replace('.j2')}}"
  loop:
    - "{{test_file}}.js.j2"
    - "package.json.j2"

- name: Create dockerfile
  copy:
    content: |
      FROM buildkite/puppeteer
      RUN useradd -u 999 -m -d /home/buildkite-agent buildkite-agent
    dest: Dockerfile

- name: Create docker image
  docker_image:
    name: "{{container_name}}"
    source: build
    build:
      path: ./
      pull: yes

- name: Docker - Set ownership of buildkite working dir
  docker_container:
    name: "{{container_name}}"
    image: "{{container_name}}"
    detach: no
    env: "{{node_build_env}}"
    volumes:
      - "{{working_dir}}:/buildkite"
    working_dir: /buildkite
    command: chown -R buildkite-agent .
  register: chown_dir
  when: chown_dir is not defined

- name: Docker - NPM install
  docker_container:
    name: "{{container_name}}"
    image: "{{container_name}}"
    detach: no
    env: "{{node_build_env}}"
    volumes:
      - "{{working_dir}}:/buildkite"
    working_dir: /buildkite/ansible
    entrypoint: npm install
    user: buildkite-agent
  register: npm_install

- name: Docker - Run puppeteer-cluster
  docker_container:
    name: "{{container_name}}"
    image: "{{container_name}}"
    detach: no
    volumes:
      - "{{working_dir}}:/buildkite"
    working_dir: /buildkite/ansible
    command: node {{test_file}}.js
    user: buildkite-agent
  register: test_output

- name: Show test output
  debug:
    msg: "{{test_output.ansible_facts.docker_container.Output}}"

- name: Remove containers
  docker_container:
    name: "{{item}}"
    state: absent
  loop:
    - "{{container_name}}"

- name: Upload results as artifacts
  shell: buildkite-agent artifact upload
    --job {{parent_job}} {{item}}
  loop:
    - "errors_{{ansible_env.BUILDKITE_JOB_ID}}.json"
    - "results_{{ansible_env.BUILDKITE_JOB_ID}}.json"
  register: output

Once all the agents have finished testing, the download-artifacts pipeline downloads the results and error files from every agent, compiles them into an HTML report, and uploads the report to Buildkite as an artifact.

puppeteer_results Buildkite pipeline

---
- name: Generate artifact download pipeline
  copy:
    dest: pipeline.yml
    content: |
      steps:
        - wait: ~
          continue_on_failure: true

        - label: Download artifacts
          key: results
          command: cd ansible && ansible-playbook
            -e acc=$(buildkite-agent meta-data get "acc")
            -e env=$(buildkite-agent meta-data get "env")
            -e parent_job={{parent_job}}
            -e product=test
            testing_results.yml
          retry:
            automatic:
              - exit_status: -1 # Agent was lost
                limit: 2
              - exit_status: 255 # Forced agent shutdown
                limit: 2

- name: Upload pipeline
  shell: buildkite-agent pipeline upload pipeline.yml

testing_results.yml Ansible playbook

---
- hosts: all
  gather_facts: true
  roles:
    - add_groups
    - aws_sts_assume_role
    - get_facts
    - results_display

results_display Ansible role

---
- name: Download testing results from parent job
  command: buildkite-agent artifact download
    "*.json" .
    --step "{{parent_job}}"
    --build "{{ansible_env.BUILDKITE_BUILD_ID}}"

- name: Instantiate arrays
  set_fact:
    results_array: []
    errors_array: []

- name: Read results into array
  set_fact:
    results_array: "{{results_array+(lookup('file', item)|from_json)}}"
  with_fileglob:
    - "{{playbook_dir}}/results*"

- name: Read errors into array
  set_fact:
    errors_array: "{{errors_array+(lookup('file', item)|from_json)}}"
  with_fileglob:
    - "{{playbook_dir}}/errors*"

- debug:
    var: errors_array

- name: Generate test html file
  template:
    dest: results.html
    src: results.html.j2

- name: Upload results
  shell: buildkite-agent artifact upload results.html

results.html.j2

<html>
  <head>
    <!--Load the AJAX API-->
    <script
      type="text/javascript"
      src="https://www.gstatic.com/charts/loader.js"
    ></script>
    <script type="text/javascript">

      // Load the Visualization API and the scatter and table packages.
      google.charts.load('current', {'packages':['scatter', 'table']});

      // Set callbacks to run when the Google Visualization API is loaded.
      google.charts.setOnLoadCallback(drawResultsChart);
      google.charts.setOnLoadCallback(drawTable);

      // Draw Charts
      function drawResultsChart() {
        var data = new google.visualization.DataTable();

        data.addColumn('number', 'Timeline (Milliseconds)');
        data.addColumn('number', 'Duration (Milliseconds)');
        data.addRows({{results_array|to_json}});

        var options = {'title':'Results', 'height': 1200};
        var chart = new google.charts.Scatter(document.getElementById('results_chart_div'));

        chart.draw(data, options);
      }

      function drawTable() {
        var data = new google.visualization.DataTable();
        data.addColumn('number', 'Timeline (Milliseconds)');
        data.addColumn('string', 'Error Message');
        data.addRows({{errors_array|to_json}});

        var table = new google.visualization.Table(document.getElementById('table_div'));

        table.draw(data, {showRowNumber: true, width: '100%', height: '100%'});
      }

    </script>
  </head>

  <body>
    <h1>Successful Requests</h1>
    <div id="results_chart_div"></div>
    <h1>Errors</h1>
    <div id="table_div"></div>
  </body>
</html>

And this will spit out the following report:

Performance Stack Diagram

This worked great for us and allowed us to track down and resolve several infrastructure issues that were preventing us from getting the performance we expected from our dynamically scalable Serverless infrastructure design.

Once we had resolved those issues, though, and wanted to see what the platform could really do, we ran into an issue that fell into the nice-problem-to-have category. Now that we had realized the incredible scaling possibilities of a fully Serverless Microservice architecture, we were generating more results than our build agents could handle.

We tried running a test with the following variables:

  • Conversation API (Our busiest API) scaled up to 80 containers
  • Conversation length 1000 messages
  • 160 Buildkite Agents running 10 Chrome instances each

Performance Stack Diagram

As you can see, the tests all ran fine (huzzah!), but the pipeline to compile the results failed.

On investigation, it turned out the results_display Ansible role was trying to read all 1,600,000 data points into an in-memory array, as per the previous playbook. Needless to say, this required a lot of RAM. To get this test over the line we kept upping the size of the build agent, but no matter how large we went, we kept getting:

Performance Stack Diagram

So, to get the test run and the graphs produced, we decided to do this locally. We made a couple of bash scripts to download and concatenate the data:

bk_download_artifacts

#!/bin/bash
# Usage: ./bk_download_artifacts buildNumber pages destDir apiKey
# Example: ./bk_download_artifacts 405 3 ~/Downloads/results 069ads9636as6996as666996as
mkdir -p $3/$1/results
for PAGE in $(seq 1 $2); do
  echo "PAGE - $PAGE"
  ARTIFACTS=$(curl -s -G -H "Authorization: Bearer $4" https://api.buildkite.com/v2/organizations/stackchat/pipelines/testing-puppeteer/builds/$1/artifacts -d "page=$PAGE" -d "per_page=100"|jq '.[].id'|sed 's/\ /\n/g'|sed 's/"//g')
  for ARTIFACT in $ARTIFACTS; do
    S3_URL=$(curl -s -H "Authorization: Bearer $4" "https://api.buildkite.com/v2/organizations/stackchat/pipelines/testing-puppeteer/builds/$1/jobs/a10fd2ee-9ab2-47cf-906a-4170984c3396/artifacts/$ARTIFACT/download"|jq '.url'|sed 's/"//g')
    echo "Downloading $3/$1/results/$ARTIFACT.json"
    curl $S3_URL -o $3/$1/results/$ARTIFACT.json
  done
done

bk_compile_results

#!/bin/bash
# Usage: ./bk_compile_results resultsDir
echo -n '[' > results.json
for i in `ls results/*`; do
  cat $i >> results.json
  echo -n ',' >> results.json
  sed -i 's/,,/,/g' results.json
done
sed -i 's/,$//g' results.json
echo -n ']' >> results.json

This worked a treat, but when we tried to embed those results in our Google Charts page:

JavaScript says no

Performance Stack Diagram

Determined to see those 1.6 million lovely dots and anxious to know what the response times looked like, we decided to try using a Jupyter Notebook instead. This is what we came up with:

{
  "cells": [
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "import pandas as pd"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "import seaborn as sns\n",
        "sns.set(style=\"whitegrid\")"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "import matplotlib.pyplot as plt"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "data = pd.read_json('/home/sbanks/Downloads/results/results.json')"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "df = pd.DataFrame(data)\n",
        "df2 = df.divide(1000)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "df2.columns = ['startTime', 'duration']"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "df2.duration.describe()"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "df2['duration'].quantile(.50)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "df2['duration'].quantile(.95)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "df2['duration'].quantile(.99)"
      ]
    },
    {
      "cell_type": "code",
      "execution_count": null,
      "metadata": {},
      "outputs": [],
      "source": [
        "sns.set(font_scale=5, style=\"whitegrid\")\n",
        "f, ax = plt.subplots(figsize=(80, 40))\n",
        "sns.despine(f, left=True, bottom=True)\n",
        "fig = sns.scatterplot(x=\"startTime\",\n",
        "                      y=\"duration\",\n",
        "                      palette=\"ch:r=-.2,d=.3_r\",\n",
        "                      linewidth=0,\n",
        "                      data=df2,\n",
        "                      ax=ax)\n",
        "plt.xlabel(\"Timeline (Seconds)\")\n",
        "plt.ylabel(\"Response Time (Seconds)\")\n",
        "plt.savefig('timeline.png')\n",
        "plt.show()"
      ]
    }
  ],
  "metadata": {
    "kernelspec": {
      "display_name": "Python 3",
      "language": "python",
      "name": "python3"
    },
    "language_info": {
      "codemirror_mode": {
        "name": "ipython",
        "version": 3
      },
      "file_extension": ".py",
      "mimetype": "text/x-python",
      "name": "python",
      "nbconvert_exporter": "python",
      "pygments_lexer": "ipython3",
      "version": "3.8.2"
    }
  },
  "nbformat": 4,
  "nbformat_minor": 4
}

Quantiles (in seconds)

Performance Stack Diagram
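For a quick sanity check of the same percentiles without spinning up a notebook, here is a nearest-rank sketch in Node (our own helper, not part of the pipeline; results can differ slightly from pandas' interpolated quantiles, and the file path and data shape are assumptions based on the scripts above):

```javascript
// Nearest-rank percentile: the smallest value such that at least
// a fraction q of the observations are less than or equal to it.
function quantile(values, q) {
  const sorted = [...values].sort((a, b) => a - b);
  const idx = Math.max(0, Math.ceil(q * sorted.length) - 1);
  return sorted[idx];
}

// Assumed usage against the combined results file:
// const fs = require("fs");
// const durations = JSON.parse(fs.readFileSync("results.json", "utf8"))
//   .map(([, ms]) => ms / 1000);
// [0.5, 0.95, 0.99].forEach(q =>
//   console.log(`p${q * 100}: ${quantile(durations, q).toFixed(3)}s`));
```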

The generated graph

Performance Stack Diagram

Success! We will eventually integrate Jupyter notebook generation into our pipelines, but for now we are very happy with the results and the confidence they have given us in our platform to quickly scale out horizontally.

Hopefully the code examples here will help you to get started with your own testing.

To read more articles on chat automation and our DevOps practices visit our blog. Visit our contact page to get in touch with the Stackchat team today.