The library at the bottom of this article has been prepared for your use. You are free to download it, tinker with it, edit and modify it, build off of it, implement it into your own automation, and use it for anything you choose. It consists of over a dozen sample Java programs (and a couple of others) that demonstrate various capabilities of our API. These are the samples used during API training, and they were built from internal and external conversations about problems the API can solve.
All of these programs are plug-and-play, but some are simpler to start with if this is your first time delving into and tinkering with the API. We have everything from the simplest functionality like validating a series of test cases and getting their results (TC_Validator.java) to advanced functionalities like complete Pulse campaign creation and scheduling automation (SchedulePulseCampaigns.java).
You will see extensive use of chained API calls throughout these samples. By that, we mean making an API call, parsing the data, and making subsequent API calls based on that data. For example, we can fully automate a test case execution by listing our test cases, getting the ID of the one we want, validating the test case, getting the testRunTicketId, and fetching the results.
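As a rough illustration of that chaining pattern, the sketch below parses one response and feeds a value from it into the next request. The base URL, endpoint path, and response shape are illustrative assumptions, not documented Cyara endpoints; in the samples themselves this logic lives in the ApiHandler class.

```java
// Sketch of call chaining: parse one response, use a value from it in the next call.
import java.net.URI;
import java.net.http.HttpRequest;

public class ChainedRunSketch {
    // Minimal extractor for a string field in flat JSON like {"testRunTicketId":"abc123"}.
    static String field(String json, String name) {
        int i = json.indexOf("\"" + name + "\"");
        if (i < 0) return null;
        int colon = json.indexOf(':', i);
        int start = json.indexOf('"', colon + 1) + 1;
        return json.substring(start, json.indexOf('"', start));
    }

    public static void main(String[] args) {
        // Steps 1-3 (list test cases, pick an ID, validate it) would each be their
        // own HTTP call; here we start from a canned validation response.
        String validateResponse = "{\"testRunTicketId\":\"abc123\"}";

        // Step 4: pull the ticket ID out of the previous call's response.
        String ticketId = field(validateResponse, "testRunTicketId");

        // Step 5: use that ID to build the follow-up request for the results.
        HttpRequest next = HttpRequest.newBuilder()
                .uri(URI.create("https://yourportal.example/api/results/" + ticketId))
                .GET()
                .build();
        System.out.println(next.uri()); // https://yourportal.example/api/results/abc123
    }
}
```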
| Prerequisite | Action | Description |
| --- | --- | --- |
| Java SE SDK | Download | The programming language the Java samples use |
| javac | Configure | Add javac to your PATH to compile Java programs |
| Eclipse | Download | Development environment to run these samples |
The provided samples can be imported directly into Eclipse. After downloading the samples, open Eclipse and, in the top left, click File -> Import as shown below:
In the Import Wizard popup, choose "Existing Projects into Workspace" in the General category and click Next:
Click "Select archive file" and navigate to the downloaded zip. Check the CyaraApiSamples project if it isn't already, and click Finish:
If you are using a different JRE version, you may see the following red exclamation error along with an error in the "Problems" tab:
This error can be easily remedied by changing the project's JRE and compiler settings to fit your environment. To do this, first right click on the red exclamation and click Properties:
Click on Java Build Path on the left hand side of the Properties window, then click on JRE System Library to highlight it:
Click "Edit..." on the right hand side and click the radio button for Workspace default JRE and click Finish:
Back on the Properties window, click Apply, then click on the Java compiler category on the left hand side. Click on the Restore Defaults Button in the bottom right:
Click Yes on the popup, then Apply back on the Properties window, and OK to close it. Your project should now be ready!
To use these samples, you will need credentials.txt. A sample has been provided as part of the download, but you'll need to update it with your values. The order of the fields is:
Your Portal's BaseUrl
Your account number
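For example, following the field order above, a credentials.txt might begin like this. Both values are placeholders; use your own Portal URL and account number, and keep any additional fields shown in the sample file from the download:

```
https://yourportal.example
12345
```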
This is the central class that contains all the API calls used and handles the data to and from the API. It is used by every sample to make the API calls. It also contains commonly used helper methods, such as those for processing JSON and XML.
For a given timeframe, this will grab up to 1000 Pulse calls and produce results showing you every unique prompt and all the different confidence score results for each prompt. This allows you to adjust confidence scores accordingly, which is especially useful when building out your Pulse library. This program works best when all your prompts are blockified without duplicates.
For a given timeframe, this will grab up to 1000 Pulse calls and produce results showing you every unique prompt and all the different threshold time results for each prompt. This allows you to adjust threshold times accordingly, which is especially useful when building out your Pulse library. This program works best when all your prompts are blockified without duplicates.
This sample takes in a test case ID and creates a debug duplicate of it, ANI-spoofing it with a random, unique ANI. The test case is then run, and 60 seconds later the result is output. This is great for automatically executing a call and searching your backend for the unique ANI if it fails.
A basic program to grab all your Pulse calls from a designated timestamp and print out how many failed out of how many calls.
This program will grab every validation that occurred in a campaign over the last X months. It will export to a CSV the duration of each call in minutes, its result, and other miscellaneous info. This data can then be used to calculate usage, errors found, time saved, etc.
A program to automate the creation and usage of a dataset. Once the dataset is created, it's attached to a test case, which is then run. The test case is then used in creating a campaign. That campaign is then run and the step results are printed out.
This program will run through all your blocks and print out all the duplicate blocks, showing the duplicate and the original.
Works best if your blocks follow the single step best practice.
If desired, it also has the capability to replace duplicated blocks with the original and delete the duplicate. THIS CANNOT BE UNDONE!
This basic program will grab up to 1000 calls from the designated start time and print out a URL to any call that had a latency problem (exceeded threshold).
Used as a great introduction to pulse processing and link generation.
Pulls various metrics surrounding connection, initiation, and audio quality for every single Pulse call, across all of your accounts or for a specific account. It can be customized with a date range for deeper analysis. You can then compare your results to the following metrics, which were anonymously gathered for every single Pulse call on the US Portal for March 2021:
Total number of Pulse calls processed for March 2021: 4,857,658
Mean number of Pulse test cases per account: 44
Median number of Pulse test cases per account: 11
Mode number of Pulse test cases per account: 1
95th percentile number of Pulse test cases per account: 201
Largest number of Pulse test cases used by a single account: 646
Mean normalized success percentage of Pulse calls per account: 93%
Median normalized success percentage of Pulse calls per account: 99.03%
Mode normalized success percentage of Pulse calls per account: 99.92%
95th percentile normalized success percentage of Pulse calls per account: 99.97%
Highest normalized success percentage of Pulse calls for a single account: 99.99%
Mean length of connection time in ms: 2446
Median length of connection time in ms: 2204
Mode length of connection time in ms: 2186
95th percentile length of connection time in ms: 4758
Longest connection time in ms: 42553
Mean length of initiation time in ms: 1015
Median length of initiation time in ms: 660
Mode length of initiation time in ms: 10
95th percentile length of initiation time in ms: 2960
Longest initiation time in ms: 247309
Mean length of remaining response times per step in ms: 2975
Median length of remaining response times per step in ms: 1840
Mode length of remaining response times per step in ms: 600
95th percentile length of remaining response times per step in ms: 8509
Longest remaining response time for a step in ms: 508046
Mean confidence score of every prompt: 91%
Median confidence score of every prompt: 96.5%
Mode confidence score of every prompt: 100.0%
95th percentile confidence score of every prompt: 100.0%
This program prints out every current CX Model on the account, and all the test cases generated by that model. Used to show a count of test cases per CX Model. Output is tilde separated.
Grabs all Step 0 and Step 1 Pulse failures from a given time and generates network graphs showing which failed calls are most similar to other failures in terms of audio and transcriptions. Useful for categorizing different connection and initiation failures.
This program requires the PESQ algorithm to run locally on your machine (if you're on Windows, you'll need to install Visual C++). A compiled copy has been supplied, or you can compile it yourself. To do this, download the English PESQ ZIP here and unzip it on your machine. From the command line, cd into the source folder of the algorithm and run the following command:
```
# Link the math library explicitly (required on most Linux toolchains):
gcc -o PESQ *.c -lm
# If your toolchain links libm automatically, this also works:
gcc -o PESQ *.c
```
Then copy the PESQ file into your working directory.
Takes in a test case ID and generates a cleaner, simpler table of the test case with all of its Expect to Hear steps and replies. The table is HTML with inline CSS and is printed out to the console.
Training demo built for Xchange 2020 that uses Cyara's Pulse API to collect call results and generate metrics into an HTML file. A README is located inside of the ZIP download.
This program will change every single menu's PSST in every single CX Model in your account to your specified value. THIS CANNOT BE UNDONE! Used to show the bulk change capabilities of the API's exporting and importing of CX Model JSON.
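The export-then-bulk-edit-then-import pattern this sample demonstrates can be sketched as below. The "psst" field name and the JSON shape are assumptions for illustration; the real CX Model JSON schema comes from the API's export endpoint.

```java
// Sketch of the export -> bulk edit -> import pattern for CX Model JSON.
public class BulkPsstEdit {
    // Rewrite every "psst" value in an exported CX Model JSON string.
    static String setAllPsst(String modelJson, int newPsstMs) {
        return modelJson.replaceAll("\"psst\"\\s*:\\s*\\d+", "\"psst\":" + newPsstMs);
    }

    public static void main(String[] args) {
        // A toy stand-in for an exported CX Model with two menus.
        String exported = "{\"menus\":[{\"name\":\"Main\",\"psst\":3000},{\"name\":\"Billing\",\"psst\":5000}]}";
        // After editing, the result would be re-imported via the API.
        System.out.println(setAllPsst(exported, 4000));
        // prints {"menus":[{"name":"Main","psst":4000},{"name":"Billing","psst":4000}]}
    }
}
```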
A program to completely automate the creation and scheduling of your Pulse campaigns. Will calculate the most efficient schedule and the most efficient port usage.
Uses two classes as part of its automation:
- Used to create a scheduled campaign run with test cases that have the same frequency and fall under the same runtimes. While this class in and of itself doesn't create a campaign or insert itself into one, we use it in SchedulePulseCampaigns to build and schedule campaigns with the most efficient combinations and schedules.
- Used for test cases that are currently supposed to run for the currentDate we're on inside SchedulePulseCampaigns. If a test case is supposed to run based on the currentDate, we create an object of this class which contains all the relevant information about that test case and its runtime.
As part of the prerequisites, put all your Pulse test cases under the same parent test case folder. You will also want to find your Pulse plan ID. You can find this by logging into the Cyara Portal, clicking on the Administration header, and clicking on Account Details. From there, scroll to the bottom of the page where your plans are and click on your Pulse plan. The plan ID will be at the end of the URL. At the top of the program, update the pulseFolder and pulsePlanId variables with these values, respectively.
You will also need to add to each of their descriptions the JSON schedule for that specific test case. "minuteDuration" is how many minutes this test case takes to run for a single call. "frequency" is how often you want this test case to run (this will be used for your Run Every in the Pulse campaign. If this test case is part of a Regression Monitoring suite, set this to how long that suite should take to run). "schedule" is the array for days and times this test case should run. An example JSON has been provided below:
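One plausible shape for that description JSON, using the three fields named above, is sketched here. The day and time formats inside "schedule" are assumptions; match whatever format the SchedulePulseCampaigns sample actually parses:

```
{
  "minuteDuration": 2,
  "frequency": 15,
  "schedule": [
    { "days": ["Monday", "Tuesday"], "start": "09:00", "end": "17:00" }
  ]
}
```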
This program allows you to fire off consecutive Cyara test cases. After all your specified test cases have run, the program will print out the results, highlighting any runs that failed.
This one doesn't use Java; instead, it's C#. It will create a backup folder with today's date on your local machine containing a fully replicated, mirror image of your Cyara account. It backs up your test cases, blocks, CX Models, campaigns, and datasets. It is intended to be run daily on a server using Windows Task Scheduler.
A PowerShell script that performs the same calculations Cyara Customer Delivery uses to make sure your load tests are successful. Will you hit the desired port count? Will you achieve the minimum number of calls? Are your CAPS correct? This script will automatically verify all your requested settings and more, including calculating your ramp-up time and automatically building all the campaigns for your test.
Need a little more instruction? Talk to your Cyara Representative today for API training, and we'll get you up and running in no time. We can't wait to see what you build!