Learn how to create and parameterize a script to start load testing today!

Today’s browsers have developer tools that can capture all network traffic. For Chrome and Firefox, the traffic can be saved as an HTTP Archive file (HAR). Internet Explorer saves the same data in an XML format. LoadStorm can use either recording format to make test scripts.

These recordings contain HTTP tracing information about each and every request made to a server and the response to that request. These archives are frequently used to gather timing data about objects being loaded by a web browser, which can then be used to pinpoint performance problems. LoadStorm automatically converts the recordings into test scripts, going through each request like a checklist and sending identical requests to the target server.
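For context, a HAR file is simply a JSON document: its log.entries array holds one object per request/response pair, which is the "checklist" LoadStorm walks through. As a rough sketch (not LoadStorm code; the file name is a placeholder), you can inspect those entries yourself in Python:

  # Minimal sketch: walk a HAR recording the way a replay tool would (file name is hypothetical).
  import json

  with open("recording.har") as f:
      har = json.load(f)

  for entry in har["log"]["entries"]:                # one entry per request/response pair
      request = entry["request"]
      print(entry["startedDateTime"], request["method"], request["url"],
            entry["response"]["status"], entry["time"])   # time = total milliseconds for the entry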

Browser Recordings

Make a Recording using Chrome

UPDATE: Latest versions of Chrome do not save form data when recording. Please use Firefox as an alternative when interacting with forms.

Watch Example

1. With a regular Chrome window already open, click the Three-Bar menu at the top right and choose New Incognito Window.

2. Right-click the empty space and choose Inspect Element to open the developer tools.

3. Switch to the Network tab in the developer tools.

4. Make sure the record circle is red, and check the Preserve Log option.

5. Navigate to your website and begin taking the actions that represent a real user interacting with your application. As you perform actions and load pages, every request made is recorded.

6. Once you’ve reached the end of your activity, right-click anywhere in the timeline and choose Save as HAR with content.

7. Name your HAR and save it. Now you are ready to upload it to LoadStorm to use as a script.

Make a Recording using Firefox

UPDATE: Firefox no longer requires any plugin to record and save HAR files.

Watch Example

1. With a regular Firefox window already open, click the Three-Bar menu at the top right and choose New Private Window.

2. Right-click the empty space in the window and choose Inspect Element to open the Firefox developer tools.

3. Switch to the Network tab in the developer tools.

4. Navigate to your website and begin taking the actions that represent a real user interacting with your application. As you perform actions and load pages, every request made is recorded.

5. Once you’ve reached the end of your activity, right-click anywhere in the request log and choose Save All As HAR.

6. Name your HAR and save it. Now you are ready to upload it to LoadStorm to use as a script.

7. When uploading it to LoadStorm, consider choosing the second upload option, which inserts page breaks wherever there is a delay of 2 seconds or more, because Firefox does not currently group requests by page.

Make a Recording using Internet Explorer 11

IE cannot export a HAR file, but it can save as a HAR-style XML file. LoadStorm is capable of reading these files.

NOTE: For those interested in using the new Microsoft Edge browser instead of IE 11, we recommend against it. There are still several quirks that make it more difficult to use than IE 11, but if you wish to give it a try, please reach out to [email protected] to request a demonstration.

Watch Example

1. With a regular IE window already open, press Ctrl+Shift+P to open a new InPrivate window.

2. Open the developer tools by pressing the F12 key.

3. Switch to the Network tab by clicking the icon on the left that looks like a wireless router.

4. Click the green play button (“Enable network traffic capturing”) so that it changes to a red square. The “Always refresh from server” option is on by default.

5. Navigate to your website and begin taking the actions that represent a real user interacting with your application. As you perform actions and load pages, every request made is recorded.

6. Once you’ve reached the end of your activity, click the floppy disk icon to “Export captured traffic”.

7. Name your XML file and save it. Now you are ready to upload it to LoadStorm to use as a script.


Recording API calls

Manually create requests with Postman and record them in Chrome

There is a handy app called Postman that allows you to manually create requests. With this you can mimic mobile app requests by interacting with REST or SOAP APIs. Postman is designed with REST requests in mind, but for more info on how to use it for SOAP requests please visit the Postman blog.


To launch the Postman app:

  • Open a new tab
  • Click the Apps icon in your bookmarks bar, or navigate to chrome://apps
  • Click the Postman icon

Once Postman has been launched, you’ll need to open the developer tools to record the requests you manually create (a rough code equivalent of such a request is sketched after the list below).

  • Open the developer tools by right-clicking anywhere within the Postman window, and selecting Inspect Element
  • Switch to the Network tab in the developer console and check the Preserve log box
  • Begin customizing your GET or POST request with any URL params (a.k.a. query strings), Headers, and content (such as form-data, files, or raw text like JSON)
  • Click the Send button and observe the POST request in your developer console
  • Right-click anywhere in the request log and choose Save as HAR with Content
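For comparison, the request Postman builds for you can also be expressed in a few lines of code. This is only a rough sketch under assumed values: the endpoint, parameters, headers, and form fields below are hypothetical.

  # Hedged sketch of a Postman-style POST: URL params, headers, and form-data (all values hypothetical).
  import requests

  response = requests.post(
      "https://api.example.com/v1/login",                 # hypothetical API endpoint
      params={"locale": "en"},                            # URL params (query string)
      headers={"Accept": "application/json"},             # request headers
      data={"username": "user1", "password": "secret"},   # form-data content
  )
  print(response.status_code, response.text[:200])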

Video Tutorial

This video is a short guide on recording a HAR using Postman in combination with the Chrome developer tools.

Also check out Ruairi Browne’s Postman tutorial covering RESTful API GETs and POSTs to Twitter.

Note: In his video he shows how to use OAuth, but LoadStorm does not support the generation of dynamic OAuth signatures.

Capturing Mobile Traffic

Packet capturing on Android devices

Note: This method is rarely used because API calls are often the best approach, and in many cases the app is sending encrypted packets which prevent the PCAP to HAR converter from generating a HAR file with the proper requests.

For Android devices, this method works as follows:

  1. Install tPacketCapture or other mobile app for capturing packets.
  2. Close any other apps that you have running to avoid unnecessary packets.
  3. Open the packet capturing app and click the Capture button.
  4. Open the mobile app you wish to record, and begin emulating user behavior.
  5. When you’re done emulating user behavior, swipe your notifications area open to click the “VPN is activated by tPack..” message.
  6. Click the Disconnect button to stop capturing packets.
  7. Switch back to the tPacketCapture app to open the File list tab.
  8. Select the PCAP file you just created, and share it using email (or method of your choice).
  9. Convert the PCAP into a HAR using the PCAP Web Performance Analyzer, or your own pcap2har converter.
    Note: If you use the PCAP Web Performance Analyzer you should uncheck the Remove cookies option.
  10. Upload the HAR to LoadStorm.
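To give a feel for what a pcap2har-style converter (step 9) has to do, the sketch below pulls plain-HTTP request lines out of a capture using the scapy library. It is illustrative only, the file name is a placeholder, and as noted above it cannot recover encrypted traffic.

  # Illustrative only: list plain-HTTP request lines found in a PCAP (encrypted traffic will not appear).
  from scapy.all import rdpcap, TCP, Raw    # requires scapy to be installed

  packets = rdpcap("capture.pcap")          # placeholder name for the capture shared from the device
  for pkt in packets:
      if pkt.haslayer(TCP) and pkt.haslayer(Raw):
          payload = bytes(pkt[Raw].load)
          first_line = payload.split(b"\r\n", 1)[0]
          if first_line.startswith((b"GET ", b"POST ", b"PUT ", b"DELETE ")):
              print(first_line.decode(errors="replace"))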


Packet capturing for iOS devices

Note: This method is rarely used because API calls are often the best approach, and in many cases the app is sending encrypted packets which prevent the PCAP to HAR converter from generating a HAR file with the proper requests.

For iOS devices:

At this time, packet capturing mobile apps (like tPacketCapture) are only offered on Android devices. Apple products do not support direct packet capture services. However, if you connect your iOS device to a Mac via USB then you can use OS X supported software (such as Cocoa Packet Analyzer) to capture the packets, as described on the Apple developer site.

One example of how to accomplish this:

  1. Download and install Cocoa Packet Analyzer from the developer site. (The App Store version removes the packet capturing option even though Apple recommends this approach, so avoid the App Store version or use different software.)
  2. Open your System Preferences, click Sharing, and enable Internet Sharing on your Mac to share your Ethernet connection with devices connected to your Mac’s Wi-Fi.
  3. Have your mobile iOS device join the shared Wi-Fi network hosted by your Mac.
  4. Close all unnecessary apps in your mobile device and have it on the correct screen ready to launch the app you wish to test.
  5. Run Cocoa Packet Analyzer on the Mac.
  6. Click the Capturing option in Cocoa Packet Analyzer.
  7. The capturing menu will require you to select a capture interface, which lists all of your connections (if the drop-down is empty, click the refresh icon to the right of it). Generally, your newly shared Wi-Fi will be on “en1” with an IP address that you can double-check from your Mac’s Network screen in System Preferences as the shared Wi-Fi’s self-assigned IP.
  8. Click the Start button to begin capturing.
  9. Launch your app on the mobile device and begin taking actions for the user behavior you wish to record. You should see packets being captured that are going back and forth between the mobile app and the app server.
  10. When you’re finished recording your app’s user behavior click stop to end the packet capture.
  11. Save the captured packets as a PCAP file.
  12. Convert the PCAP into a HAR using the PCAP Web Performance Analyzer, or your own pcap2har converter.
    Note: If you use the PCAP Web Performance Analyzer you should uncheck the Remove cookies option.
  13. When the conversion is complete you should see a waterfall view of your new HAR file, and you’ll need to click the Download HAR link above it. Please note that all of the requests will be treated as being a part of page 0 since mobile apps are not the same as websites.
  14. Upload the HAR to LoadStorm.



A script is a HAR recording that has been uploaded and automatically processed by LoadStorm. A script can be used as a base for parameterization and customization with user data. Scripts are given default settings for think time, general timeouts for pages and requests, resource caching behavior, CSRF handling, and user-agent details.

How to Upload a Recording

Watch Example

1. Click Build in the left navigation, and you should be on the Scripts tab.

2. Now click the Upload Recording button. The Page Insertion Controls window will appear.

3. Select the type of page insertion behavior you would like applied to the script.

Page Insertion Details

When a recording is uploaded, a page is defined by the collection of requests that are associated with it. However, there are many cases when, rather than navigating to a different URL, the page is populated with new information through the use of AJAX. A good example of this is the way the Facebook news feed loads additional content. To realistically simulate user behavior, LoadStorm can recognize the delayed requests, and insert a new “page” so that user think time can be applied to it.

4. Click Select Recording, then choose a HAR/XML file from your system and click Open.

5. LoadStorm will automatically process the recording and execute the newly created script. The newly uploaded script will open automatically in a new sub tab.

Occasionally, recording a HAR file will generate unusual syntax within the file, and the upload will fail to process the file. We recommend you rebuild the recording and try to upload it again. If the problem persists, you can contact customer support via email [email protected] and we’ll try to repair it. Be sure to include the HAR files.

Watch Example

Executing a script will attempt to make each request as a single VUser, applying and saving any pending changes or parameterizations made to the requests within it. This helps to ensure that the responses for the requests are what you are expecting.

1. Click the Execute Now button at the top right of the page.

2. Once it’s complete, summary details and error information can be viewed in the script Overview. This script is now available for use in load tests.

Below are descriptions of the functions of each script sub-tab.

>Overview

The Overview tab provides a summary of the script execution details; including the upload time, number and types of requests, request response information, and any set/pending parameters. You can also name the script and add notes to describe it.

1. Double-click on a script to open it in a new tab within LoadStorm.

  1. Click on any of the parameters, errors, or transactions to navigate to them directly.
  2. Hover over the pie charts to show the Types and Status Codes details and the number of requests that correspond to them.
  3. Any modification or validation you make to a request will be displayed on the script overview as a pending change.
  4. To make a pending change into an added parameter, execute the script.
  5. Each time the script is executed, every request is run and any changes made are applied and saved.
  6. To reverse pending changes, click the Revert button at the bottom of the page.

>Parameterize

Here you can view request details, edit default script settings, and make modifications to the requests in the script. See the Parameterization section of our Learning Center for detailed instructions on how to utilize the different modify and validate options.

Watch Example

Request Table Filter Controls

The request table contains a list of every request made in the recording, in the order they were made. The script filter controls allow you to search through and find particular types of requests more easily.

1. Click on any one of the column headers to sort it.

2. Type values into the filter resource field.

3. Use the drop-downs to filter by the server, page, or mime-types.

4. The Recording vs. Last drop-down can be used to filter by status code mismatch, size difference, or time difference.

5. Select any of the radio buttons to filter by post/query requests, all errors, or specific types of errors.

6. Select the drop-down option to the far right of the column headers (small grey arrow) to make different columns visible, such as URL and mime-type, to name a few.

7. Use the Clear Filters button to clear all filter selections.

Double-click a request in the table to view the request details such as request headers, request content, response headers, response content, and timings. Each tab has details that can be seen in a side-by-side comparison between the values found in the HAR recording and the most recent execution of the script. This can be especially useful when determining which requests need to be parameterized, and why. See our Parameterization section for more detail.

Scripts are processed with default settings automatically. To view or change the default script settings, click on the Settings button from the Parameterize tab.

Think Time is the amount of delay between different pages. This simulates what happens with a real user as he or she reads the page received before clicking again. We recommend examining your server logs to get an estimate of how much time users spend viewing different pages.

Timeouts are a LoadStorm imposed time limit for a request. Setting timeouts for requests and pages allow you to decide what the acceptable amount of time to request content is before deeming the request (or page) as failing. By default, the timeout value for every page is set to 90 seconds, and the timeout value for every request is set to 15 seconds. This means that during a test run, if a page takes over 90 seconds, or a request takes over 15 seconds, the page or request will be reported as a timeout error.

Resource Caching Behavior

LoadStorm allows you to configure caching behavior. By default, the typical caching behavior seen in a browser is applied, meaning that subsequent requests for static resources are cached. You can also choose to either request the resources as they were recorded (i.e., if they were cached in the recording, they will be cached in the load test), or use no caching at all.

Default CSRF handling

Cross-site request forgery (CSRF) occurs when unauthorized commands are transmitted from a user who is trusted by a website. LoadStorm automatically detects hidden form inputs in the response content of requests, such as CSRF tokens. If the hidden input has a value that is not empty, it is stored and passed on to subsequent requests that utilize the same hidden form input in their request content (such as a POST or PUT request).
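As a rough illustration of the idea (this is not LoadStorm’s internal code; the page, field name, and credentials are made up), a hidden token found in a prior response is carried into the request content of the next POST:

  # Simplified illustration of CSRF token passing (hypothetical page, field name, and credentials).
  import re
  import requests

  session = requests.Session()
  page = session.get("https://example.com/login")   # a prior request whose response contains the form

  # look for a hidden input such as: <input type="hidden" name="csrf_token" value="abc123">
  match = re.search(r'name="csrf_token"\s+value="([^"]+)"', page.text)
  token = match.group(1) if match else ""

  # the non-empty hidden value is reused in the subsequent POST's form content
  session.post("https://example.com/login",
               data={"username": "user1", "password": "secret", "csrf_token": token})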

User-Agent Details

When a webpage is requested, your browser sends a string in the user-agent request header to the server hosting the site. The user-agent string identifies the requesting software and contains information regarding the browser used, its version, and additional system details, such as the operating system. This information can be used by the server to provide content that is specific to your browser and device.

By default, LoadStorm uses the same value for the user-agent that was sent in the original recording, which contains the system details it was recorded on (browser, version, browser mode, etc). Most web browsers format the user-agent value as follows:

Mozilla/[version] ([system and browser information]) [platform] ([platform details]) [extensions]

>History

A script’s history tab contains information regarding previous executions. Each time a script is executed, a different version is added to the history tab. To open an older version of a script for editing, select the Open as new script button.

>Manage Servers

The Manage Servers tab shows any servers associated with the script. Third-party servers that are not providing important functionality to your application should be ignored or archived to prevent unnecessary traffic being sent to them. For example, advertisements, analytics, and other third-party services may not be expecting your load test or the requests may skew data for your account with that service.

Please note that by ignoring a server, any requests normally sent to that server will not be sent.

1. Right-click the server you want to ignore.

2. Choose the Ignore option in the context menu.

To Archive a Server

Archiving a server works the same as ignoring a server, but it’s a separate category. This way you can separate the servers you may never need to test from servers that you may not want to test at the moment. To reiterate, archiving a server means it is ignored. Any requests normally sent to that server will not be sent.

1. Click the server you want to archive to select it.

2. Click the Archive button.

>Transactions

Transactions are a way to track the success, failure, and performance of groups of requests within your script. Here you can view, edit, and delete the transactions that you’ve added to the script. See the Parameterization section for detailed instructions on how to create a transaction.

To understand how LoadStorm measures the completion time for a transaction, please expand the Waterfall section below this for a visual reference. Using the requests in that image as an example, consider Page 1 as a transaction. The completion time is measured from the start of the first request, but Page 1 is not considered complete until the slowest request is finished. In this case, it is the 3rd request from the last. In other cases, the transaction may complete at the end of the last request.

>Waterfall

The Waterfall view provides a detailed visualization of the timings for each request during the script execution, and allows you to expand or collapse details on individual requests.

>Execution Log

The Execution Log provides a detailed list of actions that show how parameterizations were applied during the script execution.

A script is used to simulate typical user interactions with a web application, but in the beginning it is just instructions for replaying the requests that were recorded in the HAR file you uploaded. Parameterizations add a bit of intelligence to the script, or enhance these interactions by making them unique to each VUser. Sometimes a script needs parameterizations that will tell it where to find dynamic data such as sessionIDs, authorization tokens, or viewstates. However, the main reason for parameterization is so that virtual users can easily utilize unique login information, browse for a variety of products, visit different URLs, ignore certain requests, monitor response size and content, and much more. This section will take you through the basics of parameterizing a script.

If you’re unsure where to look for elements that may need parameterization, please visit our Tips and Tricks to Troubleshooting PRO Scripts.

Default Script Settings

Scripts are processed with default settings automatically, but you can customize these settings.

Settings

To view or change the default script settings, make sure you’re on the Parameterize tab and click the Settings button near the top-right.

Page Think Time is a range of random delay after each page. This simulates what happens with a real user as he or she reads the page received before clicking again. Your server logs can help you estimate how much time users spend viewing different pages.

Timeouts are a LoadStorm imposed time limit for a request. Setting timeouts for requests and pages allow you to decide what the acceptable amount of time to request content is before deeming the request (or page) as failing. By default, the timeout value for every page is set to 90 seconds, and the timeout value for every request is set to 15 seconds. This means that during a test run, if a page takes over 90 seconds, or a request takes over 15 seconds, the page or request will be reported as a timeout error.

Resource Caching Behavior

LoadStorm allows you to configure caching behavior. By default, the typical caching behavior seen in a browser is applied, meaning that subsequent requests for static resources are cached. You can also choose to either request the resources as they were recorded (i.e., if they were cached in the recording, they will be cached in the load test), or use no caching at all.

Default CSRF handling

Cross-site request forgery (CSRF) occurs when unauthorized commands are transmitted from a user who is trusted by a website. LoadStorm automatically detects hidden form inputs in the response content of requests, such as CSRF tokens. If the hidden input has a value that is not empty, it is stored and passed on to subsequent requests that utilize the same hidden form input in their request content (such as a POST or PUT request).

User-Agent Details

When a webpage is requested, your browser sends a string in the user-agent request header to the server hosting the site. The user-agent string identifies the requesting software and contains information regarding the browser used, its version, and additional system details, such as the operating system. This information can be used by the server to provide content that is specific to your browser and device.

By default, LoadStorm uses the same value for the user-agent that was sent in the original recording, which contains the system details it was recorded on (browser, version, browser mode, etc). Most web browsers format the user-agent value as follows:

Mozilla/[version] ([system and browser information]) [platform] ([platform details]) [extensions]

Modify Options

Below are descriptions of each modification button that allow you to parameterize a request.

Form

1. Click on the request you wish to modify to select it. (Hint: Forms are usually submitted by POST method rather than GET.)

2. If the highlighted request has a form, the Form button below the request table will become enabled. Click the Form button.

3. This will open the Edit Form window. To change an item submitted by this form, choose an option from the corresponding Modification drop-down:

  1. No Change: No action is taken, the value from the recording is the value used in the script, or the value of any CSRF tokens that we parse from prior response text. This is the default action.
  2. Constant: Change the value to a new constant value.
    1. A text box will appear under New Value.
    2. Enter the new desired value in this field.
  3. Custom: Use values from user data files, form inputs, response headers, response text, or cookies to populate this field.
    1. Click the Select Data button to select custom data to pull values out of. (Check the section below for more information on Selecting Custom Data.)
    2. The New Value column will show the file name and column of the data you chose.
  4. Delete Value: Send this parameter with a blank value.
  5. Delete Parameter: Remove this parameter from the request entirely.

4. To add new fields to the form submission, specify a new field name and value in the text boxes at the bottom of the Edit Form window and click Add Field. Once the field has been added, you can make the same modifications as above.

Query String

1. Find the request you wish to change and highlight it by clicking on it.

2. Click the Query String button. It should be clickable and not greyed out.

3. For each value in the query string, choose one of the following under the Modification column:

  1. No Change: No action is taken, the value from the recording is the value used in the Script. This is the default action.
  2. Constant: Change the value for all VUser sessions
    1. A text box will appear under New Value.
    2. Enter the new desired value in this box.
  3. Custom: Use values from user data files, form inputs, response headers, response text, or cookies to populate this field.
    1. Click the Select Data button to select custom data to pull values out of. (Check the section below for more information on Selecting Custom Data.)
    2. The New Value column will show the file name and column of the data you chose.
  4. Delete Value: Send this parameter with a blank value.
  5. Delete Parameter: Remove this parameter from the request entirely.

4. To add new fields to the query string, specify a new field name in the text box at the bottom of the Edit Form window and click Add Field. Once the field has been added, you can make the same modifications as above.

Payload

1. Find the request you wish to change and highlight it by clicking on it. A quick way to find requests with a payload is to filter for Post requests.

2. Click the Payload button.

3. Type in the substring you wish to replace in the “Substring to Find” column field. The string in the payload preview area below will be highlighted.

4. Enter the replacement string in the “New String” column field. In our example we type “username”. The string to be replaced is highlighted.

5. For the new string, choose one of the following under Modification:

  1. Constant: Change the value for all VUser sessions
  2. Custom: Use values from user data files, form inputs, response headers, response text, or cookies to populate this field.
    1. Click the Select Data button to select custom data to pull values out of. (Check the section below for more information on Selecting Custom Data.)
    2. The New Value column will show the file name and column of the data you chose.

Request Header

1. Find the request you wish to change and highlight it by clicking on it.

2. Click the Request Header button.

3. For each Request Header, choose one of the following under the Modification column:

  1. No Change: No action is taken, the value from the recording is the value used in the Script. This is the default action.
  2. Constant: Change the value for all VUser sessions
    1. A text box will appear under New Value.
    2. Enter the new desired value in this box.
  3. Custom: Use values from user data files, form inputs, response headers, response text, or cookies to populate this field.
    1. Click the Select Data button to select custom data to pull values out of. (Check the section below for more information on Selecting Custom Data.)
    2. The New Value column will show the file name and column of the data you chose.
  4. Delete Value: Send this parameter with a blank value.
  5. Delete Parameter: Remove this parameter from the request entirely.

4. To add new request headers, specify a new field name in the text box at the bottom of the window and click Add Field. Once the field has been added, you can make the same modifications as above.

Cookie

1. Select the request you wish to change the cookie for to highlight it.

2. Click the Cookie button.

3. Select the type of change you would like to apply from a drop down in the Modification column:

  1. No Change: No action is taken, the values are set from the response headers of prior requests in the Script. This is the default action.
  2. Constant: Change the value for all VUser sessions
    1. A text box will appear under New Value.
    2. Enter the new desired value in this box.
  3. Custom: Use values from user data files, form inputs, response headers, response text, or cookies to populate this field.
    1. Click the Select Data button to select custom data to pull values out of. (Check the section below for more information on Selecting Custom Data.)
    2. The New Value column will show the file name and column of the data you chose.
  4. Delete Value: Send this parameter with a blank value.
  5. Delete Parameter: Remove this parameter from the request entirely.

To add a new cookie, specify a new field name in the text box at the bottom of the Edit Form window and click Add Field. Once the field has been added, you can make the same modifications as above.

Server

Changes only the server part of the URL.

1. Highlight a request and click the Server button below the request table. The pop-up window will allow you to edit the server portion of the URL.

2. To replace the server, choose one of the following:

  1. Constant: Change the value to a new constant value.
    1. Enter the new desired value in this box.
  2. Custom Data: Use values from user data files, form inputs, response headers, response text, or cookies to populate this field.
    1. Click the Select Data button to select custom data to pull values out of. (Check the section below for more information on Selecting Custom Data.)
  3. The preview window will show a highlighted URL value for the new server you chose.

URL

1. Highlight a request and click the URL button below the requests table.

2. Copy the part of the URL you wish to replace and paste it into the Substring field. That part will be highlighted in the “Original/Sample” URL at the top of the window.

3. You can change the modification drop-down to one of the following:

  1. Constant: Changes the selected part of the URL to whatever is typed into the New String field.
  2. Custom Data: Use values from user data files, form inputs, response headers, response text, or cookies to populate this field.
    1. Click the Select Data button to select custom data to pull values out of. (Check the section below for more information on Selecting Custom Data.)

Timeout

1. Select a request and click Timeout button below the request table. Editing timeout values in this way will only modify the timeout value for the highlighted request. If you wish to change the timeout value for every request, use the general script Settings.

2. Enter a value in the pop-up window to set the number of seconds you would like to allow the request to take before it reports a timeout error. Close the window to save the change.

Skip

To skip a request, click on a request and click the Skip button below the request table. To unskip the request, select the request again and click the Unskip button, or delete the rule from the modifications table at the bottom. You may also skip many requests at once by selecting a group using the shift key and left-click, or the Ctrl/Command key and left-click.

Transaction

A transaction can be created to measure the performance of a select group of requests within a load test, such as user login, website search, or purchase process. Transaction results are reported separately in load test results, and include details on the number of completed transactions, average and peak completion time (without think time applied), total internal requests, and the amount and percentage of failing transactions. A transaction fails if any of its internal requests contain errors, or if it violates the failure criteria selected during transaction creation (for example, by exceeding a timeout limit).

1. Select the first request and then hold down the shift key to select the last request of the range you would like to create a transaction for. The requests must be contiguous, meaning they cannot skip requests and they must be in order. Click the Transaction button.

2. If you would like the transaction to report a failure based on a timeout limit for the transaction, or any content or size validation failures, select that error setting option.

3. Closing the Transaction window will save the transaction automatically.

4. To view, edit or delete a transaction, click on the Transaction tab and select the transaction.

Additional Information on Modifications

These are other useful things to know when adding modifications that could save you time or get you to exactly what you need.

Editing an Existing Modification

1. Each modification made to a script adds a new row to the modifications panel (the table at the bottom of the screen). To modify an existing change, click the Edit button for the parameterization you want to modify.

2. To remove a change, click the Delete button in the row you want to remove.

Modifying Multiple Selected Requests

You can modify multiple requests simultaneously. To select multiple sequential requests left-click the first desired request and hold the shift key then left-click the last desired request. To select non-sequential requests hold the CTRL or command key while left-clicking each desired request. Then, select the type of modification you would like to make.

Note: Not all types of modifications apply to every request. If a modification button is not enabled, it’s because one or more of the requests selected are not applicable for that type of modification.

Once you’ve created a multi-request modification you may not remember which requests it applies to. You can check this easily by clicking the Multiple Requests button under the Applies To column for the modification you’re interested in. This will display a window with a table that only shows the requests involved in this modification.

Selecting Custom Data

A script can be modified to replace requests (or only some parts of requests) with custom data. This custom data can include login information, search terms, different URLs, or even session tokens. Requests can be parameterized from 6 different custom data options to simulate more realistic traffic.

1. First, select the request you would like to modify and click the type of modification you would like to make. Once the modification window is open, change the drop-down or radio button to Custom if needed, and click the Select Data button to bring up the Custom Data Selection window.

2. At the top you have the option of adding a prefix value to prepend to your custom data, such as a word and a space like “Token ” used in some forms of authentication.

3. Data may be selected from 6 different options:

I. User Data is the data uploaded to LoadStorm as a CSV file. During a load test, each VUser will select a different value from the user data column for any fields in a request marked “Custom”. User data files can be uploaded ahead of time, or at the time of parameterization.

  1. To select User Data, click the Choose Column button for the dataset you would like to use.
  2. Click on the column you would like to use. It will become green to indicate it is selected.
  3. Close the window to Save your selection.

II. Generated Data is created within LoadStorm. You can generate data ahead of time, or at the time of parameterization. Selecting generated user data works the same as selecting uploaded user data.

  1. To select Generated Data, click the Choose Column button for the dataset you would like to use.
  2. Click on the column you would like to use. It will become green to indicate it is selected.
  3. Close the window to Save your selection.

III. Form Inputs are retrieved from all forms that are in the response content of successful requests, prior to the request being modified. Although this is rarely used, an example for this type of parameterization could occur when there is an authenticity token that is needed for a query string value in a later request.

Note: Typically, authenticity tokens will automatically be passed from form values in response content to form values in form posts by our default CSRF handling.

  1. To select form input value, click the row of the form field you would like to use.
  2. Close the window to Save your selection.

IV. Response Header data includes values from the response headers of prior requests. A typical example would be a request URL that needs to be dynamically set by the Location header in the response headers of a prior request.

  1. To select a response header value, click the row of the response header field you would like to use.
  2. Additional Options:
    • You can add a Prefix using the field at the top.
    • Using the Start/End String Delimiters you can grab something specific within the response header value.
  3. Close the window to Save your selection.

V. Response Text data includes all the response data from prior responses that are text types (html, json, xml, etc). By default, the selected response is from the last occurrence of response text from prior requests. A typical example of this type of parameterization is when a dynamic string is returned in the response text and must be used to modify a value in a subsequent request.

  1. To select a response text value, choose the selected response to use from the drop-down.
  2. To select a string from the response, type in the Start String Delimiter (the set of characters immediately before the string you wish to select).
  3. Enter an End String Delimiter (the text immediately after the string you wish to select) to indicate where to stop the selection.
  4. If more than one value is found using the delimiters you’ve chosen, then change the position drop-down menu to choose which value you want to use.
  5. Close the window to Save your selection.
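The Start/End String Delimiter selection is essentially a substring capture between two markers. A rough sketch of the idea, with a made-up response and delimiters (and assuming the delimiters are actually present):

  # Rough sketch of start/end delimiter extraction (made-up response text and delimiters).
  response_text = '<input id="viewstate" value="dGhpcyBpcyBkeW5hbWlj" /> ...'

  start_delim = 'id="viewstate" value="'
  end_delim = '"'

  start = response_text.find(start_delim) + len(start_delim)   # assumes the start delimiter is present
  end = response_text.find(end_delim, start)
  captured = response_text[start:end]   # the dynamic value to reuse in a subsequent request
  print(captured)                       # dGhpcyBpcyBkeW5hbWlj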

VI. Cookie data includes the cookies that are set from prior responses. A typical example of this type of parameterization is when a dynamic string such as a JSESSION ID is returned in a cookie and must be used to modify a URL in a subsequent request.

  1. To select a cookie value, click the row of the cookie field you would like to use.
  2. Close the window to Save your selection.

Validation Buttons

Below are descriptions of each validation button. These validations check response text or size, allowing you to flag conditions as errors that would normally go unnoticed during a test run.

Response Content

Particular requests can be checked for response content in order to verify that the application under test is functioning as expected.

1. First, select the request you wish to check.

2. Next to Validate, click the Response Content button.

3. There are two methods that can be used to validate response content:

  1. Fail if text is not contained in response: If you must have a certain string from your response, you should input that string in this field.
    1. Any match that exists in the Last script response text will be highlighted green.
    2. If the text is not found during a test, an error is reported to LoadStorm as a Response Text Exclusion Validation Failure.
  2. Fail if text is contained in response: If you must not have a certain string appear in your response, you should input that string in this field.
    1. Any match that exists in the Last script response text will be highlighted red.
    2. If the text is found during a test, an error is reported to LoadStorm as a Response Text Inclusion Validation Failure.

Validate Response Size

1. Particular requests can be checked for their response size. First select the desired request you wish to check.

2. Next to Validate, click the Response Size button.

3. There are several ways to validate the response size:

4. Validation size type:

  1. Percentage: Determine the boundaries for a response “failure” based on percent difference of size from the Last script response size.
  2. Byte Range: Determine the boundaries for a response “failure” based on actual value of size in bytes.
  3. Exact Match: Unless the response size is exactly the same as Last script response size, this will report an error.

User data is information that VUsers can use to customize each script they run through. Whenever a modification type is set to “Custom”, VUsers will replace what was originally there with a value from the selected user data table. For example, user data can contain a table of usernames and passwords that VUsers can utilize to log in to the target website. This allows for a more realistic test than having each VUser log in as the same person every time. User data can also be used to store things like URLs, search terms, forum comments, credit card information, or anything else to make each VUser unique.

User data can either be generated within the system, or uploaded from your own CSV (Comma Separated Values) file.

Which type of user data should you use?

CSV data – best used for specific data that needs to match values that already exist in the target server.

  • login credentials for existing test accounts
  • SKUs for products
  • tokens that represent something specific in the application
  • search terms for existing items

Generated data – great for non-specific data that doesn’t match values already existing in the target server.

  • sequential integers for login credentials of new account signups
  • random numbers for phone numbers or cryptographic nonce
  • random text for addresses or comments
  • random date/time

How are Vusers assigned to rows in User Data?

Each time a script is run in a load test, a different row of the user data file is utilized by a Vuser. The order in which a Vuser selects a row of data depends on the number of Vusers in comparison to the rows in the file.

1. If number of rows is greater than number of Vusers:

If the number of rows in a CSV is greater than the peak concurrent Vusers, there will never be a time that two Vusers are utilizing the same row of data simultaneously. Let’s say we have a test with peak users set to 3 and a user data file with 5 rows. The Vusers iterate through the user data file in the following pattern:

  VUser   User Data Row
  1       1
  2       2
  3       3
  1       4
  2       5

The third Vuser will get to the end of the file and see no more rows so it goes back to its assigned row, row #3. The Vusers will be assigned to the rows with this pattern for the duration of the test.

2. If number of Vusers is greater than number of rows:

Vuser assignment works slightly differently when you have more peak users than rows. Now let’s say we have a test with 5 peak users and only 3 rows of user data. The Vusers iterate through the user data file in the following pattern:

  VUser   User Data Row
  1       1
  2       2
  3       3
  4       1
  5       2
  1       3
  2       1
  3       2
  4       3

So with more users than user data, Vusers will grab rows in a cyclical fashion.
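The second table (more VUsers than rows) amounts to handing out rows cyclically as script runs occur in turn. A tiny sketch that simply reproduces that pattern for illustration, assuming runs happen in strict VUser order:

  # Reproduce the second table (5 peak VUsers, 3 rows): rows are handed out cyclically per script run.
  peak_vusers, total_rows = 5, 3
  assignments = []
  for run in range(9):                       # nine consecutive script runs, VUsers taking turns in order
      vuser = (run % peak_vusers) + 1
      row = (run % total_rows) + 1
      assignments.append((vuser, row))
  print(assignments)   # [(1, 1), (2, 2), (3, 3), (4, 1), (5, 2), (1, 3), (2, 1), (3, 2), (4, 3)]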

Upload a CSV data file

1. Click on the Build menu, then on the User Data tab.

2. Click on New Data to select a CSV file. The first row of the selected CSV file will be treated as column headers.

3. Double-click on the file, or right-click it and choose open, to view data or edit certain features of the CSV such as title, notes, or the column name setting.

Generate User Data

Watch Example

1. Click on the Build menu, then on the User Data tab.

2. Click on Generate Data to create a new table of values from LoadStorm.

3. Enter a name for the user data. The name cannot contain any spaces, “ / ” or “ \ ” characters.

4. Click on Add Column to add a column of values.

5. Enter a column name in the Name/Header field. The name cannot contain any spaces, “ / ” or “ \ ” characters.

6. Select the type of data you would like to generate for the user data from the drop-down menu.

Column Data Types

A column header must be entered, but any further modifications are optional. A preview is displayed on the right. There are four types of data that can be added:

1. Sequential Integers: generates a list of integers in increasing order.
Choose to start the integers from a different number, pad the length of the list, or type in a prefix or suffix to append to the number.

2. Random Numbers: generates a list of random numbers.
Choose the value of the smallest and largest random numbers, as well as the number of decimal places. You can also type in a prefix or suffix to append to the number.

3. Random Text: generates a column of random text.
Select what types of characters to allow by choosing between Lorem Ipsum, ASCII, or UTF-8. Choose the minimum and maximum length for the text. If you chose ASCII or UTF-8, you can use the checkboxes to further specify the types of characters allowed in the text.

4. Dates and times: generates a column full of random dates, times, or both.
Select the type of data you would like to generate. Then select the format.

1. Enter a Name for the user data file.

2. Click on Add Column.

3. Select Sequential Integer from the drop-down.

4. Enter “Username” for the column header.

5. Enter “User” in the prefix field.

6. Click Save Column. Now we need to generate passwords.

7. Click on Add Column.

8. Enter “Password” for the column name.

9. Select Random text from the drop-down.

10. Select the ASCII option.

11. Enter a value for the minimum and maximum length fields.

12. De-select the Whitespace and Non-printable boxes.

13. Click Save Column.

14. Click on the Data tab to view a preview of the user data. This data is ready to use to modify a script.
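For comparison, the walkthrough above produces the same shape of data you could also build yourself and upload as a CSV. A small sketch with the same two columns (the row count, password length, and character set here are arbitrary choices):

  # Sketch: build a Username/Password CSV like the generated data above (arbitrary choices throughout).
  import csv
  import random
  import string

  rows = []
  for i in range(1, 101):                              # 100 rows of data
      username = f"User{i}"                            # sequential integer with a "User" prefix
      password = "".join(random.choices(string.ascii_letters + string.digits, k=10))  # random ASCII, no whitespace
      rows.append((username, password))

  with open("users.csv", "w", newline="") as f:
      writer = csv.writer(f)
      writer.writerow(["Username", "Password"])        # the first row is treated as column headers on upload
      writer.writerows(rows)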

When generating user data, uniqueness refers to the way Vusers utilize the columns of the data files. There are two different types of uniqueness that may be selected. Uniqueness by:

1. Virtual user and script repeat
When the column uniqueness is set to virtual user and script repeat, the row of data assigned to a virtual user changes with every script execution. Hence, Vuser 1 will grab row 1 the first time the script is executed, but it will grab a different row each subsequent time the script is executed during the test run, dynamically generating as much data as needed.

2. Virtual user
When the column uniqueness is set to virtual user, each virtual user is assigned a row from the user data file to use for the duration of the test. Hence, Vuser 1 may use the first row of data throughout the whole test.

To download the CSV you’ve uploaded:

  1. Click on the Build menu.
  2. Then click the User Data tab.
  3. Select a CSV.
  4. Click the Download CSV button.

To export a CSV from generated data:

  1. Click on the Build menu.
  2. Then click the User Data tab.
  3. Select a Generated Data file.
  4. Click the Export as CSV button.
  5. Choose the number of rows you want to generate for the CSV.
  6. Click the next Export as CSV button to confirm.

Depending on the test requirements, third-party servers should be set to “Ignored” or “Archived”. Requests that would go to ignored or archived servers will be skipped instead. Skipped requests prevent unnecessary traffic to advertisements, analytics, or other third-party services. You may encounter a server already set to ignored as a part of our blacklist. Please contact [email protected] about enabling a blacklisted server for your testing needs.

Please note that by ignoring a server, any requests normally sent to that server will not be sent.

1. Click on the Build menu, then on the Server tab on the top.

2. Double-click the server you want to ignore.

3. Click the Ignore button. No requests will be sent to this server.

Archive a Server

Archiving a server works the same as ignoring a server, but it’s a separate category. This way you can separate the servers you may never need to test from servers that you may not want to test at the moment. To reiterate, archiving a server means it is ignored. Any requests normally sent to that server will not be sent.

1. Click on the Build menu, then on the Server tab on the top.

2. Click the server you want to archive to select it.

3. Click the Archive button. No requests will be sent to this server.

Planning a test run involves selecting and weighting the scripts, specifying the test parameters, setting a start time, and choosing the traffic source(s) if desired.

Select Scripts for the Load Test

1. Click on the Run menu, then on the Scripts tab on the top.

2. Add scripts to your test by selecting them in the All Scripts window on the right and clicking the Add button.

3. To set the weight of each selected script, type a number into each Weight field. Enter the weights as whole integers of the percentage of traffic you want for that particular script. For example, if you want 60% of VUsers browsing, enter “60”. Ideally these numbers add up to 100; if they do not, each script’s share is its weight divided by the sum of the weights of all scripts, so two scripts weighted at 75 each work out to 75/150, which becomes 50% each (see the sketch after this list).

4. If the text next to the Manage Servers button says there are any unverified servers in the selected Scripts, verify or ignore those servers before continuing.
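A quick sanity check of the weighting rule from step 3 (each script’s share is its weight divided by the sum of all weights); the script names and weights are hypothetical:

  # Sanity check of the weighting rule: share = weight / sum of all weights (hypothetical scripts).
  weights = {"Browse": 75, "Checkout": 75}

  total = sum(weights.values())
  for script, weight in weights.items():
      print(script, f"{weight / total:.0%}")           # Browse 50%, Checkout 50%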

Select Test Parameters

1. Click on the Parameters tab near the top of the screen.

2. Here you will specify the load pattern (linear or step up), duration of the test, number of minutes to hold at peak volume, amount of VUsers for the test from start to finish, and if necessary the number of steps for the step up pattern.

A load pattern is the method in which VUsers are distributed throughout a Load Test. Different patterns can have very different effects on the test server, even if they have the exact same number of VUsers in them.

Linear: VUser load increases continuously over time.

Step up: VUser load increases in discrete steps. This pattern can often hit the server under test much harder than linear, since it introduces new VUsers suddenly rather than gradually.

Scheduling a Load Test

A test can either be scheduled to run immediately, using the Run ASAP button, or it can be scheduled to run at a future time. Please note that test runs may not overlap each other.

1. Click on the Start Time tab near the top of the screen.

2. Click the calendar button next to the Date field to choose a date and time to run the load test. (NOTE: This must be at least 25 minutes from the current time.)

3. Optional: To schedule a test to run multiple times, click the Repeat checkbox and use the provided fields to specify how frequently to run this test.

4. Click the Run Scheduled button at the bottom of the screen.

Note: Once a test has been scheduled, you may right-click the test run to change the test start time to run as soon as possible, or to cancel it.


Note: To interact with a scheduled load test see the Scheduled Runs section.

Traffic Source

You can control the geographic regions that you would like the test to run from by choosing from the available Amazon EC2 data centers where we launch load generators. Keep in mind that tests under 2500 VUsers may not come from all selected sources. This is because each load generator can support up to 200 VUsers executing scripts. If you have any questions please contact LoadStorm support ([email protected]).

Note: This feature is only available when you have at least 100 available VUsers in your account.

Scheduled Runs

Click the Run menu, then the Scheduled Runs tab. This will show you tests which are either scheduled or currently running.

  • To cancel a test, select the test you wish to cancel and click the Cancel button, or right-click it and choose Cancel.
  • To force a scheduled future load test to run asap, select the scheduled test and click the Run ASAP button. The test’s status will change to SCH_ASAP, but keep in mind that an account may only run one test at a time. So it will not run until there are no conflicting run times with other tests in your account.
  • If you double-click a test that is running you will be taken to the Analyze section to view the live results.

Modified Script Execution before a Test

When scheduling a test run, you may be met with a warning message indicating a selected script has been modified. What this means is that changes have been applied to a script, but the script was not executed to save the changes.

There are two options:

1. Scheduling the test run will execute the script and run the test, thereby saving the newest modifications made to it.

2. If you do not want to apply the modifications, Cancel scheduling the load test. You can then reopen the script from the Build menu, and revert the changes from the script Overview.

Copy and Edit a Previous Load Test

A previous load test can be copied and edited to run again from the test table on the Analyze page. Copying an old test will populate the Run page with the same exact scripts and the parameterizations they had for the old test, their weighting, load patterns, VUser amount, and time duration.

1. Click the Analyze menu.

2. Right-click the test you would like to copy and select Copy and Edit from the context menu. You will be navigated to the run tab.

3. You can run the test as is, or modify some of the parameters for the new test run.

Sometimes a test needs to be stopped before it finishes. Maybe the target server is failing under load early on, or some configuration mistake was made that invalidates the test. If you feel the need to stop your test, you can open the analysis view of a running test and find a Stop Execution button. Clicking this button will prompt you to make sure you’re certain, and once confirmed the test will be permanently stopped. We also warn you that any remaining VUsers for this test are not refunded. This is because we still incur a cost for launching enough load generating instances to handle the peak of the test even if the peak is not reached.

How do I see my results?

Test results can be examined during or after a load test is run under the Analyze menu. There are several different reporting options that can be used to interpret results. For example, you can determine what types of errors occurred the most, what resources took the longest amount of time, and how fast pages were loaded as a whole.

1. Click the Analyze menu, then the Analyze tab if you’re not already on it.

2. The filters at the top of the screen can be used to help search for a particular test.

3. Select the completed load test you wish to analyze in the table and click the Open button, or double-click to open it.

How do I compare my results?

The test comparison feature provides a top-level comparison between two or more tests. The data used for comparison can be found in the Statistics and Information tables on the Summary tab for each test.

1. Click to select a test run.

2. Then hold the Shift key and click the end of a range of tests, or hold the CTRL/Command key and click specific tests.

3. Now click the Compare Tests button.

4. A new tab will open up displaying the top-level information of each test that was selected in a side-by-side comparison.

Test Analysis View – Sub-Tab Summaries

Below are descriptions of the functions and features of each sub-tab when analyzing a test run.

Summary

The first tab of analysis is the summary page. Here you will find an overview of the test results including requests per second, concurrent users, peak response time, throughput, percentage of errors, and average response time.

Watch Example

Filters are available that will allow you to focus on more specific data.

  • Filter By Script: Allows you to change the reporting to only show information regarding a specific script during the load test.
  • Page: You must filter to a specific script before you can filter per page within that script.
  • Filter By Server: Gives you insight as to how each domain is performing.
  • Time Interval: Typically it is best left at one minute intervals, but for very long tests it can be useful to see data at larger chunks of time.
  • All Errors: This will allow you to remove the chosen number of error types from the results view. For example: You want to see your results without a known “404 Not Found” error.
  • All Requests: Filter to a specific request type such as HTML, XML, CSS, JS, Images, or Other to only see data regarding those kinds of requests.

The Summary chart offers information on key metrics for each minute interval during a test.

  • Below the chart you can click names in the legend to toggle off and on the lines of data that are displayed in the chart.
  • Hover over any point on the graph to display the values at a particular time.
  • You can also click the rectangular button at the top-right to expand any of the tables or graphs.
  • If you need to save any of the line graphs or pie charts as an image, you can click the 3 horizontal bars at the top-right of the plot area and choose one of the options for saving it.
  • Clicking and dragging across a line graph will cause the chart to zoom in on a window of time. To revert the chart to its default zoom click the Reset Zoom button at the top-right.

What do these performance metrics mean?

For a more detailed description of these performance metrics click here.

  • Average Response Time: Indicates the average of all response times of each request for a given interval or across the entire test duration.
  • Peak Response Time: Indicates the slowest response among all requests for a given interval or across the entire test duration.
  • Error Rate: The percentage of requests that resulted in errors for a given interval; for the total test duration, we display the count of all errors that occurred instead.
  • Concurrent Users: The number of VUsers that are executing a script during a given interval.
  • Requests per Second: This is estimated by taking the total number of requests for a given interval and dividing that total by the number of seconds in the interval.
  • Throughput: Measured by taking the total bytes received for a given interval and dividing that total by the number of seconds in the interval, then converting the value to kilobytes to get kB/s. A short worked sketch follows this list.
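
If it helps to see these definitions as arithmetic, here is a minimal sketch of how such per-interval values could be computed from raw request records. This is an illustration under stated assumptions, not LoadStorm's internal code; the RequestRecord fields and the 60-second interval are invented for the example.

```python
# A minimal sketch (not LoadStorm's implementation) of how the metrics
# above could be computed from raw request records. The record fields
# and the 60-second interval length are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class RequestRecord:
    timestamp: float      # seconds since the start of the test
    response_ms: float    # response time in milliseconds
    bytes_received: int   # size of the response received
    is_error: bool        # True if the request failed or returned an error status

def interval_metrics(records, interval_start, interval_seconds=60):
    """Compute Summary-style metrics for one time interval."""
    window = [r for r in records
              if interval_start <= r.timestamp < interval_start + interval_seconds]
    if not window:
        return None
    times = [r.response_ms for r in window]
    return {
        "average_response_ms": sum(times) / len(times),
        "peak_response_ms": max(times),
        "error_rate_pct": 100.0 * sum(r.is_error for r in window) / len(window),
        "requests_per_second": len(window) / interval_seconds,
        "throughput_kb_per_s": sum(r.bytes_received for r in window) / interval_seconds / 1024,
    }

# Three hypothetical requests within the first minute of a test.
records = [RequestRecord(1.2, 230.0, 15_000, False),
           RequestRecord(2.8, 410.0, 48_000, False),
           RequestRecord(3.1, 1800.0, 2_000, True)]
print(interval_metrics(records, interval_start=0))
```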

The Test Statistics table will give you summary information regarding the totals for important metrics through the entire test duration.

The Test Information table provides the details about the test setup, start time, and weighting of the scripts used.

Response

The Response tab gives you more detailed information about individual requests and their responsiveness throughout the test. This can help you pinpoint which requests are causing the greatest hit to performance.

Watch Example

The Response Times chart offers information on request responsiveness for each minute interval during a test.

The Response Time Distribution chart offers statistical data for standard deviations on request responsiveness for each minute interval during a test.

The Response by Resource table provides the details about individual requests such as times requested, cumulative size, average response, and peak response.

Errors

Understanding the types of errors in a load test can provide excellent clues for finding bottlenecks. Here are some common examples. A 503 Service Unavailable indicates that the web server is working, but other parts of the application are not (e.g. the database). Request Read Timeouts are an indicator that the web server is overwhelmed or that the resource is slow; the HTTP request was sent successfully, but the response was not completed by the web server before being timed out based on the request timeout setting in each script. Request Connection Timed Out indicates that the network is a bottleneck because the user cannot get a TCP connection after 10 seconds; thus, no HTTP request can be sent at all.
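
The difference between the two timeout errors is easiest to see in code. The sketch below is only an illustration of the concept using Python's requests library, not how LoadStorm generates load; the URL and the limits are placeholder assumptions.

```python
# Illustration only (not how LoadStorm issues requests): the distinction
# between a connection timeout and a read timeout, using the Python
# "requests" library. The URL and limits are placeholder assumptions.
import requests

URL = "https://example.com/slow-endpoint"  # hypothetical target

try:
    # (connect timeout, read timeout) in seconds: the first limit covers
    # establishing the TCP connection, the second covers waiting for the
    # server to finish responding once the request has been sent.
    response = requests.get(URL, timeout=(10, 30))
    print(response.status_code)
except requests.exceptions.ConnectTimeout:
    # Analogous to "Request Connection Timed Out":
    # no TCP connection, so no HTTP request was ever sent.
    print("connection timed out")
except requests.exceptions.ReadTimeout:
    # Analogous to "Request Read Timeout": the request went out,
    # but the full response did not arrive within the limit.
    print("read timed out")
```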

Watch Example

The Total Errors pie chart shows each type of error encountered; if you mouse over the different sections, it will tell you how many times each type of error occurred during the test.

The Errors Breakdown chart offers statistical data for frequency of errors for each minute interval during a test.

The Errors by Resource table provides the details about individual requests such as type of error, resource path, error count, average response, and peak response.

Common HTTP Status Codes

  • 400: Bad Request – A VUser sent a request that is not properly formatted.
  • 401: Unauthorized – A VUser failed authentication to access the target server.
  • 403: Forbidden – A VUser lacked the permission to get a resource.
  • 404: Not Found – Usually the link is invalid due to a typo or a stale token, or the file does not exist.
  • 500: Internal Server Error – Usually indicates the target server is having trouble with the load or something is wrong with the request.
  • 502: Bad Gateway – Could be a configuration issue with the target server.
  • 503: Service Unavailable – Your server is barely functioning and can only respond with this status code.

Common LoadStorm Error Codes

  • Request Read Timeout – The request may have begun to respond but was cancelled when it reached the customizable timeout limit.
  • Request Connection Timed Out – A VUser couldn’t establish a TCP connection to the target server after 10 seconds, and this limit is not adjustable.
  • Socket Connection Reset – The connection was reset by the target server via a TCP reset packet which breaks the connection immediately.
  • SSL Handshake Failure – The request was unable to establish a secure connection possibly due to an obsolete cipher suite or mismatched SNI.

Pages

Select any specific page that is used in your scripts to see a frequency distribution of its speed and characteristics. The completion times are measured as the time from the start of the initial navigation until there was 2 seconds of no network activity after Document Complete. This will usually include any activity that is triggered by JavaScript after the main page loads.
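
As a rough illustration of that measurement, the sketch below shows one way the completion time could be derived from request finish times and the Document Complete event. It is an approximation under assumptions (timestamps relative to the start of navigation, a fixed 2-second quiet window), not LoadStorm's actual instrumentation.

```python
# A minimal sketch (assumptions, not LoadStorm's implementation) of the
# page-completion measurement described above: time from the start of
# navigation until network activity goes quiet for 2 seconds after
# Document Complete. All timestamps are seconds relative to navigation start.
QUIET_WINDOW = 2.0

def page_completion_time(document_complete, request_end_times):
    """Return the completion time given when each network request finished."""
    last_activity = document_complete
    for t in sorted(request_end_times):
        if t <= last_activity + QUIET_WINDOW:
            # Activity happened before the quiet window elapsed, so the
            # page is still considered "loading"; extend the measurement.
            last_activity = max(last_activity, t)
        else:
            break  # a gap of 2+ seconds: the page is complete
    return last_activity

# Example: Document Complete at 3.1s, with JS-triggered requests finishing
# at 3.4s and 4.2s, then nothing for over 2 seconds.
print(page_completion_time(3.1, [0.8, 1.5, 3.4, 4.2, 9.0]))  # -> 4.2
```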

Watch Example

The Page Totals table provides the details about each page within a script: average pages requested per second, total times requested, internal requests per page, average completion time, peak completion time, number of failures, and percentage of failures out of the total number of times requested.

The Page Completion Time chart offers information on page completion for each minute interval during a test. By default it shows the data based on all pages, but if you select a page in the Page Totals table the chart will change to data specific to that page.

The Pages, Load and Failure Rate chart offers data for times requested, pages requested per second, and percentage of failures for each minute interval during a test. By default it shows the data based on all pages, but if you select a page in the Page Totals table the chart will change to data specific to that page.

Transactions

Select any specific transaction that is tracked in your scripts to see a frequency distribution of its speed and characteristics. The completion times are measured as the time from the start of the initial request until the completion of the last request within the transaction, and think time is excluded from these values.

Watch Example

The Transaction Totals table provides the details about each transaction within a script: total times requested, internal requests per transaction, average completion time, peak completion time, number of failures, and percentage of failures out of the total number of times requested.

The Transaction Completion Time chart offers information on transaction completion for each minute interval during a test. By default it shows the data based on all transactions, but if you select a transaction in the Transaction Totals table the chart will change to data specific to that transaction.

The Transactions, Load and Failure Rate chart offers data for times requested, transactions requested per second, and percentage of failures for each minute interval during a test. By default it shows the data based on all transactions, but if you select a transaction in the Transaction Totals table the chart will change to data specific to that transaction.

Sources

The Sources tab displays information about the distribution of your test to each geographic location that was used.

Watch Example

The Average Response Time (by source) chart visualizes the performance of response times from each geographic location.

The Requests By Source chart offers a simple visual of the distribution of requests across each geographic location.

The Sources table offers details about the number of load generating servers, their IP addresses, and their geographic location.

Requests

When analyzing test results, it is often desirable to drill down into every request/response pair that occurred in a specific time period. Summarized metrics for a one-minute interval may not provide enough granularity to diagnose a performance issue. This report enables you to review every second to see what virtual users were sending and receiving. Only VUsers 1-25 are included as a representative sample.

Watch Example

There are some additional filters available for this tab.

The Requests By Elapsed Time table provides minute intervals for you to select. Upon selecting one of these, the next table will update based on your selection.

The Requests For Time Interval table displays all requests made in the selected minute interval for up to the first 25 VUsers during a test. By default, it displays the first minute interval.

Export

Watch Example

Use this page to export all of the test results as a CSV. This allows for a very thorough evaluation of all the hard data. You can also export a PDF report of the results, which includes graphs and tables of data.

Additionally, you can make the results public or private using the button at the top right of the test results. This allows you to share the interactive report with a URL just like this one: pro.loadstorm.com/#!test/513469 (click the link to view real results in PRO).
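
If you prefer to dig into the exported CSV yourself, a short sketch of offline analysis might look like the following. The file name and column names are assumptions for illustration; check the header row of your actual export and adjust accordingly.

```python
# A sketch of offline analysis on an exported results CSV.
# The file name and column names below are assumptions for illustration;
# inspect the header row of your actual export and adjust accordingly.
import pandas as pd

df = pd.read_csv("loadstorm_test_513469.csv")  # hypothetical export file name
print(df.columns.tolist())                     # check the real column names first

# Example: the ten slowest resources by average response time (assumed columns).
slowest = (df.groupby("resource_path")["response_time_ms"]
             .mean()
             .sort_values(ascending=False)
             .head(10))
print(slowest)
```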
