You can scan any web application that is defined in your account. Web applications can be added by you or by another user in the same subscription. Note that they cannot be added by your account manager.
Go to Account > Web Applications, and click the New link. Provide web application settings, including a title you'll use to launch a web application scan. Then click Save.
You can add web applications up to the maximum number permitted in your account settings.
Go to Account > Web Applications, and click Edit next to the web application. You can change certain web application details, including the values for Title and Blacklist. The values for Site, Port, and Starting URI cannot be edited.
You can add authentication records to provide the login credentials to be used when the web crawler needs to authenticate to login forms in order to scan the web application.
You'll need to submit a request to Technical Support.
The web application settings are described below.
Site
The host from which to start web crawling. Enter an IP address or host name (FQDN) to start web crawling under. The web application can consist of a single physical host or multiple identical hosts behind a single load-balanced IP address. The Starting URI field allows you to limit the crawling to a specific path. The protocol, either "https://" or "http://", may be entered with the starting site; if entered, the service automatically removes the protocol when the web application is saved.
Port
The port number from which to start web crawling.
Starting URI
The path from which to start web crawling. By default, crawling starts at the web application root directory. To start web crawling from a subdirectory, enter the path to the starting directory, starting with "/". This value could be extracted from a correctly formed URL, for example: /services/app1. A sketch showing how the Site, Port, and Starting URI values map onto a full URL follows these settings.
Blacklist
Important! Automated web application scanning has the potential to cause data loss. Use this feature to avoid data loss.

The blacklist consists of one or more strings identifying links in the web application that should not be visited. Each URL must be entered on a separate line (separated by a return character). A maximum of 100 URLs may be entered.

Tip - For a production web application, it's best practice to blacklist pages with functionality that would have undesirable results if executed, such as sending out too many emails, triggering a "delete all" action, or disabling/deleting accounts.

How it works - For each string, the crawler performs a string match against each form action URI it encounters. Each URL is interpreted as a regular expression, so a question mark used as a query string delimiter (for example /page.cgi?a=1) should be escaped with a backslash (for example /page.cgi\?a=1). Use .* (dot asterisk) for wildcards. When a match is found, the crawler does not submit a request for the link. A sketch of this matching behavior follows these settings.
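To illustrate how the Site, Port, and Starting URI values relate to an ordinary URL, here is a minimal sketch using Python's standard library. The URL is a made-up example, and the snippet is illustrative only; it is not part of the service.

```python
from urllib.parse import urlsplit

# Hypothetical example URL used only for illustration.
url = "https://www.example.com:8080/services/app1"

parts = urlsplit(url)

site = parts.hostname        # "www.example.com" - the protocol is dropped, as the service does on save
port = parts.port            # 8080
starting_uri = parts.path    # "/services/app1" - crawling is limited to this path

print(site, port, starting_uri)
```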
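The following sketch illustrates the matching behavior described for the Blacklist setting. The entries, links, and the is_blacklisted helper are hypothetical, not the scanner's actual implementation; they simply show how an escaped question mark and a .* wildcard behave when each entry is treated as a regular expression.

```python
import re

# Hypothetical blacklist entries, one per line as they would be typed into the Blacklist field.
blacklist = [
    r"/admin/delete_all\.cgi",   # a specific page with destructive functionality
    r"/page\.cgi\?a=1",          # "?" escaped because entries are treated as regular expressions
    r"/mail/.*",                 # ".*" wildcard excludes everything under /mail/
]

def is_blacklisted(link: str) -> bool:
    """Return True if the link matches any blacklist pattern, so the crawler skips it."""
    return any(re.search(pattern, link) for pattern in blacklist)

for link in ["/page.cgi?a=1", "/mail/send_bulk", "/products/list"]:
    print(link, "->", "skipped" if is_blacklisted(link) else "crawled")
```

Running this prints "skipped" for the first two links and "crawled" for the last, mirroring how the crawler avoids submitting requests for links that match a blacklist entry.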