Monitor Changes to Password-Protected Web Pages
To support IT and cybersecurity teams, Fluxguard is precision-engineered to monitor password-protected web content. This lets you explore interactive web apps and dashboards and examine them for security issues, defacement, and other changes.
Review our Terms of Service: you may only use our product to log in to your own sites or sites where you have permission.
Looking to handle more intricate forms (requiring clicks, option selects, and more)? Learn how to monitor results from complex form submissions.
1. Basic auth or “regular auth”?
To log in to a standard web form (“regular auth”), please proceed with the tutorial below.
If the login prompt appears as a browser pop-up, the page likely requires “basic authentication,” which developers often use to restrict access. That case is even simpler: skip to the last step and follow the instructions there.
2. Add the URL in Fluxguard where the login takes place
If you haven’t already done so, include the site and page where the login occurs.
To add a new page to an existing site, open its Session View. There, click the add button and enter the URL:
3. Do an initial crawl of the login page to get a pre-logged-in version
To use Visual Selectors, manually initiate a crawl of this page by clicking the play button in its Session View:
Wait for the crawl to complete. The results will appear in a few moments, including a screenshot of the pre-logged-in version along with DOM and text captures:
4. Click into the Page View
Select the URL to go into Page View. In the screenshot above, the arrow on the left points to it.
From there you can view all prior capture data. This is also where you add actions such as form submissions.
5. Click the Add button to add a new action
Click the “Add” button:
This will bring up a modal to add the form:
6. Learn how to fill out the form in Fluxguard
Before we detail steps to make form automation simpler, it helps to understand the rationale. The form uses three rows of input: one for the username, one for the password, and one for the submit button.
Each row includes a selector that identifies where the field appears on the page and, for the username and password rows, a second field for the value to enter (such as your username).
Fluxguard uses CSS selectors to identify each area. CSS selectors are the standard way browsers locate elements on any web page.
In the end, the form should look similar to this (note: this is for our site; your site fields will, of course, differ):
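If it helps to see what those three rows accomplish, here is a minimal, hypothetical sketch of the same idea using Puppeteer. This is not Fluxguard’s implementation; the URL, selectors, and credentials are placeholders, and your site’s will differ.

```ts
import puppeteer from "puppeteer";

async function captureLoggedInPage() {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  // Hypothetical login page URL -- substitute your own.
  await page.goto("https://example.com/login");

  // Row 1: username selector + value to type
  await page.type("#username", "my-username");
  // Row 2: password selector + value to type
  await page.type("#password", "my-password");
  // Row 3: submit button selector (no value needed)
  await page.click("button[type='submit']");

  // Wait briefly for the post-login page to load before capturing,
  // similar in spirit to the "milliseconds to wait" setting in step 9.
  await new Promise((resolve) => setTimeout(resolve, 6000));
  await page.screenshot({ path: "logged-in.png" });

  await browser.close();
}

captureLoggedInPage();
```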
7. Fill out the CSS selector for the username field
CSS selectors are special instructions browsers utilize to identify an area of a web page. They can look a bit intimidating, but they’re quite simple. For example:
#element
identifies an element by its id
.element
identifies an element by its class
div > div > div > div > a
identifies an element by its hierarchical location in a document
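If you want to see these selector forms in action, you can try them in your browser’s developer console. A quick sketch, using the example selectors above (they are illustrations, not selectors from your site):

```ts
// Run in the browser's developer console on any page.
// Each call returns the first matching element, or null if there is no match.
const byId = document.querySelector("#element");                     // match by id
const byClass = document.querySelector(".element");                  // match by class
const byPath = document.querySelector("div > div > div > div > a");  // match by hierarchical location

console.log(byId, byClass, byPath);
```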
Fluxguard uses CSS selectors to filter the areas to ignore or focus on. This is a powerful way to isolate the content you want to monitor while ignoring frequently changing areas that would otherwise trigger false-positive change alerts.
You can discover the CSS selector for any area of a page in two ways:
- Use the Visual Selector to find a CSS selector. Click the button next to the first field in the modal to open the Visual Selector. It uses the most recently captured version of the page (so make sure you have run an initial crawl before using it on any given page), opens the page in a new browser, and prompts you to identify the areas to filter. This feature renders the target site through our API to identify DOM areas, so it may not work on every web page. Please let us know if you encounter a page where it does not work well; in the meantime, try the alternative method below. (In fact, the approach below is the one we use, as it allows for more flexibility in selector creation.)
- Use Google Chrome to identify a CSS selector. This requires only minimal HTML knowledge, so you can often do this yourself.
- In Google Chrome (or another modern browser), go to the live page with the area you wish to filter.
- Right-click on the area to filter and select “Inspect.”
- A code console will appear. As you move your mouse up and down in this console, different sections of the page are highlighted. Continue until the element you want to filter is highlighted in the page view.
- Right-click on the HTML element in the code console. Select “Copy” and then “Copy selector.” This will copy a lengthy selector identifying that particular DOM element.
- Paste it into the filter area. When the crawl occurs again, this area will be excluded (or exclusively included, depending on which filter you are using).
- Tips to keep in mind:
- When you use the above method, you may get a very long selector. As much as possible, reduce the selector to a minimal core and try to eliminate hierarchical elements; this reduces brittleness. For example, a selector that relies on multiple hierarchical and class elements, such as
#navbar > div > div.navbar-header > a.logo.navbar-btn.pull-left.flip > img
will break easily if the site adjusts its layout or class structure even slightly.
- You can paste a selector directly into Chrome’s Elements tab (the same one where you copied it from). Chrome will tell you how many matches that selector has on the current page. Using this method, you can edit the selector down to a minimal core that still exclusively matches the areas you wish to filter. (See the console sketch after this list for another way to check matches.)
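If you prefer the console, here is a quick, hypothetical sketch of checking how many elements a selector matches and whether a trimmed-down version still matches the same thing. The trimmed selector below is a made-up example; yours will differ.

```ts
// Run in the browser's developer console on the page you are monitoring.
const longSelector =
  "#navbar > div > div.navbar-header > a.logo.navbar-btn.pull-left.flip > img";
const trimmedSelector = "#navbar .logo > img"; // hypothetical minimal core

// Compare match counts; aim for a short selector that still matches only
// the element(s) you care about.
console.log(document.querySelectorAll(longSelector).length);
console.log(document.querySelectorAll(trimmedSelector).length);
```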
8. Enter the Username and click the Plus button
Click the Plus button once the Username is complete:
This will create a new row for designating the password area.
9. Repeat the above two steps for the Password field and the Form Submit button
Remember to identify the Submit button. You can adjust the number of milliseconds to wait after form submission before capturing the page. For slow-loading forms, you might increase it to 6000 milliseconds (6 seconds), for instance.
Once finished, click “Save”. Click the gear icon to re-configure:
10. Verify the form works
Go back to Session View and start another crawl. Once it completes, a new thumbnail of the page will appear. Click the version screenshot to view a capture of the logged-in page.
11. You’re all set!
Fluxguard will now monitor this page for changes and alert you to any differences. You can also add more pages to monitor after the login. Every page added after the login in a session preserves cookies, local storage, and more, which lets you orchestrate multi-page, multi-step session monitoring.
A quick tip: add the Login page twice
You may want to add the Login page to the session twice. Why? Adding it once monitors only the logged-in page. By adding the Login page again and moving it ahead of the page where the login occurs, you can sequence a monitoring session that captures both the login page and the post-login page (e.g., a dashboard or anything else that appears after the login).
Basic authentication?
Do you need to add a login to a page with basic authentication? It’s even simpler! Click into Session Settings for any site, select the “Crawl” tab, scroll to the bottom, and enter auth details. Don’t know what basic auth is? Then ignore this section. In general, basic auth is used by developers to limit access to development servers.
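If you are unsure whether your page uses basic auth, one telltale sign is that the browser itself shows a native username/password pop-up before any page content loads. Under the hood, the credentials travel in an Authorization header. Here is a minimal, generic sketch of that mechanism; this is plain HTTP Basic authentication with placeholder credentials and URL, not Fluxguard configuration.

```ts
// Generic illustration of HTTP Basic authentication (not Fluxguard-specific).
// You can run this in a browser console; the pop-up you normally see just
// collects a username/password and sends them like this:
const username = "dev-user";      // placeholder credentials
const password = "dev-password";

const response = await fetch("https://staging.example.com/", {
  headers: {
    // Basic auth is simply "Basic " + base64("username:password").
    Authorization: "Basic " + btoa(`${username}:${password}`),
  },
});

console.log(response.status); // 200 if the credentials were accepted, 401 otherwise
```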