Web Change Detection Onboarding Worksheet
As we work with new customers, we seek to align expectations on purpose, product, and process. This document can serve as a framework for our hands-on engagement with customers, particularly for proof-of-concept (POC) or pilot engagements, but feel free to use it in any self-service evaluation of our product, too.
- BANT (Budget, Authority, Need, Timeline)
- Budget: What budget is available for this project? We can share rough pricing contours, but we need to understand committed and available funds from your side. This allows us to scope a right-sized engagement and set support and engineering expectations appropriately.
- Authority: Who makes go/no-go decisions? We need a clear path to project success that defines the decision-making process to move this project from an idea, to a pilot, to a funded and committed initiative.
- Need: What are the pain points? Are there clear benefits to this project in terms of risk reduction, cost savings, or revenue generation? We understand that at the start of pilots, there may not be a clear articulation of the business story. That’s fine. But we must craft a plan to obtain that story, its goals, and benefits.
- Timeline: Do you aim to complete this project this year? Next year? Do you wish to have a completed pilot in the next few months before budgeting exercises for next fiscal year? Do you wish to start with a pilot, or move immediately into a production deployment?
- What are the project’s core objectives?
From the customer side, this often initially centers on validating our product. However, we encourage you to think more broadly: while assessment of Fluxguard may be a key element of any project, it is generally in service of larger themes. Goal definition is important: it allows us to craft an engagement framework that is conducive to success.

From our side, initial engagements are a key chance for customer discovery. We typically front-load our projects with a few weeks of discovery with a customer’s business, project, and technical teams. This allows us to better understand needs, roles, technology ecosystem, and existing processes. (We find that the software part of any engagement is often the easiest part.)

To start, we recommend crafting two or three sentences that describe where we are headed. These might touch on broader themes than an initial engagement, but they help orient project success. For example:
- Replace informal, manual review of partner integrations with a structured process that automates assessment at a regular cadence, defines a complete end-to-end data story, and delivers prioritized and categorized defects to an enabled remediation team.
- How do we measure project success?
At the end of any engagement, what does success look like? How do we structure the project to achieve these results? (A measurement sketch follows this list.)
- Quantitative: e.g., X detected changes with no false negatives
- Qualitative: e.g., a fleshed-out remediation process
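To make the quantitative criterion concrete, here is a minimal sketch of how POC results might be scored against a hand-labeled ground-truth sample. The URLs and counts are hypothetical; “no false negatives” simply means 100% recall on the labeled sample.

```python
# Hypothetical scoring of POC results against a hand-labeled sample.
# The URLs below are illustrative only.
detected = {"https://example.com/pricing", "https://example.com/terms"}
actually_changed = {
    "https://example.com/pricing",
    "https://example.com/terms",
    "https://example.com/partners",
}

false_negatives = actually_changed - detected  # real changes we missed
recall = 1 - len(false_negatives) / len(actually_changed)
print(f"false negatives: {len(false_negatives)}, recall: {recall:.0%}")
# "No false negatives" on the sample means recall == 100%.
```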
- What should we monitor?
This includes an approximate sizing in terms of number of sites, pages per site, and overall monitoring frequency. We also need a representative sample of the content to monitor, and we need to know what to monitor on each specific page or sequence. (A configuration sketch follows this list.)
- Easy, medium, and hard pages and content
- A clear, aligned list (or procedure for generating that list)
- Any orchestrated flows (e.g., login, registration, and so on)
- Visual, network, DOM, or other changes to detect and alert on
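As a sketch of how this scoping exercise might translate into a concrete monitoring definition, the structure below captures one site’s sizing, cadence, an orchestrated login flow, and the change channels to alert on. The field names and values are hypothetical placeholders for discussion; they are not Fluxguard’s actual configuration schema.

```python
# Hypothetical monitoring scope for a POC, expressed as plain data.
# Field names and values are illustrative, not Fluxguard's actual schema.
monitoring_scope = {
    "sites": [
        {
            "name": "Partner portal",
            "start_url": "https://portal.example.com",
            "pages_per_site": 50,        # approximate sizing
            "frequency": "daily",        # overall monitoring cadence
            "difficulty": "hard",        # easy / medium / hard content
            "flows": [                   # orchestrated steps before capture
                {"action": "login", "username_env": "PORTAL_USER"},
                {"action": "navigate", "path": "/dashboard"},
            ],
            "detect": ["visual", "dom", "network"],  # channels to alert on
        },
    ],
}
```

Even a rough version of this structure, filled in for each site, is usually enough to size a POC.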
- What will we monitor post-POC?
- Input: How will this list be updated?
We may only have an inkling of this at the moment, and the POC may be designed to discover the contours of this integration.
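As one possible shape for that integration, the sketch below pulls a URL inventory from a hypothetical customer endpoint and diffs it against the currently monitored set. The endpoint, response shape, and helper functions are assumptions for discussion, not an existing Fluxguard API.

```python
# Hypothetical list-sync job: reconcile a customer-maintained URL inventory
# with the set of pages currently monitored. Endpoint and JSON shape are
# assumptions for discussion, not an existing Fluxguard API.
import json
from urllib.request import urlopen

INVENTORY_URL = "https://customer.example.com/api/page-inventory"  # hypothetical

def fetch_inventory() -> set[str]:
    """Fetch the customer's authoritative list of URLs to monitor."""
    with urlopen(INVENTORY_URL) as resp:
        return set(json.load(resp)["urls"])  # assumed response shape

def plan_sync(currently_monitored: set[str]) -> tuple[set[str], set[str]]:
    """Return (URLs to start monitoring, URLs to retire)."""
    inventory = fetch_inventory()
    return inventory - currently_monitored, currently_monitored - inventory
```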
- Output: What should we do with the results?
How should results be categorized, if at all? What does any existing process look like?
- Who will do setup and ongoing configuration?
An initial project will reveal the cadence of false positives and of configuration work. We need some idea of where configuration and management of our product will sit.
- Who will analyze and remediate results?
How do we grade and categorize results? Who will analyze results and raise alarms?
- What missing features are necessary or nice-to-have?
We need to understand key missing features in order to propose a phased production launch.
- Who’s involved and what are their responsibilities?
There will typically be a team on the Fluxguard side with one primary point of contact. We need to know who should be involved on the customer side and who should receive reports (which can be noisy).
- Any special requirements?
Let’s get all special requirements on the table early. These can include WORM compliance, data segregation, or PII-handling requirements.
- What’s the schedule? Cadence of meetings?
Pilots can run from 4 weeks to 6 months, depending on the engagement level. We will typically hold project meetings at least weekly.