Responsible Lending Expense Capture
Designed and validated a compliance-driven redesign of credit application forms using unmoderated remote testing at scale, giving senior stakeholders the quantitative confidence to proceed within two weeks.
Overview
ASIC’s updated Regulatory Guide 209 (RG 209) required Australian financial institutions to make reasonable inquiries into credit applicants’ capacity to repay. For our lending business, this meant changes to the expense capture flows in our credit application process — changes that senior stakeholders feared would introduce friction and reduce pull-through rates.
Legacy IT infrastructure made it impossible to run a live trial without significant cost and delay. The team was stuck: unable to proceed without data, and unable to get data without proceeding.
I proposed a different path: rebuild the forms in high-fidelity prototypes and run unmoderated user testing at scale against three versions simultaneously.

The Challenge
The stalemate was caused by a combination of technical constraints and stakeholder risk aversion. Making significant changes to originations flows without data to predict the impact was not something leadership was prepared to do, and rightly so. But the compliance deadline wasn’t moving.
My role was to find a way to generate the evidence needed to unlock progress, without the time or infrastructure for a traditional live test.
My Role
I led the research design, prototype build, and analysis end-to-end. This included redesigning the expense capture forms in close collaboration with legal and compliance colleagues, planning the research approach and recruitment, building high-fidelity prototypes in Axure with conditional logic, running unmoderated tests via Validately, and synthesising findings for senior stakeholder reporting.
What I Did
Redesigned the forms. Working alongside legal and compliance, I redesigned the finances and expense capture application forms to meet the new RG 209 requirements across three versions: a baseline reflecting the current experience, and two distinct future-state alternatives.
Built high-fidelity, testable prototypes. All three versions were built in Axure with conditional logic, closely replicating the real application experience to ensure test behaviour would reflect genuine usage.
Ran unmoderated testing at scale. Using Validately, I ran simultaneous task completion tests across three groups of 25 participants each (n=75 total). The platform captured screen recordings and audio, enabling me to tag key moments and build a library of usability clips for reporting.
Delivered quantitative and qualitative evidence. Senior stakeholders received completion rate data and time-to-complete comparisons across all three versions, alongside qualitative usability insights drawn directly from participant recordings.
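As an illustration of the kind of comparison this data supports, a two-proportion z-test can check whether completion rates differ meaningfully between two variants at this sample size. The counts below are hypothetical, chosen only to show the mechanics — they are not the study’s actual figures:

```python
# Hypothetical sketch: comparing task completion rates between the baseline
# and one redesigned variant using a two-proportion z-test. The counts are
# illustrative only, not the study's actual results.
from math import sqrt

def two_proportion_z(success_a: int, n_a: int, success_b: int, n_b: int) -> float:
    """Return the z statistic for the difference between two proportions."""
    p_a = success_a / n_a
    p_b = success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)          # pooled proportion
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))  # standard error
    return (p_a - p_b) / se

# Illustrative counts: 22/25 completions on the baseline, 21/25 on a variant.
z = two_proportion_z(22, 25, 21, 25)
print(round(z, 2))  # → 0.41; |z| well below 1.96 suggests no significant difference
```

With 25 participants per group, only fairly large drops in completion rate would register as significant — which is why pairing the numbers with qualitative recordings, as above, matters.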
Result
Within two weeks of kicking off the research, we had statistically grounded evidence that the added expense fields would not introduce significant friction to the originations process. Leadership had what they needed to proceed with confidence.
Beyond the immediate compliance objective, this project also shifted the team’s understanding of what rapid research could look like. Remote unmoderated testing, which had been viewed with scepticism, proved faster, more scalable, and just as rigorous as lab-based alternatives.
If anything, the constraints made it better. Being forced to work remotely and at scale produced richer, more naturalistic data than a controlled lab environment would have. That’s a lesson I’ve carried into every research engagement since.