
Scribe is an AI-assisted platform designed to support accountants working with complex technical documentation such as business combinations, lease accounting, revenue recognition, convertible debt, and warrants. The product helps professionals analyse large volumes of transactional and regulatory documents, surface relevant information, and generate structured accounting memos aligned with established industry standards.
My role focused on designing the end-to-end UX for the MVP, translating highly regulated accounting workflows and AI constraints into a product experience that prioritises clarity, trust, and human accountability. The goal was not to automate decisions, but to support expert users in analysing documents faster while maintaining full control over outputs.
Technical accounting work is inherently repetitive in structure but complex in execution. Accountants must analyse multiple source documents, cross-reference authoritative guidance, and produce memos that follow rigid legal and industry-accepted formats. This work is time-sensitive, error-prone, and closely tied to personal and organisational liability.
Introducing AI into this context adds an additional layer of complexity. Outputs cannot be speculative, opaque, or treated as final truth. Every interaction must be grounded in source documents, explainable to the user, and designed to reinforce — not replace — professional judgement. The challenge was to design a system that leveraged AI’s strengths without compromising compliance, trust, or accountability.
The UX approach was centred on structure, grounding, and progressive disclosure. From the start, users are asked what type of document they want to produce, allowing the system to request the right inputs, present the correct fields, and surface relevant prompts based on known accounting structures. This reduces cognitive load and prevents misaligned analysis early in the flow.
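The intake step described above can be sketched as a simple configuration that maps each memo type to the inputs the system requests and the predefined prompts it surfaces. This is a minimal illustration; all type names, fields, and prompts here are hypothetical, not Scribe's actual schema.

```python
# Hypothetical mapping from memo type to required inputs and starter
# prompts. Document types, field names, and prompt wording are
# illustrative only.
DOCUMENT_TYPES = {
    "business_combination": {
        "required_inputs": ["purchase_agreement", "valuation_report"],
        "starter_prompts": [
            "Identify the acquirer and the acquisition date.",
            "List the identifiable assets acquired and liabilities assumed.",
        ],
    },
    "lease_accounting": {
        "required_inputs": ["lease_agreement"],
        "starter_prompts": [
            "What is the lease term, including renewal options?",
            "Identify fixed and variable payment components.",
        ],
    },
}

def intake_checklist(doc_type: str) -> dict:
    """Return the inputs and prompts to surface for a chosen memo type."""
    config = DOCUMENT_TYPES.get(doc_type)
    if config is None:
        raise ValueError(f"Unsupported document type: {doc_type}")
    return config
```

Asking for the document type first lets the interface reject misaligned inputs before any analysis begins, which is where the reduction in cognitive load comes from.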
AI interactions were designed to be document-grounded by default. All responses are based on uploaded source documents and a curated knowledge base, ensuring relevance and traceability. The experience encourages an iterative dialogue, where users can continuously query documents, validate assumptions, and refine understanding before generating or finalising any output.
From a UX perspective, this meant designing clear prompt patterns, document-grounded interactions (retrieval-augmented generation, or RAG), structured information extraction, and AI-assisted drafting flows that always kept the accountant in control of review and final decisions.
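The grounding behaviour can be illustrated with a minimal retrieval sketch. A production RAG pipeline would use embedding similarity over a vector index; here a keyword-overlap score stands in for it so the example stays self-contained. The `Passage` type and function names are assumptions for illustration, not Scribe's implementation.

```python
import re
from dataclasses import dataclass

@dataclass
class Passage:
    source: str  # filename of the uploaded source document
    text: str

def _tokens(text: str) -> set[str]:
    return set(re.findall(r"[a-z]+", text.lower()))

def retrieve(question: str, passages: list[Passage], k: int = 2) -> list[Passage]:
    """Rank passages by keyword overlap with the question (a stand-in
    for the embedding similarity a real RAG pipeline would use)."""
    q = _tokens(question)
    scored = sorted(passages, key=lambda p: len(q & _tokens(p.text)), reverse=True)
    return scored[:k]

def grounded_answer(question: str, passages: list[Passage]) -> dict:
    """Return the retrieved evidence alongside its sources, so every
    response can cite the documents it was grounded in."""
    hits = retrieve(question, passages)
    return {
        "question": question,
        "evidence": [p.text for p in hits],
        "sources": sorted({p.source for p in hits}),
    }
```

The point of the structure is that the answer object always carries its sources, so the interface can show the accountant exactly which document each claim came from.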
Scribe’s core experience revolves around a split-screen analysis flow, allowing users to review source documents alongside AI-assisted insights and prompts. Predefined prompts help start the analysis based on common accounting structures, while free-form questions allow deeper exploration when needed. Users can continue querying documents even after an initial draft is generated, reinforcing confidence in the final output.
Output documents are generated using backend-controlled templates that reflect accepted legal and industry formats. This decision prioritised correctness and compliance over flexibility in the MVP. While more advanced collaboration and editing features were explored, the initial release focused on reliable drafting, clear review states, and export functionality. All source documents, outputs, and query history are stored in a central documents area, supporting traceability, search, and reuse over time.
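The template-controlled drafting described above can be sketched as follows: the backend owns the section order and headings, and the model only supplies field values, which the accountant reviews before export. The template text and field names are hypothetical stand-ins for the industry-accepted formats the product actually encodes.

```python
from string import Template

# Hypothetical backend-controlled memo template: section order and
# headings are fixed; only the field values vary per engagement.
MEMO_TEMPLATE = Template(
    "TECHNICAL ACCOUNTING MEMO\n"
    "Topic: $topic\n\n"
    "1. Background\n$background\n\n"
    "2. Authoritative Guidance\n$guidance\n\n"
    "3. Analysis\n$analysis\n\n"
    "4. Conclusion\n$conclusion\n"
)

REQUIRED_FIELDS = ("topic", "background", "guidance", "analysis", "conclusion")

def render_memo(fields: dict[str, str]) -> str:
    """Fail loudly if any section is missing rather than emitting a
    partial memo; the template, not the model, dictates structure."""
    missing = [f for f in REQUIRED_FIELDS if not fields.get(f)]
    if missing:
        raise ValueError(f"Missing sections: {missing}")
    return MEMO_TEMPLATE.substitute(fields)
```

Refusing to render an incomplete memo is the code-level expression of the compliance-over-flexibility decision: a draft either conforms to the accepted format or is not produced at all.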
The project resulted in a complete UX definition for an AI-assisted accounting platform designed specifically for regulated workflows. The final solution demonstrates how complex document analysis and drafting tasks can be supported by AI without undermining professional responsibility or compliance requirements.
By framing AI as a structured assistant rather than an autonomous decision-maker, the product establishes a foundation of trust that is critical in finance and accounting contexts. The MVP balances speed and reliability, enabling users to work more efficiently while maintaining confidence in both process and outcome.
Designing AI for regulated domains requires restraint as much as innovation. Clear structure, predefined templates, and guided workflows proved more valuable than unrestricted flexibility, especially for expert users working under pressure. In this context, reducing choice can significantly reduce risk.
Another key insight was the importance of continuous review. Allowing users to keep querying documents and the knowledge base even after generating an output reinforces AI as a thinking partner, not a final authority. Human-in-the-loop design is not a feature layer — it must be embedded into the core interaction model from the start.