Usage & Workflow
The typical usage/workflow in Pacific AI follows these steps:
- Create a System
Within the Pacific AI application, the term "System" refers to what would typically be called a "Project". Creating a new System requires a name, version, description, and tags. Systems are created and managed exclusively by Risk Managers, where the Risk Manager role refers to what would typically be called a "Project Manager". Risk Managers become the owners of the System(s) they create.
- Set Up Your Team
Once your System is created, as the Risk Manager for the new System you must define the team that will work on it. The team is composed of users in specific roles, e.g. Developer, Compliance Officer, etc. (see the role definitions below). The team is built from existing user accounts, previously added to the application by the admin of your organization. Based on their role, team members of a System will only have access and visibility to the features corresponding to the duties of that specific role. See the role definitions and duties later on this page - Risk Manager, Compliance Officer, Developer.
- Upload System Documents
Building software systems in healthcare and regulated industries follows a rigorous lifecycle from requirements gathering through deployment, with each phase governed by strict standards and compliance frameworks. The process begins with capturing functional and non-functional requirements - security, performance, interoperability, and regulatory constraints - and proceeds through implementation guided by industry-specific architectural standards such as HIPAA, FDA guidance, and ISO quality frameworks. Throughout this lifecycle, comprehensive documentation is legally required and serves multiple critical purposes: architecture documentation establishes design decisions and compliance rationale, quality assurance documentation demonstrates validation methodologies, release documentation ensures deployment reproducibility, and operational documentation enables safe system maintenance and incident response.
Testing, particularly for AI-powered systems using Large Language Models (LLMs) or AI systems in general, extends far beyond traditional software validation. These systems require specialized evaluations including bias testing, safety assessments, accuracy benchmarking against clinical standards, and robustness validation—requirements that are increasingly codified in regulations like the FDA’s AI/ML guidance, the EU AI Act, and healthcare frameworks such as CHAI certification. This testing is not merely a best practice but a legal mandate: regulators require documented evidence of systematic validation, ongoing monitoring, and risk mitigation before AI systems can be deployed in clinical settings. Without rigorous documentation and testing protocols, organizations face regulatory sanctions, legal liability, and most critically, risks to patient safety.
This is where Pacific AI comes in: based on the documentation provided for a specific System, Pacific AI will automatically read the uploaded documents to identify risks, assess their impact, and determine the overall risk level of an AI system.
In addition to the Risk Registry, the application uses these documents to generate the Model Card for the System. During the lifecycle of a given System, you will be able to add additional documentation, provide new versions of existing documents, etc. and continue to automatically generate new versions of the Model Card and new versions of the Risk Registry.
- Evaluate LLMs
This is one of the core features of the platform. Evaluating the AI model(s) used by the System you want to build is done by running specific tests. To evaluate a model you plan to use within your application (System), you need two things:
- configure access to the specific model LLM / endpoints
- configure which specific tests you want to run.
After configuration, the application allows you to execute as many runs as you need, compare results across different runs, etc. You do not need to invent the tests yourself: the application provides the test categories and subcategories; you select the ones you want, then run them against the models you need, as many times as you need.
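To make the two configuration steps concrete, here is a minimal, purely illustrative sketch of what a model-endpoint definition plus a test-category selection could look like. All names (`ModelEndpoint`, `EvaluationConfig`, the category names, the URL) are hypothetical and do not reflect Pacific AI's actual API or schema.

```python
# Hypothetical sketch (NOT the Pacific AI API): configuring a model
# endpoint and selecting test categories/subcategories to run against it.

from dataclasses import dataclass, field


@dataclass
class ModelEndpoint:
    name: str
    base_url: str
    api_key_env: str  # name of the environment variable holding the key


@dataclass
class EvaluationConfig:
    endpoint: ModelEndpoint
    # Tests are selected from provided categories, not written by hand.
    tests: dict = field(default_factory=dict)


config = EvaluationConfig(
    endpoint=ModelEndpoint(
        name="clinical-summarizer-v2",
        base_url="https://example.com/v1",
        api_key_env="MODEL_API_KEY",
    ),
    tests={
        "bias": ["demographic-parity"],
        "safety": ["harmful-content"],
        "accuracy": ["clinical-benchmark"],
    },
)

# Flatten the selection into runnable test identifiers; the same config
# can then be executed repeatedly and results compared across runs.
selected = [f"{cat}/{sub}" for cat, subs in config.tests.items() for sub in subs]
print(selected)
```

The point of the sketch is the separation the workflow describes: endpoint access and test selection are configured once, then reused across as many evaluation runs as needed.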
- Perform System Risk Assessment
Pacific AI will automatically analyze the documents you choose from the entire collection already uploaded and deliver a risk registry plus an overall risk score for your System. You can review, remove, or add your own risks, provide answers to the questions generated by the internal "judge LLM", request and provide evidence for risks you consider already addressed, etc. Pacific AI will assist and streamline this process, allowing you to build and document a system ready for the compliance review.
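As a rough mental model of a risk registry and an overall score, here is a hypothetical sketch. The entry fields, the 1-5 likelihood/impact scale, and the "worst open risk" rollup are illustrative assumptions, not Pacific AI's internal scoring method.

```python
# Hypothetical sketch (NOT Pacific AI internals): a risk registry entry
# and one simple way to roll entries up into an overall score,
# assuming a 1-5 likelihood scale and a 1-5 impact scale.

from dataclasses import dataclass
from typing import Optional


@dataclass
class RiskEntry:
    title: str
    likelihood: int                  # 1 (rare) .. 5 (almost certain)
    impact: int                      # 1 (negligible) .. 5 (severe)
    evidence: Optional[str] = None   # attached when the risk is addressed

    @property
    def score(self) -> int:
        return self.likelihood * self.impact


registry = [
    RiskEntry("PHI leakage in model outputs", likelihood=3, impact=5),
    RiskEntry("Demographic bias in triage suggestions", likelihood=2, impact=4,
              evidence="Bias evaluation report v1.2"),
]

# One possible rollup: the system is only as risky as its worst
# still-open (evidence-free) risk.
open_risks = [r for r in registry if r.evidence is None]
overall = max((r.score for r in open_risks), default=0)
print(overall)
```

Note how attaching evidence to a risk removes it from the open set, which mirrors the workflow above: providing evidence for an already-addressed risk changes the overall picture without deleting the registry entry.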
- Generate a Model Card
Create a structured summary of the system using AI or manual authoring.
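For readers unfamiliar with Model Cards, the sketch below shows the kind of fields such a structured summary typically captures. The field names and values are illustrative assumptions, not Pacific AI's generated schema.

```python
# Hypothetical sketch of a Model Card's structure (field names are
# illustrative, not Pacific AI's schema). A Model Card is a structured,
# versionable summary of a System and its evaluation results.

model_card = {
    "system": {"name": "clinical-summarizer", "version": "1.0"},
    "intended_use": "Summarize clinical notes for clinician review",
    "out_of_scope": ["Autonomous diagnosis", "Medication dosing"],
    "evaluation": {"accuracy_benchmark": 0.91, "bias_tests_passed": True},
    "known_risks": ["PHI leakage", "Demographic bias"],
}

# Because Systems evolve, each regenerated card carries the System version,
# so cards can be compared across releases.
print(model_card["system"]["version"])
```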
Role Overview
Pacific AI uses a role-based access control model to ensure clear accountability across governance workflows.
| Role | Primary Responsibility |
|---|---|
| Admin | Platform configuration and user administration |
| Risk Manager | System creation, document management, and risk assessments. Manages teams for specific Systems. A Risk Manager can own one or more Systems and, when logged in, will see only the Systems he/she owns. |
| Compliance Officer | Review and approval of risk assessments and lifecycle oversight for the Systems he/she was designated/assigned to. The Risk Manager assigns the Compliance Officer for a specific System from the list of available users in this role, which means these users must be pre-configured by the admin as Compliance Officers. |
| Developer | Test suite creation and evaluation management. The Risk Manager can add/assign one or more users in the Developer role to a System. A user in the Developer role can be part of one or more teams belonging to different Systems. |
| Governance Officer | Policy and organizational governance |
| Policy Manager | Policy creation and submission |
| Vendor Manager | Third-party vendor information management |
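The ownership and visibility rules in the table above can be sketched as a small access check. This is an illustrative model only, assuming just two of the rules described (Risk Managers see only Systems they own; Developers see Systems whose team includes them); it is not the platform's implementation.

```python
# Hypothetical sketch of role-based visibility (illustrative only):
# a Risk Manager sees only Systems they own, while a Developer sees
# every System whose team they belong to.

from dataclasses import dataclass, field


@dataclass
class System:
    name: str
    owner: str                                   # Risk Manager username
    team: set = field(default_factory=set)       # assigned team members


def visible_systems(user: str, role: str, systems: list) -> list:
    if role == "Risk Manager":
        return [s.name for s in systems if s.owner == user]
    return [s.name for s in systems if user in s.team]


systems = [
    System("triage-assistant", owner="alice", team={"bob"}),
    System("note-summarizer", owner="carol", team={"bob", "dan"}),
]

print(visible_systems("alice", "Risk Manager", systems))  # only owned Systems
print(visible_systems("bob", "Developer", systems))       # every team bob is on
```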
For detailed role permissions, see User Management.