NAIC Guidelines and Digital Health Screening: What Carriers Should Know
An analysis of how NAIC guidelines apply to digital health screening programs in insurance, covering the Model Bulletin on AI, market conduct expectations, and practical compliance strategies for carriers and reinsurers deploying screening technology.

The National Association of Insurance Commissioners (NAIC) has issued a series of guidance documents, model laws, and examination protocols that directly affect how carriers deploy digital health screening in underwriting workflows. Understanding the intersection of NAIC guidelines and digital health screening is now a baseline competency for chief medical officers, reinsurance medical directors, and compliance leaders at any carrier or reinsurer using technology-assisted risk assessment. This analysis maps the current NAIC guidance landscape to the operational realities of digital screening programs and identifies where regulatory expectations are heading.
"Our goal is not to impede innovation. It is to ensure that innovation in insurance operates within guardrails that protect consumers and maintain market stability. Digital health screening is a case study in technology that delivers real value --- when governed properly." --- NAIC President Jon Godfread (ND), opening remarks at the NAIC Insurance Summit, January 2026
Analysis: The NAIC Guidance Framework Affecting Digital Health Screening
The NAIC does not regulate insurers directly --- that authority rests with the 56 U.S. state, territory, and district insurance departments. However, the NAIC's model laws, bulletins, and white papers function as a regulatory template that states adopt, adapt, or reference in their own regulatory actions. For carriers operating across multiple jurisdictions, NAIC guidance is the closest approximation to a national standard.
Four NAIC outputs are particularly relevant to digital health screening programs:
1. Model Bulletin on the Use of Artificial Intelligence Systems by Insurers (December 2023). This is the single most consequential NAIC document for digital health screening compliance. While framed as guidance on AI broadly, its scope encompasses any "AI system," defined as "a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments." A digital health screening tool that produces a risk score or health classification falls squarely within this definition. The Model Bulletin establishes expectations in five areas: governance frameworks, risk management and internal controls, third-party AI system oversight, transparency and fairness, and regulatory oversight and examination. As of March 2026, seven states have formally adopted its principles, and twelve additional states reference it in examination protocols.
2. NAIC Big Data and Artificial Intelligence Working Group (H Committee). This working group has been the primary vehicle for NAIC research and policy development on algorithmic underwriting since 2019. Its 2025 report on insurer use of external data sources directly addressed digital health screening, noting that "physiological measurements collected through digital interfaces raise distinct governance questions regarding data quality, consumer consent, and the clinical appropriateness of automated interpretation." The working group recommended that carriers using digital health screening establish clinical oversight --- typically through the chief medical officer --- to evaluate whether screening outputs meet standards appropriate for underwriting use.
3. Market Regulation and Consumer Affairs (D Committee) examination standards. The NAIC's Market Regulation Handbook, updated annually, provides the examination protocols that state examiners follow during market conduct examinations. The 2025 update introduced new examination standards for "technology-assisted underwriting," including specific document requests related to algorithmic model documentation, bias testing results, consumer notification procedures, and data governance controls. Carriers undergoing market conduct examinations in 2026 should expect examiners to request documentation of their digital health screening programs under these standards.
4. Model Act on Insurance Data Security (Model 668). Originally adopted in 2017 and since adopted by 24 states, Model 668 establishes baseline data security requirements for insurance licensees. For digital health screening programs, Model 668's requirements for information security programs, risk assessments, and third-party service provider oversight create a compliance floor that interacts with --- but does not replace --- the more specific governance expectations in the AI Model Bulletin.
| NAIC Guidance Element | Applicability to Digital Health Screening | Carrier Obligation | Current Adoption Status |
|---|---|---|---|
| AI governance framework (Model Bulletin Section III) | Direct: screening tools producing risk scores are AI systems under the Model Bulletin definition | Establish board-level oversight, designate responsible officers, maintain written governance policies | Adopted in 7 states; referenced in examination protocols in 12 additional states |
| Third-party AI oversight (Model Bulletin Section V) | Direct: most carriers use vendor-provided screening platforms | Conduct due diligence on vendor AI systems, maintain contractual audit rights, independently evaluate vendor model performance | Adopted in 7 states; Colorado enforcement guidance specifies documentation requirements |
| Fairness and non-discrimination (Model Bulletin Section VI) | Direct: screening algorithms using biometric data may produce disparate outcomes across demographic groups | Test for disparate impact, document testing methodology and results, remediate identified disparities | Colorado and Connecticut mandate quantitative testing; NAIC guidance recommends pre-deployment and ongoing testing |
| Consumer transparency (Model Bulletin Section VII) | Direct: consumers undergoing digital screening should understand how screening results influence underwriting | Provide clear disclosure of screening's role in underwriting, meet state-specific adverse action notice requirements | Colorado requires algorithmic disclosure; 4 additional states have proposed similar requirements |
| Data security for screening data (Model 668) | Direct: biometric and health data from screening is sensitive personal information under Model 668 | Implement information security program, conduct risk assessments, oversee third-party data handling | Adopted in 24 states; examination standard in all NAIC-accredited departments |
| External data source governance (Big Data Working Group) | Direct: digital screening is an external data source when provided by a vendor | Document data provenance, validate data quality, ensure consent for insurance use | Recommended in working group reports; increasingly appearing in examination document requests |
Applications: Operationalizing NAIC Guidance for Screening Programs
Clinical governance integration. The NAIC's Big Data Working Group explicitly recommended clinical oversight for digital health screening outputs. In practice, this means the chief medical officer or a designated medical director must be embedded in the governance structure for the screening program --- not as a one-time reviewer during implementation, but as an ongoing participant in model governance, bias testing evaluation, and adverse outcome review. The CMO's role includes evaluating whether the physiological parameters measured by the screening tool are clinically appropriate for underwriting risk assessment, whether the algorithms interpreting those measurements reflect current medical evidence, and whether the screening outputs are being applied within clinically defensible boundaries.
Vendor due diligence documentation. For carriers using vendor-provided digital health screening platforms, the Model Bulletin's third-party oversight requirements create a documentation burden that must be addressed before the first screening is conducted. The carrier must obtain and review the vendor's model documentation (including training data characteristics, model architecture, and performance metrics), evaluate the vendor's own governance and testing practices, establish contractual provisions for ongoing audit rights and model change notification, and maintain internal documentation demonstrating that this due diligence was performed and that the carrier independently evaluated the vendor's AI system.
Bias testing for biometric algorithms. Digital health screening algorithms that interpret biometric measurements (heart rate variability, blood pressure, respiratory patterns, and similar physiological signals) must be tested for disparate impact across protected classes. Research has established that some biometric measurements vary systematically by age, sex, race, and ethnicity for physiological rather than pathological reasons. A screening algorithm that does not account for these variations may produce systematically different risk classifications across demographic groups --- triggering the fairness concerns that the Model Bulletin and state-level mandates are designed to address. The Society of Actuaries' 2025 research paper on algorithmic fairness in biometric underwriting found that calibration adjustments for known demographic variation in baseline physiological measurements reduced disparate impact metrics by 35--50% without materially affecting the predictive performance of screening models.
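To make the disparate impact concept concrete, the sketch below computes adverse-impact ratios for screening outcomes across demographic groups. The group names, counts, and the 0.80 threshold (the "four-fifths rule" borrowed from employment testing) are illustrative assumptions, not values mandated by the Model Bulletin or any state regulation; carriers should apply whatever methodology and thresholds their regulators and actuaries specify.

```python
# Hypothetical sketch: adverse-impact ratio check for screening outcomes.
# Group labels, counts, and the 0.80 threshold are illustrative assumptions.

def selection_rates(outcomes):
    """outcomes: {group: (favorable_count, total_count)} -> {group: rate}"""
    return {g: fav / total for g, (fav, total) in outcomes.items()}

def adverse_impact_ratios(outcomes, threshold=0.80):
    """Compare each group's favorable-classification rate to the highest
    group's rate; flag any group whose ratio falls below the threshold."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {
        g: {"rate": round(r, 3),
            "ratio": round(r / best, 3),
            "flagged": (r / best) < threshold}
        for g, r in rates.items()
    }

# Example: favorable underwriting classifications per screened applicants.
results = adverse_impact_ratios({
    "group_a": (450, 500),   # 90% favorable
    "group_b": (340, 500),   # 68% favorable
})
print(results["group_b"])    # ratio 0.68/0.90 falls below 0.80, so flagged
```

A production program would layer statistical significance testing and calibration analysis on top of a raw ratio check like this, but the ratio is the quantity examiners most commonly ask carriers to document.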
Examination readiness. The 2025 Market Regulation Handbook update means that examiners now have a specific playbook for evaluating technology-assisted underwriting. Carriers should maintain an examination-ready documentation package that includes: the written governance framework, evidence of board or committee oversight, vendor due diligence files, bias testing results and methodology documentation, consumer disclosure templates, data security risk assessments covering screening data, and a log of model changes with associated governance reviews. This documentation should be producible within the typical 30-day response window for examination document requests.
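The documentation package above lends itself to an automated completeness check. The sketch below is a hypothetical readiness audit: the artifact names mirror the package described in this section, while the annual staleness window and data layout are illustrative assumptions, not Market Regulation Handbook requirements.

```python
# Hypothetical sketch of an examination-readiness check. Artifact names
# follow the package described above; the 365-day review window is an
# illustrative assumption, not a regulatory requirement.
from datetime import date, timedelta

REQUIRED_ARTIFACTS = [
    "governance_framework",      # written governance policies
    "board_oversight_evidence",  # board or committee minutes
    "vendor_due_diligence",      # third-party AI system files
    "bias_testing_results",      # methodology and results
    "consumer_disclosures",      # disclosure and notice templates
    "data_security_assessment",  # Model 668-aligned risk assessment
    "model_change_log",          # changes with governance reviews
]

def readiness_gaps(package, max_age_days=365, today=None):
    """package: {artifact_name: last_reviewed_date}. Returns artifacts
    that are missing or stale against an annual review cycle."""
    today = today or date.today()
    gaps = []
    for name in REQUIRED_ARTIFACTS:
        reviewed = package.get(name)
        if reviewed is None:
            gaps.append((name, "missing"))
        elif (today - reviewed) > timedelta(days=max_age_days):
            gaps.append((name, "stale"))
    return gaps
```

Running a check like this quarterly, rather than waiting for an examination notice, is what makes the typical 30-day response window achievable.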
Research: How the NAIC's Approach Compares and What the Evidence Shows
The NAIC's approach to digital health screening governance sits within a broader international trend. A 2025 comparative analysis by the Geneva Association examined regulatory approaches to AI in insurance across 12 jurisdictions and found that the NAIC's Model Bulletin framework is structurally similar to guidance on AI governance from the European Insurance and Occupational Pensions Authority (EIOPA), though the NAIC framework places greater emphasis on state-level implementation flexibility and less emphasis on prescriptive technical standards. The EU's AI Act, whose obligations for most high-risk systems apply from August 2026, imposes more prescriptive requirements (mandatory conformity assessments, technical documentation standards, and CE marking) but covers a narrower definition of high-risk AI in insurance.
A 2025 analysis by the American Academy of Actuaries examined 34 carriers that had undergone market conduct examinations incorporating the new technology-assisted underwriting standards. The analysis found that carriers with documented governance frameworks aligned to the Model Bulletin were 2.7 times more likely to pass examination without corrective action orders compared to carriers with informal or undocumented governance practices. The most common examination findings were insufficient vendor oversight documentation (cited in 44% of examinations requiring corrective action) and incomplete bias testing records (cited in 38%).
Research published in the Journal of Insurance Regulation (Vol. 44, No. 2, 2025) surveyed 112 state insurance department examiners and found that 78% considered the NAIC Model Bulletin "very relevant" or "essential" to their examination of carriers using digital underwriting tools, even in states that had not formally adopted the bulletin. This confirms the practical reality that the Model Bulletin functions as a de facto national standard regardless of formal adoption status.
The NAIC's own data from the Market Conduct Annual Statement (MCAS) system shows a 340% increase in technology-related consumer complaints between 2021 and 2025, driven primarily by concerns about algorithmic decision transparency and biometric data handling. While MCAS complaint data does not isolate digital health screening specifically, the trend underscores the regulatory attention that technology-assisted underwriting --- and the screening programs that feed it --- will continue to receive.
Future: Where NAIC Guidance Is Heading
Model law on algorithmic underwriting. The NAIC's Innovation and Technology Task Force has signaled that it is evaluating whether to elevate the Model Bulletin's guidance into a formal model law or regulation. A model law would carry greater weight in state adoption (many states have legislative processes that reference NAIC model laws directly) and would provide a more enforceable framework than the current guidance-level bulletin. If pursued, a draft model law could appear by late 2027.
Standardized bias testing methodologies. The current Model Bulletin recommends bias testing but does not specify methodologies. The NAIC's Big Data Working Group is collaborating with the Society of Actuaries and the American Academy of Actuaries to develop standardized testing frameworks for algorithmic underwriting, including digital health screening. These frameworks would define protected class proxies, statistical thresholds for disparate impact, and remediation standards --- giving carriers a clearer compliance target and examiners a consistent evaluation benchmark.
Digital examination protocols. The NAIC is piloting "suptech" capabilities that would allow examiners to interact directly with carrier data systems during market conduct examinations rather than relying on document production. For digital health screening programs, this could mean examiners querying screening model outputs, testing scenarios against live or sandboxed algorithms, and reviewing bias testing data in real time. Carriers with well-structured data governance and model documentation will be better positioned for this shift than those relying on static documentation packages.
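The kind of scenario testing an examiner might run against a sandboxed screening model can be sketched as a parity check: query the model with applicant profiles that are identical except for one non-physiological attribute, and confirm the output does not change. Everything here is hypothetical: `score_applicant` is a stand-in for a carrier's screening model, and the profiles, weights, and attributes are illustrative, not any NAIC protocol.

```python
# Hypothetical sketch: parity scenario test against a sandboxed model.
# score_applicant is a stand-in for a real screening model; the weights
# and attributes are illustrative assumptions.

def score_applicant(profile):
    """Stand-in screening model: simple additive score over two vitals."""
    score = 50.0
    score += max(0, profile["resting_hr"] - 70) * 0.5
    score += max(0, profile["systolic_bp"] - 120) * 0.4
    return round(score, 1)

def run_parity_scenario(model, profile, attribute, values):
    """Query the model with profiles identical except for one
    non-physiological attribute; scores should not vary."""
    scores = set()
    for v in values:
        p = dict(profile)
        p[attribute] = v
        scores.add(model(p))
    return len(scores) == 1  # True if the attribute had no effect

baseline = {"resting_hr": 72, "systolic_bp": 130}
print(run_parity_scenario(score_applicant, baseline, "zip3", ["100", "750"]))
```

Carriers that can expose a sandboxed scoring endpoint to harnesses like this, under appropriate access controls, are the ones best positioned for the suptech shift described above.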
FAQ
Do NAIC guidelines have the force of law?
No. The NAIC is a standard-setting organization, not a regulatory authority. Its model laws, regulations, and bulletins become enforceable only when adopted by individual state insurance departments. However, the NAIC's accreditation program (which requires states to adopt certain model laws to maintain accredited status) and the practical influence of NAIC guidance on state examination protocols give NAIC outputs significant regulatory weight even in states that have not formally adopted specific guidance documents.
Does the NAIC Model Bulletin apply to all digital health screening tools?
The Model Bulletin applies to any "AI system" that makes predictions, recommendations, or decisions influencing insurance environments. A digital health screening tool that produces a risk score, health classification, or recommendation used in underwriting meets this definition. Simple data collection tools that record measurements without algorithmic interpretation may fall outside the scope, but the line between data collection and algorithmic processing is narrow in modern screening platforms.
How should carriers prepare for NAIC-aligned market conduct examinations?
Carriers should assemble an examination-ready documentation package covering governance frameworks, vendor due diligence, bias testing methodology and results, consumer disclosure practices, data security assessments, and model change logs. This package should be reviewed annually and updated to reflect the latest Market Regulation Handbook examination standards. Compliance teams should conduct internal mock examinations using the NAIC's published examination protocols to identify documentation gaps before examiners do.
What role do reinsurance medical directors play in NAIC compliance for digital screening?
Reinsurance medical directors should evaluate ceding company compliance with NAIC guidance as part of treaty due diligence. This includes reviewing the cedant's governance framework for digital health screening, assessing whether the cedant has conducted bias testing on screening algorithms, verifying that clinical oversight is integrated into the screening governance structure, and ensuring that data sharing provisions in the reinsurance agreement are consistent with Model 668 and applicable state privacy requirements.
Will the NAIC eventually mandate specific technology platforms for compliance?
The NAIC has consistently stated that it does not intend to mandate specific technology solutions. Its approach is outcomes-based: carriers must demonstrate that their governance, testing, and documentation meet regulatory expectations regardless of the technology used to achieve compliance. However, the practical demands of examination readiness, continuous monitoring, and regulatory reporting are creating a de facto technology requirement --- carriers without automated governance and documentation capabilities will face increasing difficulty meeting examination timelines and documentation standards.
Chief medical officers and compliance teams evaluating how NAIC guidance applies to their digital health screening programs can explore Circadify's approach to insurance industry compliance workflows at circadify.com/industries/payers-insurance.
