State-by-State Digital Health Assessment Regulations: A Guide
A research-based guide to state-by-state digital health assessment regulations affecting insurance underwriting, privacy, algorithmic governance, and health data oversight.

State-by-state digital health assessment regulations are no longer a niche legal issue for innovation teams. They now shape how carriers collect health-related data, how medical directors review third-party screening tools, and how compliance teams document the use of models that influence underwriting. What makes the issue difficult is that there is no single "digital health assessment law" to follow. Instead, insurers are dealing with a patchwork: insurance unfair discrimination rules, AI governance bulletins, biometric and consumer health privacy statutes, and department-level expectations for documentation, testing, and vendor oversight.
"Existing laws and regulations apply to AI systems just as they apply to other insurance practices." - National Association of Insurance Commissioners, Model Bulletin on the Use of Artificial Intelligence Systems by Insurers, adopted December 2023
Where state-by-state digital health assessment regulations are actually coming from
For most insurance leaders, the practical question is not whether a state has written a law using the phrase "digital health assessment." Usually it has not. The real question is whether the state regulates one or more of the building blocks behind a digital assessment workflow:
- collection of health, biometric, or consumer-generated data
- use of external consumer data and predictive models in underwriting
- disclosure and documentation around algorithmic decision systems
- privacy, retention, deletion, and sharing of health-related information
- oversight of third-party vendors supplying screening or scoring tools
That is why the regulatory map looks uneven. Colorado has a formal algorithmic discrimination framework. New York has put clear expectations around AI governance into insurer guidance. Washington's My Health My Data Act changed the privacy analysis for consumer health data well beyond traditional HIPAA boundaries. Connecticut has also moved toward AI-system governance expectations for insurers. Then there are states that may not have headline AI rules but still apply older unfair trade practice, recordkeeping, and privacy rules to the same workflow.
A carrier using camera-based vitals capture, digital questionnaires, wearable-derived signals, or external health-risk scoring has to read across all of those layers at once.
| State or regulatory bucket | What matters most for digital health assessments | Why buyers care |
|---|---|---|
| Colorado | Formal rules on external consumer data, algorithms, and predictive models under SB 21-169 and DOI governance/testing requirements | Highest enforcement visibility for algorithmic underwriting governance |
| Connecticut | Department guidance aligning insurer AI use with unfair discrimination and governance expectations | Signals that health and underwriting workflows need documented controls, not just product memos |
| New York | DFS guidance emphasizes accountability, testing, documentation, and board-level oversight for AI systems | Large-market pressure; often treated as a practical benchmark even outside New York |
| Washington | My Health My Data Act expands obligations around consumer health data collection, consent, and sharing | Important when assessments rely on app, web, or wellness-style health data outside classic HIPAA channels |
| NAIC-influenced states | Growing adoption of the NAIC AI Model Bulletin as an examination reference point | Creates a de facto national baseline for governance, vendor management, and evidence trails |
| General privacy and insurance states | Traditional unfair discrimination, record retention, privacy, and disclosure laws still apply | A state can create real compliance risk without passing a flashy AI law |
The states setting the pace
Colorado: the clearest underwriting governance model
Colorado is still the state most compliance teams study first. Senate Bill 21-169 focused on external consumer data, algorithms, and predictive models, and the Colorado Division of Insurance followed with governance and testing expectations that are unusually specific by insurance standards. For chief medical officers and compliance leads, the important point is not the law's title. It is the operational burden it creates.
If a digital health assessment affects underwriting, Colorado expects insurers to show that they have thought through governance, documentation, validation, and unfair discrimination testing before the tool is treated as routine production infrastructure. A vendor's slide deck is not enough. Carriers need a defensible paper trail showing what data entered the workflow, how outputs were used, who reviewed the model, and how the organization monitors discriminatory effects.
That matters because Colorado has become a template state. Even when another jurisdiction has not copied Colorado word for word, internal compliance reviews often start with a simpler version of the same questions.
New York: governance expectations with market-wide influence
New York's Department of Financial Services has not been subtle about insurer governance expectations. In 2024, DFS guidance on AI systems made the familiar themes harder to ignore: board and senior management accountability, risk management, testing, vendor oversight, documentation, and controls proportionate to the materiality of the system.
For digital health assessments, that pushes the conversation out of product teams and into enterprise governance. If a screening signal can affect eligibility, pricing, class assignment, or escalation to manual review, the workflow starts to look less like an experimental health-tech feature and more like a regulated insurance decision system.
New York also matters because carriers rarely build separate governance stacks for one state. When DFS raises expectations, the response often becomes enterprise policy.
Connecticut: a quieter but important signal
Connecticut has become one of the states to watch because its insurance department has increasingly treated AI-system governance as an active supervisory issue rather than a theoretical future issue. The direction is consistent with the NAIC bulletin and with broader unfair discrimination concerns: insurers should be able to explain what a model does, what data it uses, how it is monitored, and how vendor relationships are controlled.
That is especially relevant for health assessments supplied by third parties. Many carriers do not build the underlying technology themselves. They license workflows, scoring layers, or data collection interfaces. Connecticut-style oversight pushes responsibility back onto the insurer anyway.
The privacy states matter just as much as the AI states
Insurance teams sometimes focus too heavily on AI governance and miss the privacy side of the map. That is a mistake.
Washington's My Health My Data Act is the clearest example. The law is broader than many executives initially assumed, because it reaches consumer health data outside the traditional covered-entity logic of HIPAA. If a digital health assessment captures or infers health status, bodily function, diagnosis-related information, or treatment-related information, privacy analysis can become much more complicated.
For insurers and insurtech vendors, that creates three immediate questions:
- Was the data collected under a disclosure and consent structure that fits the state law?
- Can the data be shared with vendors, affiliates, or reinsurers under the same assumptions the business has historically used?
- Do retention and deletion practices match the promises made at collection?
Those are not edge questions anymore. They are central design questions for any digital assessment workflow that pulls from web forms, mobile experiences, wellness interfaces, or nontraditional health data streams.
California, Virginia, Colorado, Connecticut, and other privacy-law states create similar pressure in different ways. The details vary, but the trend is stable: health-adjacent consumer data is being regulated more aggressively, and insurance use cases do not get a free pass simply because underwriting has always used sensitive information.
What this means for underwriting teams and medical directors
The regulatory picture can feel abstract until it reaches operations. In practice, state-by-state digital health assessment regulations change the work in four places.
1. Vendor onboarding
A screening vendor now has to provide more than a security questionnaire and a demo. Carriers increasingly need:
- clear data dictionaries
- evidence of validation work and intended use
- documentation on model updates and change management
- bias testing or monitoring methodology where relevant
- contractual language on audit rights, retention, deletion, and incident reporting
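One way to keep that checklist from living in slide decks is to encode it as a structured record that surfaces missing artifacts before a tool reaches production. The sketch below is illustrative only: the class and field names are assumptions, not terms drawn from any statute or bulletin.

```python
from dataclasses import dataclass

# Hypothetical vendor-onboarding dossier mirroring the checklist above.
# Field names are illustrative assumptions, not regulatory terminology.
@dataclass
class VendorDossier:
    vendor: str
    data_dictionary: bool = False
    validation_evidence: bool = False
    change_management_docs: bool = False
    bias_testing_methodology: bool = False
    contract_audit_rights: bool = False

    def missing_items(self) -> list[str]:
        """Return the checklist items not yet evidenced for this vendor."""
        checks = {
            "data dictionary": self.data_dictionary,
            "validation evidence": self.validation_evidence,
            "change management docs": self.change_management_docs,
            "bias testing methodology": self.bias_testing_methodology,
            "contract audit rights": self.contract_audit_rights,
        }
        return [name for name, done in checks.items() if not done]

# Example: a vendor that has only delivered a data dictionary so far.
dossier = VendorDossier("Acme Screening", data_dictionary=True)
print(dossier.missing_items())
```

A record like this also gives compliance a natural place to attach dates and document links, so onboarding status can be reported rather than reconstructed.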
2. Clinical governance
Medical directors are being pulled closer to the front of the process. That makes sense. If a digital assessment produces a physiologic estimate, risk signal, or referral flag, someone has to decide whether that output is clinically interpretable and appropriate for underwriting use. The governance question is no longer just "does the model run?" It is "should this output influence an insurance decision in this form?"
3. Documentation and exam readiness
The organizations that struggle most are usually not the ones with the worst intentions. They are the ones that cannot reconstruct their own workflow six months later. Regulators increasingly want evidence, not summaries. They want to know:
- what data entered the system
- what model or rules touched it
- what decision point it informed
- what controls were in place
- who owned the process
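Those five questions map naturally onto a per-event audit record. The sketch below is one possible shape, assuming a JSON-serializable log entry per assessment; every name in it is an illustrative placeholder, not a prescribed format.

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone
import json

# Hypothetical audit record mirroring the five evidence questions above.
# All field names and sample values are illustrative assumptions.
@dataclass
class AssessmentAuditRecord:
    timestamp: str
    data_inputs: list      # what data entered the system
    model_version: str     # what model or rules touched it
    decision_point: str    # what decision point it informed
    controls_applied: list # what controls were in place
    process_owner: str     # who owned the process

record = AssessmentAuditRecord(
    timestamp=datetime.now(timezone.utc).isoformat(),
    data_inputs=["questionnaire_v3", "wearable_hr_summary"],
    model_version="risk-score-2.4.1",
    decision_point="escalate_to_manual_review",
    controls_applied=["input_range_check", "quarterly_bias_monitoring"],
    process_owner="underwriting-governance@example.com",
)

# Serialize for an append-only evidence store or examination export.
print(json.dumps(asdict(record), indent=2))
```

The point is not the schema itself but that each field answers one examiner question directly, which is what makes a workflow reconstructable six months later.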
4. Multi-state rollout planning
A carrier cannot assume that launching in one permissive jurisdiction makes national expansion straightforward. The rollout sequence itself becomes a regulatory strategy question. Many teams now start with a state matrix before finalizing product scope.
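A state matrix can be as simple as a table of regulatory layers per candidate state, sortable by burden. The sketch below is a toy illustration: the boolean entries are placeholders for a team's own legal analysis, not a determination of what any state actually requires.

```python
# Hypothetical state matrix: which regulatory layers apply per candidate
# launch state. Values are placeholders for legal analysis, not advice.
STATE_MATRIX = {
    "CO": {"algo_governance_rules": True,  "health_privacy_statute": False},
    "NY": {"algo_governance_rules": True,  "health_privacy_statute": False},
    "WA": {"algo_governance_rules": False, "health_privacy_statute": True},
    "CT": {"algo_governance_rules": True,  "health_privacy_statute": True},
}

def rollout_order(matrix: dict) -> list[str]:
    """Order candidate states by how many regulatory layers apply, fewest first."""
    return sorted(matrix, key=lambda state: sum(matrix[state].values()))

print(rollout_order(STATE_MATRIX))
```

Even a toy matrix like this forces the sequencing conversation early: the states with the most layers get the most lead time, instead of being discovered mid-expansion.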
Current research and evidence
The strongest evidence base here is not one clinical trial. It is the convergence of regulatory and industry sources.
The NAIC's December 2023 Model Bulletin gave state insurance departments a common vocabulary for AI-system governance: accountability, risk management, validation, third-party oversight, and compliance with existing insurance law. That bulletin matters because departments do not need to pass a new statute to start using those concepts in examinations.
Colorado then supplied the operational model. Its framework around external consumer data and predictive models turned abstract fairness concerns into measurable governance work. In effect, Colorado showed what it looks like when a department expects insurers to evidence governance rather than merely describe it.
New York DFS added a large-market supervisory signal. Its 2024 guidance made clear that AI governance is not a side issue for innovation teams. It belongs in enterprise risk management and board oversight.
Washington's privacy regime added a second line of pressure. A digital health assessment can be compliant from an underwriting-model perspective and still create legal exposure through the way health-related data is collected, disclosed, stored, or shared.
McDermott Will & Emery, Quarles, Kennedys, and other insurance law practices all published 2024-2025 analyses reaching a similar conclusion: insurers should stop waiting for a single uniform national rule and should instead treat the combined effect of state insurance, AI, and privacy oversight as today's working standard.
The future of state digital health assessment regulation
The next phase probably will not arrive as one sweeping federal framework. It will arrive through more states borrowing from the same handful of templates.
I would expect three things over the next cycle.
First, more departments will use the NAIC bulletin as a supervisory reference even if they do not formally adopt it. Second, privacy rules will keep expanding around consumer health data, especially where app-based or inferred health signals are involved. Third, carriers will be asked to show more evidence that their governance process is alive: current inventories, current testing, current vendor reviews, current escalation paths.
That last point is easy to miss. A static policy manual does not carry much weight anymore. Regulators are increasingly looking for living controls.
Frequently asked questions
Which state is most important for insurance digital health assessment compliance right now?
Colorado is still the clearest benchmark because it has the most developed framework around external consumer data, algorithms, and predictive models in insurance. Even carriers with limited Colorado exposure often use it as the first draft of their internal control model.
Does HIPAA solve the compliance issue for digital health assessments?
No. HIPAA may apply in some contexts, but state insurance law, unfair discrimination standards, AI guidance, and consumer health privacy statutes can all create separate obligations. Washington's My Health My Data Act is the clearest reminder that health-related consumer data may be regulated outside classic HIPAA boundaries.
If a third-party vendor runs the assessment, does the insurer carry less risk?
Usually not. State insurance departments generally expect the insurer to remain responsible for governance, documentation, and the fairness of decisions influenced by vendor tools. Vendor reliance changes the workflow, but it does not remove accountability.
Do all states need their own separate digital health assessment program design?
Not necessarily. Most carriers build one core governance framework and then adjust controls, disclosures, data practices, and rollout strategy based on state-specific requirements. The goal is not fifty separate products. It is one defensible system with state-aware controls.
What is the biggest mistake compliance teams make in this area?
Treating the issue as an AI-only problem. In many cases, the real exposure comes from the combination of underwriting rules, privacy obligations, recordkeeping gaps, and weak vendor documentation.
Teams building multi-state digital assessment workflows can also review our related analysis on state insurance regulations for digital screening and NAIC guidelines for digital health screening.
Circadify is building for this compliance-heavy environment, with workflows designed for payer and insurance teams that need a clearer path from digital assessment capability to operational governance. Learn more at circadify.com/industries/payers-insurance.
