
Securities Law for AI Companies: What the SEC Actually Asks About Your Technology Claims

Frederick M. Lehrer, Esq. | 13 min read
Artificial Intelligence · SEC Disclosure · Technology Claims · MD&A · Data Governance · AI Regulation

TLDR

The SEC is scrutinizing AI capability claims in securities filings with the same rigor it applies to revenue projections. Issuers that use marketing language in their MD&A, risk factors, or technology descriptions are creating enforcement exposure. Structure AI disclosure around what the technology actually does, not what you hope it will do.

Why the SEC Is Paying Attention to AI Claims

The SEC has made clear through enforcement actions, staff speeches, and comment letter patterns that it views AI capability claims in securities filings with heightened scrutiny. When I served as a Staff Attorney in the SEC Division of Enforcement, technology companies that overstated their capabilities were among the most common enforcement targets. The AI industry is following the same pattern, with an important difference: the gap between marketing claims and actual capabilities is often wider in AI than in any previous technology sector.

The core issue is that AI companies routinely use promotional language in securities filings that would be appropriate for marketing materials but creates enforcement exposure in the securities disclosure context. When a company states in its Form 10-K that its AI system will transform an industry or deliver unprecedented accuracy, the SEC staff evaluates that claim under the antifraud provisions of the securities laws, not under advertising standards. The legal standard is whether the statement is materially accurate and whether a reasonable investor would be misled.

The SEC's attention to AI disclosure is not speculative. The Commission has brought enforcement actions against companies that claimed to use AI-driven analytics when they were actually using manual processes, against companies that fabricated AI performance metrics in offering materials, and against companies that described development-stage technology as commercially validated in securities filings. These actions establish a clear precedent that AI misrepresentation is treated with the same severity as financial statement fraud.

MD&A Disclosure for AI Technology

The Management Discussion and Analysis section of periodic filings is where AI companies create the most disclosure risk. MD&A requires a discussion of the company's results of operations, financial condition, and known trends and uncertainties that are reasonably likely to affect future results. For AI companies, this means the MD&A must honestly address the current state of the technology, its actual commercial performance, and the uncertainties that could affect future technology development and revenue generation.

The most common MD&A failure for AI companies is the use of promotional technology descriptions that do not align with the company's actual operational reality. When the MD&A describes AI capabilities in the same aspirational language used in investor presentations, the SEC staff will compare those claims against the financial results reported in the same filing. If the AI system is described as commercially transformative but the revenue from AI products is negligible, the disconnect invites a comment letter and creates potential enforcement exposure.

AI-Specific Risk Factors

Risk factor disclosure for AI companies must go beyond generic technology risk language. The SEC staff expects AI companies to disclose specific risks including algorithmic bias and the potential for discriminatory outputs, model accuracy limitations and the consequences of inaccurate AI decisions, dependency on training data that may be of uncertain provenance or legality, regulatory actions in multiple jurisdictions that could restrict AI deployment, competitive risk from rapidly evolving technology and open-source alternatives, and the risk that current AI capabilities may not translate to commercial viability.

Each risk factor should be connected to its specific impact on the company's business and financial condition. A risk factor stating only that AI regulation could adversely affect operations is insufficient. The disclosure should identify which specific regulations or regulatory proposals affect the company, how they affect operations, what compliance costs are anticipated, and what the consequences of non-compliance would be. The Division of Enforcement looks for risk factors specific enough to demonstrate that management has actually evaluated the risk, not so generic that they could have been copied from another company's filing.

Technology Claims vs Securities Disclosure

The fundamental tension for AI companies is that the language that attracts investors and customers is not the language that satisfies securities disclosure requirements. Marketing describes potential. Securities disclosure describes reality. When a company's investor presentation states that its AI platform processes millions of data points with 99.7% accuracy, and that same claim appears in the S-1 or 10-K without qualification, it becomes a representation that the SEC can test against actual performance data.
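
Where a filing cites a concrete figure like 99.7% accuracy, that figure should be reproducible from the company's own evaluation data. The sketch below is illustrative, not a description of any SEC procedure: it shows the kind of substantiation check counsel might ask an engineering team to run before a metric appears in a filing. The disclosed figure and the tolerance are assumed values.

    # A minimal substantiation check: recompute an accuracy metric on held-out
    # data and flag it when the measured value cannot support the disclosed one.
    # The disclosed figure (0.997) and the tolerance are illustrative assumptions.

    def supports_disclosed_accuracy(y_true, y_pred, disclosed, tolerance=0.005):
        """Return (measured, ok): ok is True when measured >= disclosed - tolerance."""
        correct = sum(t == p for t, p in zip(y_true, y_pred))
        measured = correct / len(y_true)
        return measured, measured >= disclosed - tolerance

    if __name__ == "__main__":
        y_true = [1, 0, 1, 1, 0, 1, 0, 1, 1, 0]   # held-out labels
        y_pred = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]   # model outputs
        measured, ok = supports_disclosed_accuracy(y_true, y_pred, disclosed=0.997)
        print(f"measured={measured:.3f}", "supported" if ok else "not supported")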

Based on my experience reviewing these cases, the companies that create the most enforcement exposure are those that use identical language in their securities filings and marketing materials. This creates a direct comparison document that enforcement attorneys use to evaluate whether the company applied the appropriate legal standard to its disclosure. Companies should maintain separate disclosure processes for marketing and securities filings, with securities counsel reviewing all technology claims before they appear in SEC filings.

Data Governance and Privacy Disclosure

AI companies depend on data, and the disclosure obligations surrounding data governance are expanding rapidly. SEC filings must address the sources and legality of training data, compliance with privacy regulations across all jurisdictions where the company operates or has customers, data security measures and breach notification obligations, the consequences of losing access to key data sources, and any pending or threatened regulatory actions related to data practices.

The EU General Data Protection Regulation, the California Consumer Privacy Act, and emerging state and federal privacy laws create disclosure obligations that many AI companies are not adequately addressing. When a company's AI system depends on data that may have been collected in ways that violate privacy laws, the disclosure must address both the legal risk and the business risk of being required to delete or stop using that data. Issuers often misunderstand this obligation because they view data governance as an operational issue rather than a securities disclosure issue.

Intellectual Property Disclosure

AI intellectual property disclosure requires more specificity than most AI companies provide. The SEC staff evaluates IP disclosure to determine whether the company's claims of proprietary technology are supported by actual intellectual property protections. This means disclosing the number and scope of patents, the status of patent applications, the nature of trade secret protections, the company's use of open-source components and the license obligations that attach, and the competitive defensibility of the technology stack as a whole.

A common disclosure failure is claiming proprietary AI technology while relying heavily on open-source models, frameworks, or datasets. The SEC staff can and does investigate these claims, and the discovery that a company's claimed proprietary technology is built primarily on open-source components creates enforcement exposure for misleading disclosure and undermines the company's valuation narrative.
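
Before making a proprietary-technology claim, a company can inventory its own dependency stack. The following is a minimal sketch assuming a Python-based stack; it uses the standard library's importlib.metadata to list installed packages and their declared licenses so counsel can reconcile disclosure language against actual open-source usage. A check like this often surfaces copyleft-licensed components whose obligations belong in the IP disclosure.

    # A minimal open-source inventory, assuming a Python stack: enumerate the
    # installed distributions and their declared licenses so that claims of
    # proprietary technology can be checked against actual dependencies.
    from importlib.metadata import distributions

    def open_source_inventory():
        """Return sorted (name, version, license) tuples for installed packages."""
        inventory = []
        for dist in distributions():
            meta = dist.metadata
            inventory.append((
                meta.get("Name", "unknown"),
                dist.version,
                meta.get("License", "UNKNOWN"),
            ))
        return sorted(inventory)

    if __name__ == "__main__":
        for name, version, license_ in open_source_inventory():
            print(f"{name}=={version}  license: {license_}")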

The Emerging Regulatory Landscape

The regulatory environment for AI is evolving rapidly across multiple jurisdictions. The EU AI Act establishes a comprehensive regulatory framework that classifies AI systems by risk level and imposes corresponding obligations. In the United States, executive orders, agency guidance, and proposed legislation are creating a patchwork of AI regulation that companies must monitor and disclose. State-level AI regulations, particularly in areas like employment discrimination, healthcare, and financial services, add additional complexity.

For SEC disclosure purposes, AI companies must identify the specific regulations that affect their operations, assess the compliance costs and timeline, disclose the risk of enforcement actions by multiple regulators, and update this disclosure with each periodic filing as the regulatory landscape changes. The pace of regulatory development means that risk factor and MD&A disclosure that was adequate six months ago may be materially incomplete today.

10 Key Points

1. The SEC evaluates AI capability claims in securities filings with the same skepticism it applies to revenue projections and asset valuations.
2. Marketing language about AI technology in MD&A or business descriptions creates enforcement exposure when actual capabilities differ from disclosed capabilities.
3. Risk factors for AI companies must address algorithmic bias, data governance, model accuracy limitations, regulatory classification, and competitive technology risk.
4. The distinction between development-stage AI technology and commercially validated AI technology must be clearly disclosed with specific metrics.
5. Data governance disclosure must address data source legality, privacy compliance across jurisdictions, data security, and the consequences of data access loss.
6. AI intellectual property disclosure requires specificity about patent protection, trade secret status, open-source component usage, and competitive defensibility.
7. The EU AI Act and emerging US regulatory frameworks create additional disclosure obligations for companies with international operations.
8. SEC staff reviewers are increasingly knowledgeable about AI technology and can identify vague or promotional technology claims in filings.
9. Forward-looking statements about AI capabilities must be accompanied by meaningful cautionary language specific to the technology risks, not boilerplate disclaimers.
10. Companies that use the same language in their securities filings and marketing materials are creating direct evidence for enforcement comparison.

Frequently Asked Questions

What does the SEC ask about AI technology in securities filings?

Based on my experience in SEC Enforcement and subsequent issuer-side practice, SEC staff comments on AI company filings focus on the specificity of technology capability claims, the distinction between current functionality and aspirational capabilities, the basis for performance metrics cited in disclosure, data source disclosures and associated risks, competitive landscape accuracy, and the consistency between technology claims in the filing and the company's actual development stage.

How should AI companies disclose technology risks?

AI technology risks should be disclosed with specificity rather than generic cautionary language. This includes risks related to model accuracy and reliability, training data quality and bias, regulatory actions that could limit AI deployment, dependency on third-party data or compute resources, customer adoption uncertainty, and the competitive risk from rapidly evolving technology. Each risk should be connected to its specific impact on the company's business and financial condition.

What is the difference between AI marketing claims and securities disclosure?

Marketing claims about AI capability are evaluated under commercial advertising standards that permit promotional language and aspiration. Securities disclosure is evaluated under the antifraud provisions of the Securities Act and Exchange Act, which require material accuracy and prohibit misleading statements or omissions. A marketing claim that an AI system will revolutionize an industry is promotional. The same claim in a securities filing is a forward-looking statement that requires specific cautionary language and a reasonable basis.

Does the SEC have specific rules for AI company disclosure?

The SEC has not adopted AI-specific disclosure rules as of early 2026, but existing disclosure requirements apply fully to AI companies. The SEC has issued guidance through staff comments, speeches, and enforcement actions indicating that it views AI capability claims with heightened scrutiny. The SEC's existing framework for technology company disclosure, risk factor requirements, and MD&A obligations applies to AI companies with particular force.

How should an AI company disclose data governance practices?

Data governance disclosure should address the sources and legality of training data, compliance with privacy regulations across all jurisdictions of operation, data security measures and breach history, the consequences of losing access to key data sources, data retention and deletion practices, and any pending or threatened regulatory actions related to data practices. This disclosure should be specific to the company's actual practices, not aspirational statements about intended governance.
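
As one illustration of making retention disclosure factual rather than aspirational, the sketch below shows a hypothetical internal check, not any mandated procedure, that flags records held past a policy window. The 730-day window is an assumed policy choice, not a legal standard.

    # A minimal retention check backing up "data retention and deletion
    # practices" disclosure: flag records held past the policy window.
    # The 730-day window is an illustrative assumption, not a legal rule.
    from datetime import datetime, timedelta, timezone

    RETENTION = timedelta(days=730)  # hypothetical policy window

    def overdue_records(records, now):
        """Yield IDs of records whose collection date exceeds the retention window."""
        for record_id, collected_at in records:
            if now - collected_at > RETENTION:
                yield record_id

    if __name__ == "__main__":
        sample = [
            ("rec-1", datetime(2023, 1, 15, tzinfo=timezone.utc)),
            ("rec-2", datetime(2025, 6, 1, tzinfo=timezone.utc)),
        ]
        now = datetime(2026, 1, 1, tzinfo=timezone.utc)
        print(list(overdue_records(sample, now)))  # ['rec-1']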

What intellectual property disclosure is required for AI companies?

AI companies must disclose the nature and scope of patent protection for AI algorithms and systems, trade secret protection measures for proprietary models, usage of open-source components and associated license obligations, ownership of AI-generated outputs, and the competitive defensibility of their technology stack. The SEC staff will scrutinize claims of proprietary technology to ensure they are supported by actual intellectual property protections.

How does the EU AI Act affect US securities disclosure?

AI companies with operations or customers in the EU must disclose the impact of the EU AI Act on their business, including classification of their AI systems under the Act's risk framework, compliance costs and timeline, restrictions on certain AI applications, and the potential for enforcement actions by EU regulators. This creates disclosure obligations that overlay standard SEC reporting requirements for companies with international exposure.

What AI enforcement actions has the SEC brought?

The SEC has brought enforcement actions against companies for misrepresenting AI capabilities in securities offerings, including cases where companies claimed to use AI-driven analytics but were actually using manual processes, cases where AI performance metrics in offering materials were fabricated or misleading, and cases where AI technology risk factors were inadequate given the company's actual development stage. These actions signal that the SEC treats AI misrepresentation with the same seriousness as financial fraud.

Should AI companies use different language in marketing and SEC filings?

Yes. The legal standards for marketing communications and securities disclosure are fundamentally different, and companies that use identical language in both contexts create direct evidence that enforcement attorneys can use to compare promotional claims against disclosure obligations. Securities filings should use precise, factual language about technology capabilities, while marketing materials can use promotional framing subject to FTC and state advertising regulations.

How should AI companies disclose competitive risks?

Competitive risk disclosure for AI companies should address the pace of technology development in the AI sector, the risk that competitors may develop superior or equivalent technology, dependency on third-party AI platforms or tools, the risk of technology obsolescence, open-source alternatives that could reduce the company's competitive advantage, and the barriers to entry in the company's specific AI application area. Generic statements about competitive risk are insufficient.

What financial disclosure issues are specific to AI companies?

AI companies face specific financial disclosure challenges including the capitalization versus expensing of AI development costs, revenue recognition for AI-as-a-service arrangements, valuation of AI-related intangible assets, impairment testing for AI technology assets, and cost classification for compute resources and data acquisition. These issues require careful coordination between securities counsel and auditors.

How do AI companies handle forward-looking statements?

Forward-looking statements about AI capabilities must comply with the safe harbor provisions of the Private Securities Litigation Reform Act, which require meaningful cautionary language that identifies specific factors that could cause actual results to differ from projections. Boilerplate cautionary language that does not address AI-specific risks is insufficient. The cautionary language must be updated to reflect the current state of the technology and regulatory environment.

What is the SEC's view on AI-generated content in filings?

The SEC has not specifically addressed AI-generated content in securities filings, but existing requirements for accuracy and completeness apply regardless of how disclosure is produced. Officers who certify filings under Sarbanes-Oxley Sections 302 and 906 are personally responsible for the accuracy of the disclosure, whether it was drafted by humans, AI systems, or a combination. Using AI to draft disclosure does not reduce the certification obligation.

How should AI companies disclose algorithmic bias risk?

Algorithmic bias disclosure should identify the specific types of bias risk the company's AI systems face, the measures the company takes to detect and mitigate bias, the regulatory and legal consequences of biased AI outputs, any pending or threatened claims related to algorithmic bias, and the company's testing and validation procedures for bias detection. This disclosure should be factual and specific rather than aspirational.
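
For readers unfamiliar with what a bias-detection procedure looks like in practice, the following sketch computes one common fairness metric, the demographic parity gap, meaning the spread in positive-outcome rates across groups. It is illustrative only; the 0.1 flag threshold is an assumed internal trigger, not a regulatory standard.

    # A minimal sketch of one bias-detection check of the kind such disclosure
    # might describe: the demographic parity gap, i.e. the spread in
    # positive-prediction rates across groups. The threshold is illustrative.
    from collections import defaultdict

    def demographic_parity_gap(predictions, groups):
        """Return (gap, per-group positive rates) for binary predictions."""
        totals, positives = defaultdict(int), defaultdict(int)
        for pred, group in zip(predictions, groups):
            totals[group] += 1
            positives[group] += int(pred)
        rates = {g: positives[g] / totals[g] for g in totals}
        return max(rates.values()) - min(rates.values()), rates

    if __name__ == "__main__":
        preds  = [1, 0, 1, 1, 0, 1, 0, 0]
        groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
        gap, rates = demographic_parity_gap(preds, groups)
        print(rates, f"gap={gap:.2f}", "FLAG" if gap > 0.1 else "OK")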

Do AI startups face different SEC requirements than established AI companies?

AI startups that are SEC reporting companies or filing registration statements face the same disclosure requirements as established companies, but the disclosure challenges differ. Startups must address the risks inherent in early-stage technology, limited operating history, dependence on key technical personnel, and the uncertainty of achieving commercial viability. The SEC staff applies heightened scrutiny to development-stage technology claims because the risk of promotional disclosure is highest in early-stage companies.

What role does securities counsel play for AI companies?

Securities counsel for AI companies must understand both securities disclosure requirements and the specific technology and regulatory landscape of the AI industry. This dual expertise enables counsel to identify disclosure issues that general securities practitioners may miss, anticipate SEC staff questions about AI technology claims, and draft disclosure that is both technically accurate and legally defensible. AI companies should engage securities counsel with demonstrated experience in technology company disclosure.

How should AI companies disclose government AI regulation risk?

Government regulation risk disclosure should identify specific regulatory proposals or enacted legislation that affects the company's AI operations, the potential impact on the company's business model, compliance costs and timeline, the risk of enforcement actions, and the uncertainty created by evolving regulatory frameworks. This disclosure should be updated with each periodic filing to reflect the current regulatory landscape.

What is the impact of AI on corporate governance disclosure?

Companies using AI in decision-making processes should disclose the role of AI in corporate governance, risk management, and operational decision-making. This includes disclosure of AI-driven decision systems, human oversight mechanisms, board-level AI governance, and the qualifications of personnel responsible for AI oversight. The SEC's interest in corporate governance disclosure extends to how companies manage the risks and opportunities presented by AI technology.

Should AI companies disclose their AI model training methodology?

AI companies should disclose sufficient information about their training methodology to enable investors to evaluate technology risk without revealing proprietary trade secrets. This typically includes general descriptions of training data sources and volume, model architecture category, validation and testing procedures, performance benchmarks, and known limitations. The balance between investor information and trade secret protection requires careful drafting by experienced securities counsel.

How does flat-fee counsel benefit AI companies navigating SEC disclosure?

Flat-fee counsel removes the economic friction that discourages AI company management from consulting securities counsel on disclosure questions as they arise. AI technology and regulation are evolving rapidly, and the disclosure implications of technology developments, regulatory changes, and competitive developments require ongoing counsel input. Under hourly billing, the cost of each consultation creates a disincentive to seek counsel at precisely the moments when guidance is most valuable.

This article was written by Frederick M. Lehrer, Esq., a former SEC Division of Enforcement Staff Attorney and Special Assistant United States Attorney (Southern District of Florida) with over 30 years of securities law experience. Florida Bar No. 888400.