AIQu VEIL ML/AI Disclosure

This is a platform-agnostic, app-specific marketplace document.

Publisher: Integrated Cyber Solutions Inc. d/b/a Integrated Quantum Technologies Inc.

Registered Office: 2600-1066 West Hastings St., Vancouver, British Columbia V6E 3X1, Canada

Legal Notices: [email protected]

Effective date: Upon publication or acceptance through the applicable marketplace. If the applicable marketplace imposes additional mandatory terms, those marketplace terms control for ordering, billing, payment, tax, and marketplace-specific rights.

This ML/AI Disclosure describes the machine-learning functionality of the AIQu VEIL application (the “App”) in a platform-agnostic manner for marketplace publication and customer review. It is provided by Integrated Quantum Technologies Inc. (“Integrated Quantum”).

1. What VEIL Is

VEIL is a privacy-preserving ML infrastructure component. It trains and applies a deterministic autoencoder that transforms raw inputs or feature representations into latent encodings. The training framework uses PyTorch, and deployment artifacts are served or executed using ONNX, ONNX Runtime, or other platform-native ONNX execution components as applicable.

• VEIL is not a general-purpose AI assistant, chatbot, foundation model, or generative AI system.

• VEIL does not generate text, images, audio, video, code, or other expressive content.

• VEIL does not simulate human reasoning or produce human-like conversational outputs.

• VEIL is not designed to answer arbitrary natural-language prompts from end users.

• VEIL is not generally queryable. It communicates only with data sources, feature resources, registries, and ML deployments that the customer has configured and authorized.

• VEIL itself is an infrastructure layer for transforming inputs; it is not, by itself, a business decision engine.
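The autoencoder structure described above can be sketched as follows. This is an illustrative sketch only, not VEIL's actual implementation; the class name, layer sizes, and activation are hypothetical, and the commented-out export line shows where a trained encoder could be converted to an ONNX artifact.

```python
# Illustrative sketch of a deterministic autoencoder of the kind this
# disclosure describes. All names and dimensions here are hypothetical.
import torch
import torch.nn as nn

class TinyAutoencoder(nn.Module):
    def __init__(self, in_dim: int = 32, latent_dim: int = 8):
        super().__init__()
        # Encoder: maps raw feature vectors to lower-dimensional latent encodings.
        self.encoder = nn.Sequential(nn.Linear(in_dim, latent_dim), nn.Tanh())
        # Decoder: used only during training (reconstruction loss); per the
        # disclosure, no decoder component is deployed as part of the App.
        self.decoder = nn.Linear(latent_dim, in_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(x))

model = TinyAutoencoder()
model.eval()  # inference mode: no dropout, no stochastic sampling

# After training, only the encoder would be exported for deployment, e.g.:
# torch.onnx.export(model.encoder, torch.randn(1, 32), "encoder.onnx")

x = torch.randn(4, 32)
with torch.no_grad():
    z = model.encoder(x)  # latent encodings, shape (4, 8)
```

Only the encoder leaves the training step in this sketch, which is what allows the latent encodings, rather than the raw inputs, to flow to downstream models.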

2. How the App Processes Data

The App processes customer inputs inside the customer’s own environment, including the trusted source-side environment described in VEIL’s architecture. Its purpose is to convert raw inputs into latent encodings for downstream ML workflows. Relevant inputs may include tabular records, engineered feature vectors, tensors, or other structured inputs that the customer has selected for processing.

• Latent encodings are intended to preserve information useful for downstream tasks such as classification, regression, or scoring while reducing direct exposure of raw inputs.

• The App does not require customer data to leave the customer environment in order to function.

• The App does not phone home for telemetry, control, model execution, or billing.

• The App does not log sensitive data.

• Integrated Quantum does not receive customer runtime data in the ordinary course of App operation.

• Model artifacts, encoder files, registry entries, schedules, and configuration objects created by the App remain only inside the customer environment until the customer deletes them or uninstalls the App.

3. Intended Output

The principal output of the App is a latent encoding or other in-environment artifact necessary to support the customer’s configured ML workflow. Any downstream model prediction, recommendation, score, classification, regression output, or action is produced by a separate customer-configured deployment, not by VEIL alone.

4. Training and Customer Data Use

If Customer uses the App to train an encoder, that training occurs on Customer data within Customer’s environment. Customer selects the relevant feature columns, data sources, and deployment settings, and Customer initiates training or retraining. Integrated Quantum does not receive customer data through use of the App, and therefore cannot use customer runtime data to train a provider-hosted backup model, a cross-customer shared model, or a model for unrelated customers.

The App does not autonomously retrain itself, change model parameters for unrelated purposes, or connect to new data sources without customer instruction and customer-side configuration.

5. Determinism and Reproducibility

The VEIL encoder is deterministic. For a given trained model and a given input, the encoding produced is intended to remain the same across repeated calls because the encoder does not use stochastic sampling at inference time. This distinguishes the App from probabilistic generative models, diffusion models, large language models, and other systems that produce variable or non-deterministic outputs.
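The determinism property described above can be verified mechanically: with fixed trained weights and no sampling step, repeated calls on the same input produce bit-identical encodings. The sketch below illustrates this with NumPy for brevity; the weights, dimensions, and activation are hypothetical, not VEIL's actual parameters.

```python
# Minimal sketch of inference-time determinism: a fixed affine map plus a
# fixed nonlinearity, with no stochastic sampling. Weights are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((8, 32))  # stands in for trained encoder weights
b = rng.standard_normal(8)

def encode(x: np.ndarray) -> np.ndarray:
    """Deterministic encoder: same weights + same input -> same encoding."""
    return np.tanh(W @ x + b)

x = rng.standard_normal(32)
z1 = encode(x)
z2 = encode(x)  # repeated call with the same input
assert np.array_equal(z1, z2)  # bit-identical across calls
```

A generative model, by contrast, typically samples at inference time (temperature-based token sampling, diffusion noise), so repeated calls on the same input can differ; the absence of any such sampling step is what makes the encoder's output reproducible.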

6. Key Limitations and Privacy Boundary

• VEIL itself does not make predictions, classifications, or decisions on the customer’s behalf.

• VEIL does not include or constitute a complete machine-learning pipeline. Downstream models that consume the encodings are configured, trained, and deployed separately by the customer.

• The privacy properties of the App apply to the encoder layer and the resulting latent encodings, not to the customer’s downstream models, infrastructure, or ML pipeline as a whole.

• VEIL is designed to reduce inversion and raw-data exposure risk through dimensionality reduction, and no decoder component is deployed as part of the App; however, no infrastructure component can eliminate every downstream privacy or inference risk created by customer-side storage, indexing, retention, joining, or model-use practices.

• If latent artifacts are retained or linked with identifiers, quasi-identifiers, or other external join handles in downstream systems, attribute-level inference or other privacy risks may increase.

• Operational outcomes such as latency, throughput, compatibility, and reliability depend materially on the customer’s environment, data shape, feature engineering choices, compute resources, and downstream model configuration.

7. Human Oversight, Intended Use, and High-Impact Uses

VEIL itself is not a final decision-maker. The customer retains control over how encodings are used downstream and remains responsible for oversight of any downstream model, application, workflow, or decision process that consumes VEIL outputs.

The App is intended for enterprise machine-learning programs, including sensitive or regulated data contexts such as financial services, healthcare, fraud detection, and other use cases in which data minimization and security are important operational requirements.

If Customer uses latent encodings or downstream models in employment, credit, housing, insurance, healthcare, public-sector, law-enforcement, or other high-impact contexts, Customer remains responsible for appropriate validation, human oversight, explainability review, appeal procedures, and legal compliance.

8. Customer Responsibilities

Customer will:

• register only those data sources and model deployments that Customer intends the App to access;

• ensure that Customer has the lawful right to process the relevant data;

• test downstream models for fitness, bias, fairness, and explainability where required;

• avoid downstream retention or indexing practices that would make latent artifacts joinable to identity-linked records;

• implement and maintain customer-side controls for access, logging, network segmentation, encryption, and model governance;

• ensure that use of the App complies with applicable law, including data protection, export-control, sanctions, and AI-regulatory obligations applicable to Customer’s use case.

9. Marketplace and Billing Context

Charges for the App are determined by the applicable marketplace listing and marketplace billing controls. Underlying platform charges for compute, storage, containers, registries, networking, or other infrastructure remain separate from App charges.

10. Contact

For questions about the App’s ML/AI functionality, contact [email protected]. For legal notices, contact [email protected].

This disclosure is intended to support transparency. It does not replace technical documentation, marketplace terms, customer security reviews, or customer validation of downstream ML use cases.

Copyright 2026 Integrated Quantum Technologies Inc. All Rights Reserved.