Overview
The VARL API provides programmatic access to our biological intelligence platform. It enables developers, researchers, and institutions to integrate digital twin simulations, molecular pathway analysis, biomarker detection, and predictive modeling into their own workflows — all through a single, unified RESTful interface.
Every endpoint follows a consistent design philosophy: submit biological context, receive structured intelligence. Whether you are building a clinical decision support tool, automating drug screening pipelines, or constructing real-time patient monitoring dashboards, the VARL API abstracts the complexity of computational biology into clean, composable operations.
Base URL
All API requests are made to the following base URL. Every endpoint requires HTTPS; plain HTTP requests are rejected.
https://api.varl.net/v1
Architecture
The API is organized around five primary resource groups that mirror VARL's scientific workflow. Each group encapsulates a distinct phase of the biological intelligence pipeline, from data ingestion to actionable prediction.
Digital Twins
Virtual representations of biological systems — from individual cells and protein networks to complete organ models. Create twins from genomic profiles, configure environmental parameters, attach real-time patient data, and query system state at any resolution. Twins persist across sessions and evolve as new data is integrated.
Simulations
Run computational experiments on digital twins. Introduce drug candidates, model pathway disruptions, simulate genetic mutations, and observe cascading effects across biological subsystems. Simulations execute in parallel and return time-series data with configurable granularity. Batch operations support running thousands of scenarios simultaneously.
Biomarkers
Detect, track, and analyze molecular biomarkers across patient cohorts or simulation outputs. The biomarker engine identifies statistically significant markers from multi-omics data, correlates them with disease states, and provides confidence-scored recommendations for diagnostic and therapeutic targets.
Predictions
AI-powered forecasting endpoints built on VARL's proprietary biological language models. Submit patient data, molecular profiles, or simulation snapshots and receive predictions about disease trajectories, treatment efficacy, adverse event probability, biomarker evolution, and system-level outcomes. Every prediction includes confidence intervals and explainability metadata.
Datasets
Access curated biological datasets spanning genomics, proteomics, metabolomics, and clinical trial records. Upload proprietary data for secure analysis, or query VARL's reference library of over 2.4 million annotated molecular interactions. Datasets support streaming for large-scale operations and are versioned for reproducibility.
Request Format
The API accepts JSON-encoded request bodies and returns JSON-encoded responses. All timestamps are in ISO 8601 format. Pagination follows cursor-based patterns for consistent performance across large result sets. Every response includes a request_id field for debugging and audit purposes.
// Create a digital twin from a genomic profile
{
  "organism": "homo_sapiens",
  "system": "cardiovascular",
  "resolution": "cellular",
  "source_data": {
    "genomic_profile": "ds_gp_82kf9n",
    "clinical_history": "ds_ch_4m2j7p"
  },
  "config": {
    "time_horizon": "365d",
    "update_frequency": "real_time",
    "fidelity": "high"
  }
}
{
"id": "twn_8f3k2n9m",
"object": "digital_twin",
"status": "initializing",
"organism": "homo_sapiens",
"system": "cardiovascular",
"resolution": "cellular",
"node_count": 847293,
"edge_count": 2341876,
"created_at": "2026-02-14T09:32:11Z",
"ready_at": null,
"request_id": "req_v4rl_7k2m9n"
}
SDKs & Libraries
Official client libraries for Python, TypeScript, and R are under active development. They will handle authentication, request signing, automatic retries, and response parsing.
varl-sdk (Python)
Coming soon
@varl/sdk (TypeScript)
Coming soon
varl (R)
Coming soon
Versioning
The API uses date-based versioning. The current version is 2026-02-01. When breaking changes are introduced, a new version date is published and the previous version remains available for 12 months. You can pin your integration to a specific version by including the VARL-Version header in your requests.
Status
Current API uptime is 99.97%. System status, incident reports, and scheduled maintenance windows are published at status.varl.net. Subscribe to receive real-time notifications via email or webhook.
Authentication
The VARL API uses API keys to authenticate requests. Every request must include a valid key in the Authorization header. Keys are scoped to organizations and carry specific permissions that determine which resources and operations are accessible.
API keys are sensitive credentials. Do not expose them in client-side code, public repositories, or log files. If a key is compromised, revoke it immediately from your dashboard and generate a new one. All key rotation events are logged and auditable.
Obtaining API Keys
API keys are generated from the VARL Dashboard under Settings → API Keys. Each organization can create up to 50 active keys. When creating a key, you must assign it a name, select its permission scope, and optionally restrict it to specific IP ranges or environments.
Keys are displayed only once at the time of creation. Store them securely in environment variables or a secrets manager. VARL does not store plaintext keys — only a cryptographic hash is retained on our servers.
Making Authenticated Requests
Include your API key in the Authorization header using the Bearer scheme. This is the only supported authentication method. Query parameter authentication is not supported for security reasons.
Authorization: Bearer varl_sk_live_4f8k2m9n7j3p1x...
curl -X GET https://api.varl.net/v1/twins \
  -H "Authorization: Bearer varl_sk_live_4f8k2m9n7j3p1x" \
  -H "Content-Type: application/json" \
  -H "VARL-Version: 2026-02-01"
from varl import Client

# The SDK reads VARL_API_KEY from environment by default
client = Client()

# Or pass it explicitly
client = Client(api_key="varl_sk_live_4f8k2m9n7j3p1x")
Key Types
VARL issues two types of API keys. Each serves a distinct purpose and carries different security implications. Using the wrong key type in production is a common source of integration issues.
Live Keys
Used in production environments. Live keys have access to real biological data, execute actual simulations on VARL's compute infrastructure, and consume your organization's quota. All operations performed with live keys are logged, billed, and subject to rate limits. Results from live key operations are persisted and available for downstream analysis.
Test Keys
Used in development and staging environments. Test keys operate against a sandboxed copy of the API that returns synthetic data. Simulations complete instantly with deterministic outputs. No quota is consumed and no data is persisted. Test keys are ideal for building integrations, running CI/CD pipelines, and validating request formats without incurring costs.
Permission Scopes
Each API key is assigned one or more permission scopes that control which endpoints it can access. Scopes follow a resource-based model with granular read/write separation. Apply the principle of least privilege — assign only the scopes required for each key's intended use case.
| Scope | Access | Description |
|---|---|---|
| twins:read | Read | List and retrieve digital twins, query twin state and metadata |
| twins:write | Write | Create, update, configure, and delete digital twins |
| simulations:run | Execute | Start simulations, submit batch jobs, cancel running operations |
| simulations:read | Read | Retrieve simulation results, status, and time-series outputs |
| biomarkers:read | Read | Query biomarker databases, retrieve detection results |
| predictions:run | Execute | Submit prediction requests, access forecasting models |
| datasets:read | Read | Access curated datasets and reference libraries |
| datasets:write | Write | Upload proprietary data, create custom dataset versions |
IP Allowlisting
For enhanced security, API keys can be restricted to specific IP addresses or CIDR ranges. When IP allowlisting is enabled, requests originating from unlisted addresses will receive a 403 Forbidden response regardless of key validity. Configure allowlists from the dashboard or via the management API.
{
  "ip_allowlist": [
    "203.0.113.0/24",
    "198.51.100.42"
  ],
  "enforce_allowlist": true
}
Key Rotation
VARL recommends rotating API keys every 90 days as a security best practice. The rotation process is designed for zero-downtime transitions: create a new key, update your application to use it, verify successful requests, then revoke the old key. During the transition period both keys remain active.
Automated rotation is available through the management API. You can also configure expiration dates at key creation time — expired keys are automatically disabled and cannot be reactivated. Rotation events, including the originating IP and user agent, are recorded in your organization's audit log.
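The create, verify, then revoke ordering described above is easy to get wrong under pressure. A minimal sketch of that sequence, with the management-API calls abstracted as callables since their endpoints are not documented in this section:

```python
# Hedged sketch of zero-downtime key rotation. The three callables stand
# in for management-API operations; only the transition ordering is real.

def rotate_key(old_key, create_key, verify_key, revoke_key):
    """Create a replacement key, verify it authenticates, then revoke the old one.

    Both keys remain active during the transition; the old key is revoked
    only after the new one has served a successful request.
    """
    new_key = create_key()                 # 1. create the replacement key
    if not verify_key(new_key):            # 2. confirm the new key works
        raise RuntimeError("new key failed verification; old key left active")
    revoke_key(old_key)                    # 3. retire the old key last
    return new_key
```

If verification fails, the old key is deliberately left active so production traffic is never interrupted mid-rotation.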
Authentication Errors
When authentication fails, the API returns one of the following error responses. All error responses include a machine-readable error.code field for programmatic handling.
authentication_required
No API key was provided. Include the Authorization header with a valid Bearer token.
invalid_api_key
The provided API key does not match any active key. Verify the key is correct and has not been revoked.
insufficient_scope
The API key is valid but lacks the required permission scope for this endpoint. Update the key's scopes from the dashboard.
ip_not_allowed
The request originates from an IP address not in the key's allowlist. Add the IP to the allowlist or disable IP restriction.
key_expired
The API key has passed its expiration date. Generate a new key from the dashboard. Expired keys cannot be reactivated.
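For programmatic handling, the error codes above can be dispatched on directly. A minimal sketch; the `{"error": {"code": ...}}` envelope is assumed from the description of the machine-readable error.code field:

```python
# Map each documented authentication error code to a remediation hint.
ACTIONS = {
    "authentication_required": "Add the Authorization header with a Bearer token.",
    "invalid_api_key": "Verify the key is correct and has not been revoked.",
    "insufficient_scope": "Grant the required scope from the dashboard.",
    "ip_not_allowed": "Add the source IP to the key's allowlist.",
    "key_expired": "Generate a new key; expired keys cannot be reactivated.",
}

def explain_auth_error(body: dict) -> str:
    """Return a remediation hint for a failed-authentication response body."""
    code = body.get("error", {}).get("code", "")
    return ACTIONS.get(code, f"Unrecognized auth error: {code!r}")
```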
Security Recommendations
Follow these practices to maintain the security of your VARL API integration:
- Store API keys in environment variables or a dedicated secrets manager — never hardcode them in source files.
- Use separate keys for development, staging, and production environments with appropriately scoped permissions.
- Enable IP allowlisting for production keys to restrict access to known infrastructure.
- Rotate keys every 90 days. Set expiration dates on keys that are intended for temporary use.
- Monitor the audit log for unexpected key usage patterns — geographic anomalies, unusual request volumes, or access outside business hours.
- Revoke compromised keys immediately. VARL invalidates revoked keys within 30 seconds globally.
Digital Twins
Digital twins are the foundational abstraction in the VARL platform. A digital twin is a high-fidelity computational replica of a biological system — a cell, a tissue, an organ, or an entire organism — constructed from real-world data and maintained as a living, queryable object. Twins ingest genomic profiles, proteomic signatures, clinical histories, and environmental parameters to produce a model that behaves as its biological counterpart would under identical conditions.
Unlike static snapshots, VARL digital twins are dynamic entities. They evolve over time as new data is fed into them, recalibrate their internal state in response to interventions, and maintain a full audit trail of every mutation. This makes them suitable for longitudinal patient monitoring, iterative drug design, and real-time clinical decision support.
Every twin is defined by three layers: a structural layer that maps the topology of biological components and their connections, a functional layer that encodes the kinetic and thermodynamic rules governing interactions, and a data layer that binds the model to patient-specific or population-level measurements. Together, these layers produce a system that can be interrogated, perturbed, and observed with the same rigor as a physical experiment.
Create a Digital Twin
Creating a twin initializes a new biological model from source data. The creation process involves three phases: data validation, graph construction, and calibration. Depending on the resolution and system complexity, initialization can take between 2 seconds and 15 minutes. The twin object is returned immediately with a status field that transitions from initializing to ready when calibration completes.
{
  "organism": "homo_sapiens",
  "system": "immune",
  "resolution": "molecular",
  "name": "Patient-0042 Immune Model",
  "source_data": {
    "genomic_profile": "ds_gp_82kf9n",
    "proteomic_data": "ds_pd_3m7k1x",
    "clinical_history": "ds_ch_4m2j7p",
    "microbiome_snapshot": "ds_mb_9f2n4k"
  },
  "config": {
    "time_horizon": "730d",
    "update_frequency": "real_time",
    "fidelity": "high",
    "stochastic_noise": true,
    "auto_calibrate": true
  }
}
Request Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| organism | string | Yes | Target organism. Supported: homo_sapiens, mus_musculus, rattus_norvegicus, danio_rerio, caenorhabditis_elegans |
| system | string | Yes | Biological system to model. Options include immune, cardiovascular, nervous, endocrine, respiratory, digestive, hepatic, renal, musculoskeletal, or whole_body |
| resolution | string | Yes | Model granularity. molecular (atomic-level interactions), cellular (cell-level dynamics), tissue (tissue-level aggregation), organ (organ-level abstraction). Higher resolution increases compute cost and initialization time. |
| name | string | No | Human-readable label for this twin. Maximum 256 characters. Defaults to an auto-generated identifier. |
| source_data | object | No | Dataset references to seed the twin. Accepts IDs from the Datasets API. If omitted, a generic population-average model is created. |
| config.time_horizon | string | No | Maximum simulation window. Format: Nd for days. Default 365d. Maximum 3650d. |
| config.fidelity | string | No | Computation precision. low (fast, approximate), medium (balanced), high (maximum accuracy, slower). Default medium. |
| config.stochastic_noise | boolean | No | Enable biological noise modeling. When true, simulations incorporate stochastic variation that mimics real-world biological variability. Default false. |
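The enumerated values in the table above can be validated client-side before submission. A hedged stdlib sketch (the helper names are illustrative, not part of any official SDK):

```python
import json
import os
import urllib.request

# Documented enums from the parameter table above.
ORGANISMS = {"homo_sapiens", "mus_musculus", "rattus_norvegicus",
             "danio_rerio", "caenorhabditis_elegans"}
RESOLUTIONS = {"molecular", "cellular", "tissue", "organ"}

def build_twin_request(organism: str, system: str, resolution: str, **extra) -> dict:
    """Assemble and sanity-check a create-twin body before sending it."""
    if organism not in ORGANISMS:
        raise ValueError(f"unsupported organism: {organism!r}")
    if resolution not in RESOLUTIONS:
        raise ValueError(f"unsupported resolution: {resolution!r}")
    return {"organism": organism, "system": system, "resolution": resolution, **extra}

def create_twin(payload: dict) -> dict:
    """POST /v1/twins; returns the twin object with status 'initializing'."""
    req = urllib.request.Request(
        "https://api.varl.net/v1/twins",
        data=json.dumps(payload).encode(),
        headers={"Authorization": f"Bearer {os.environ['VARL_API_KEY']}",
                 "Content-Type": "application/json",
                 "VARL-Version": "2026-02-01"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.load(resp)
```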
Response
{
"id": "twn_8f3k2n9m",
"object": "digital_twin",
"name": "Patient-0042 Immune Model",
"status": "initializing",
"organism": "homo_sapiens",
"system": "immune",
"resolution": "molecular",
"node_count": 1247839,
"edge_count": 4892156,
"layers": {
"structural": { "status": "complete", "nodes": 412893 },
"functional": { "status": "calibrating", "rules": 89247 },
"data": { "status": "binding", "sources": 4 }
},
"config": {
"time_horizon": "730d",
"update_frequency": "real_time",
"fidelity": "high",
"stochastic_noise": true,
"auto_calibrate": true
},
"metadata": {
"compute_estimate_ms": 47200,
"memory_footprint_mb": 2340,
"version": "2026-02-01"
},
"created_at": "2026-02-14T09:32:11Z",
"ready_at": null,
"request_id": "req_v4rl_7k2m9n"
}
Retrieve a Digital Twin
Fetch the current state of a twin, including its calibration status, node/edge counts, layer health, and the most recent snapshot timestamp. This endpoint is idempotent and safe for polling during initialization.
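Because this endpoint is safe to poll, a simple wait loop can bridge the initializing-to-ready transition. A sketch in which the `fetch` callable abstracts the GET request so the loop itself stays testable (names are illustrative):

```python
import time

def wait_until_ready(fetch, interval: float = 5.0, max_polls: int = 180) -> dict:
    """Poll fetch() until the twin leaves 'initializing'.

    fetch is any zero-argument callable returning the twin object, e.g. a
    wrapper around GET /v1/twins/{id}. Raises TimeoutError if the twin
    never settles within max_polls attempts.
    """
    for _ in range(max_polls):
        twin = fetch()
        if twin["status"] != "initializing":
            return twin
        time.sleep(interval)
    raise TimeoutError("twin is still initializing after max_polls attempts")
```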
{
"id": "twn_8f3k2n9m",
"object": "digital_twin",
"name": "Patient-0042 Immune Model",
"status": "ready",
"organism": "homo_sapiens",
"system": "immune",
"resolution": "molecular",
"node_count": 1247839,
"edge_count": 4892156,
"layers": {
"structural": { "status": "complete", "nodes": 412893 },
"functional": { "status": "complete", "rules": 89247 },
"data": { "status": "complete", "sources": 4 }
},
"health": {
"drift_score": 0.003,
"last_calibration": "2026-02-14T09:34:42Z",
"data_freshness": "2026-02-14T09:30:00Z"
},
"simulations_run": 0,
"snapshots": 1,
"created_at": "2026-02-14T09:32:11Z",
"ready_at": "2026-02-14T09:34:42Z",
"updated_at": "2026-02-14T09:34:42Z"
}
List Digital Twins
Returns a paginated list of all twins in your organization. Results are ordered by creation date (newest first). Use cursor-based pagination for consistent results across large collections. Supports filtering by organism, system, status, and creation date range.
Query Parameters
| Parameter | Type | Description |
|---|---|---|
| organism | string | Filter by organism type |
| system | string | Filter by biological system |
| status | string | Filter by status: initializing, ready, degraded, archived |
| limit | integer | Number of results per page. Default 20, maximum 100. |
| cursor | string | Pagination cursor from a previous response's next_cursor field |
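The cursor loop can be wrapped in a generator so callers never handle pagination directly. A sketch; the `data` field name for page items is an assumption, since only `next_cursor` is documented here:

```python
def iter_all(fetch_page):
    """Yield items across every page, following next_cursor until exhausted.

    fetch_page(cursor) returns one page dict. The "data" key holding the
    page's items is an assumed field name.
    """
    cursor = None
    while True:
        page = fetch_page(cursor)
        yield from page.get("data", [])
        cursor = page.get("next_cursor")
        if not cursor:
            break
```

A caller might pass `lambda c: get_twins(cursor=c, status="ready")`, where `get_twins` is a hypothetical wrapper around GET /v1/twins.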
Update a Digital Twin
Modify a twin's configuration, attach new data sources, or rename it. Structural parameters (organism, system, resolution) are immutable after creation — to change them, create a new twin. Updating data sources triggers an automatic recalibration cycle.
{
  "name": "Patient-0042 Immune Model v2",
  "source_data": {
    "metabolomic_panel": "ds_mt_7n3k9f"
  },
  "config": {
    "fidelity": "high",
    "stochastic_noise": true
  }
}
Delete a Digital Twin
Permanently deletes a twin and all associated data, including snapshots, simulation history, and cached predictions. This action is irreversible. Active simulations running against this twin will be terminated. For non-destructive removal, use the archive endpoint instead.
{
"id": "twn_8f3k2n9m",
"object": "digital_twin",
"deleted": true
}
Query Twin State
Inspect the internal state of a twin at any point in its timeline. State queries allow you to examine specific nodes (genes, proteins, metabolites), edges (interactions, pathways), or subgraphs (functional modules) without running a full simulation. This is useful for debugging, data validation, and building monitoring dashboards.
{
  "query_type": "node_state",
  "targets": ["TP53", "BRCA1", "EGFR"],
  "timestamp": "2026-02-14T09:34:42Z",
  "include_neighbors": true,
  "depth": 2
}
{
"twin_id": "twn_8f3k2n9m",
"query_type": "node_state",
"timestamp": "2026-02-14T09:34:42Z",
"results": [
{
"node": "TP53",
"type": "tumor_suppressor",
"expression_level": 0.72,
"activity_state": "active",
"phosphorylation": { "S15": true, "S20": false },
"neighbors": ["MDM2", "ATM", "CDKN1A", "BAX"]
},
{
"node": "BRCA1",
"type": "dna_repair",
"expression_level": 0.89,
"activity_state": "active",
"complex_membership": ["BRCA1-BARD1", "BASC"],
"neighbors": ["BARD1", "RAD51", "PALB2", "ATM"]
},
{
"node": "EGFR",
"type": "receptor_tyrosine_kinase",
"expression_level": 0.34,
"activity_state": "basal",
"ligand_bound": false,
"neighbors": ["GRB2", "SOS1", "ERBB2", "SHC1"]
}
],
"subgraph": {
"total_nodes": 47,
"total_edges": 128,
"depth_explored": 2
}
}
Snapshots
Snapshots capture the complete state of a digital twin at a specific moment. They serve as checkpoints that can be restored, compared, or used as starting points for simulations. VARL automatically creates snapshots after initialization and after each simulation. You can also create manual snapshots at any time.
Snapshots are immutable once created. They include the full node/edge state, all configuration parameters, and references to the source data that was active at the time of capture. Comparing two snapshots reveals exactly what changed between them — useful for tracking disease progression or measuring intervention impact.
{
  "label": "pre-treatment-baseline",
  "description": "Baseline state before chemotherapy simulation"
}
Compare Snapshots
Diff two snapshots to identify changes in node expression, edge weights, pathway activity, and system-level metrics. The comparison engine uses a hierarchical diffing algorithm that reports changes at the level of individual molecules, functional modules, and whole-system behavior.
{
  "snapshot_a": "snap_2k4m8n",
  "snapshot_b": "snap_7j3p1x",
  "granularity": "pathway",
  "significance_threshold": 0.05
}
Twin Lifecycle
A digital twin passes through several states during its lifecycle. Understanding these states is important for building robust integrations that handle asynchronous operations correctly.
initializing
The twin is being constructed. Source data is validated, the biological graph is assembled, and calibration is in progress. Queries and simulations are not available in this state.
ready
Calibration is complete and the twin is fully operational. All endpoints are available. The twin will remain in this state as long as it receives regular data updates and passes health checks.
recalibrating
New data has been attached and the twin is updating its internal state. Read queries remain available but may return stale data. New simulations are queued until recalibration completes.
degraded
The twin's drift score exceeds acceptable thresholds, indicating that its model has diverged significantly from observed biological reality. Simulations may return unreliable results. Supply fresh data or trigger a manual recalibration.
archived
The twin has been soft-deleted. It is excluded from list results and cannot run simulations, but its data and snapshots are retained for 90 days. Archived twins can be restored to active status.
Webhooks
Register webhooks to receive real-time notifications about twin lifecycle events. Supported events include twin.ready, twin.degraded, twin.recalibrating, snapshot.created, and twin.deleted. Webhook payloads include the full twin object at the time of the event.
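A minimal receiver for these events might look like the following stdlib sketch. The event envelope (a `type` field alongside the twin object in `data`) is an assumption; only the event names and the "full twin object" payload are documented here.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

LIFECYCLE_EVENTS = {"twin.ready", "twin.degraded", "twin.recalibrating",
                    "snapshot.created", "twin.deleted"}

def dispatch(event: dict) -> str:
    """Route a webhook event; returns a short description of what was handled."""
    kind = event.get("type", "")
    if kind not in LIFECYCLE_EVENTS:
        return f"ignored: {kind}"
    twin_id = event.get("data", {}).get("id", "?")
    return f"{kind} for {twin_id}"

class WebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        length = int(self.headers.get("Content-Length", 0))
        event = json.loads(self.rfile.read(length) or b"{}")
        print(dispatch(event))
        self.send_response(200)  # acknowledge promptly; do real work asynchronously
        self.end_headers()

# To run: HTTPServer(("", 8080), WebhookHandler).serve_forever()
```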
Simulations
Simulations are computational experiments executed against digital twins. They allow you to introduce perturbations — drug candidates, genetic mutations, environmental changes, pathway disruptions — and observe how the biological system responds over time. Every simulation produces a high-resolution time-series of molecular, cellular, and system-level changes.
Simulations run asynchronously on VARL's distributed compute infrastructure. A typical simulation completes in 30 seconds to 20 minutes depending on twin complexity, simulation type, and time horizon. Batch endpoints allow you to submit thousands of scenarios simultaneously — each executing in parallel and returning results independently.
Every simulation is fully reproducible. Given the same twin state, parameters, and random seed, the system produces identical outputs. This is critical for regulatory compliance, peer review, and iterative research workflows where results must be independently verifiable.
Run a Simulation
Submit a simulation against an existing digital twin. The twin must be in ready state. The simulation object is returned immediately with status queued, then transitions through running and completed. Use webhooks or polling to track progress.
{
  "twin_id": "twn_8f3k2n9m",
  "type": "drug_response",
  "name": "Pembrolizumab response — Patient 0042",
  "intervention": {
    "compound": "pembrolizumab",
    "target": "PD-1",
    "mechanism": "checkpoint_inhibition",
    "dosage_mg": 200,
    "frequency": "every_21d",
    "duration": "180d"
  },
  "config": {
    "time_steps": 1000,
    "observe": ["PD-1", "CD8_T_cells", "tumor_burden", "IFN_gamma"],
    "snapshot_on_complete": true,
    "seed": 42
  }
}
Request Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| twin_id | string | Yes | ID of the digital twin to simulate against. Must be in ready state. |
| type | string | Yes | Simulation type: drug_response, pathway_disruption, genetic_mutation, treatment_protocol, environmental_stress |
| intervention | object | Yes | Perturbation parameters. Structure depends on simulation type. See Simulation Types below. |
| config.time_steps | integer | No | Number of discrete time steps in the output series. Default 500. Maximum 10,000. Higher values increase resolution and compute time. |
| config.observe | string[] | No | List of node IDs or system metrics to track. If omitted, all nodes in the perturbation neighborhood are observed. |
| config.snapshot_on_complete | boolean | No | Automatically create a twin snapshot when the simulation completes. Default true. |
| config.seed | integer | No | Random seed for reproducibility. Same seed + same twin state = identical results. If omitted, a random seed is generated. |
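Since simulations run asynchronously, a submit-then-poll helper covers the common case. A sketch in which the `submit` and `fetch` callables abstract the POST and the GET-by-id request (names are illustrative):

```python
import time

# Terminal states from the simulation lifecycle.
TERMINAL = {"completed", "failed", "cancelled"}

def run_and_wait(submit, fetch, interval: float = 5.0, max_polls: int = 240) -> dict:
    """Submit a simulation and poll until it reaches a terminal state.

    submit() POSTs the request and returns the simulation object;
    fetch(sim_id) re-fetches it. Raises TimeoutError if no terminal
    state is reached within max_polls attempts.
    """
    sim = submit()
    for _ in range(max_polls):
        if sim["status"] in TERMINAL:
            return sim
        time.sleep(interval)
        sim = fetch(sim["id"])
    raise TimeoutError(f"simulation {sim['id']} did not finish in time")
```

Callers should still branch on the final status: a returned object may be `failed` or `cancelled`, not only `completed`.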
Response
{
"id": "sim_4k7m2n9f",
"object": "simulation",
"twin_id": "twn_8f3k2n9m",
"type": "drug_response",
"name": "Pembrolizumab response — Patient 0042",
"status": "queued",
"progress": 0,
"intervention": {
"compound": "pembrolizumab",
"target": "PD-1",
"mechanism": "checkpoint_inhibition",
"dosage_mg": 200,
"frequency": "every_21d",
"duration": "180d"
},
"config": {
"time_steps": 1000,
"observe": ["PD-1", "CD8_T_cells", "tumor_burden", "IFN_gamma"],
"snapshot_on_complete": true,
"seed": 42
},
"compute": {
"estimated_duration_ms": 187000,
"gpu_allocation": "A100x4",
"priority": "standard"
},
"created_at": "2026-02-14T10:15:33Z",
"started_at": null,
"completed_at": null,
"request_id": "req_v4rl_9m2k7n"
}
Simulation Types
Each simulation type accepts a different intervention schema optimized for its domain. The type determines which perturbation model is applied, how cascading effects are computed, and what output metrics are generated.
Drug Response
Simulate how a compound interacts with its molecular target and propagates through downstream signaling pathways. Supports single compounds, combination therapies, and dose-response curves. Output includes binding kinetics, pathway activation changes, off-target effects, and predicted efficacy scores with confidence intervals.
Pathway Disruption
Knock out, overexpress, or modulate specific nodes in a signaling pathway and observe cascading effects across the biological network. Useful for identifying critical control points, validating therapeutic hypotheses, and understanding disease mechanisms at a systems level.
Genetic Mutation
Introduce point mutations, insertions, deletions, or copy number variations and simulate their phenotypic consequences over time. The engine models protein folding changes, expression level shifts, and downstream pathway rewiring. Critical for variant interpretation and gene therapy design.
Treatment Protocol
Simulate multi-step clinical protocols including sequential drug administration, dosage adjustments, and combination regimens. Models pharmacokinetics, drug-drug interactions, resistance development, and treatment scheduling optimization across the full protocol duration.
Environmental Stress
Apply environmental perturbations — temperature, pH, oxidative stress, nutrient deprivation, toxin exposure — to observe biological system responses. Primarily used in agricultural and toxicology applications. Models stress response cascades, adaptive mechanisms, and system failure thresholds.
Retrieve Simulation Results
Once a simulation reaches completed status, retrieve its full results including time-series data, summary statistics, and system-level metrics. Results are cached for 90 days after completion.
{
"simulation_id": "sim_4k7m2n9f",
"status": "completed",
"summary": {
"efficacy_score": 0.847,
"confidence": 0.93,
"adverse_events_predicted": 2,
"resistance_probability": 0.12,
"optimal_dosage_mg": 185
},
"time_series": {
"time_points": 1000,
"duration": "180d",
"channels": {
"tumor_burden": {
"initial": 1.0,
"final": 0.23,
"min": 0.19,
"trend": "decreasing",
"data": [1.0, 0.98, 0.95, "..."]
},
"CD8_T_cells": {
"initial": 0.34,
"final": 0.89,
"max": 0.94,
"trend": "increasing",
"data": [0.34, 0.36, 0.41, "..."]
},
"IFN_gamma": {
"initial": 0.12,
"final": 0.67,
"peak_at_step": 412,
"trend": "increasing",
"data": [0.12, 0.14, 0.18, "..."]
}
}
},
"adverse_events": [
{
"type": "hepatotoxicity",
"probability": 0.08,
"severity": "grade_1",
"onset_day": 42
},
{
"type": "dermatitis",
"probability": 0.15,
"severity": "grade_2",
"onset_day": 28
}
],
"snapshot_id": "snap_9f2k4m",
"compute_time_ms": 174200,
"completed_at": "2026-02-14T10:18:27Z"
}
Batch Simulations
Submit multiple simulations in a single request. Batch operations execute in parallel across VARL's compute cluster. This is the recommended approach for drug screening, dose-response analysis, and combinatorial testing where hundreds or thousands of scenarios need to be evaluated simultaneously.
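Variation lists are usually generated programmatically rather than written by hand. A sketch that builds a dose sweep for the batch endpoint (the function name is illustrative):

```python
def dose_sweep(twin_id: str, compound: str, doses_mg, time_steps: int = 500) -> dict:
    """Build a batch-simulation request body sweeping dosage for one compound."""
    return {
        "twin_id": twin_id,
        "type": "drug_response",
        "variations": [{"compound": compound, "dosage_mg": d} for d in doses_mg],
        "config": {"time_steps": time_steps},
    }
```

The returned dict matches the request shape shown in the example below and can be POSTed directly as the batch body.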
{
  "twin_id": "twn_8f3k2n9m",
  "type": "drug_response",
  "variations": [
    { "compound": "pembrolizumab", "dosage_mg": 100 },
    { "compound": "pembrolizumab", "dosage_mg": 200 },
    { "compound": "pembrolizumab", "dosage_mg": 400 },
    { "compound": "nivolumab", "dosage_mg": 240 }
  ],
  "config": { "time_steps": 500 }
}
{
"id": "batch_7n3k9f2m",
"object": "simulation_batch",
"total": 4,
"status": "running",
"simulations": [
{ "id": "sim_a1b2c3", "variation_index": 0, "status": "running" },
{ "id": "sim_d4e5f6", "variation_index": 1, "status": "running" },
{ "id": "sim_g7h8i9", "variation_index": 2, "status": "queued" },
{ "id": "sim_j1k2l3", "variation_index": 3, "status": "queued" }
],
"created_at": "2026-02-14T10:20:00Z"
}
Cancel a Simulation
Terminate a running or queued simulation. Cancelled simulations release their compute allocation immediately. Partial results from cancelled simulations are discarded and cannot be retrieved. Only simulations in queued or running state can be cancelled.
{
"id": "sim_4k7m2n9f",
"object": "simulation",
"status": "cancelled",
"cancelled_at": "2026-02-14T10:16:45Z"
}
Simulation Lifecycle
Simulations transition through a deterministic set of states. Use the status field to track progress programmatically, or register webhooks for real-time notifications.
queued
Simulation is accepted and waiting for compute resources. Queue time depends on cluster load. Typical wait: under 5 seconds.
running
Compute resources are allocated and the simulation is executing. The progress field updates every 5 seconds with a value between 0 and 100.
completed
Simulation finished successfully. Full results are available via the results endpoint. If snapshot_on_complete was enabled, a twin snapshot has been created.
failed
Simulation encountered an unrecoverable error. The error field contains a machine-readable error code and human-readable message. Common causes: twin degraded mid-simulation, invalid intervention parameters, compute timeout.
cancelled
Simulation was manually cancelled via the cancel endpoint. No results are available. Compute resources have been released.
Biomarkers
The Biomarkers API detects, quantifies, and tracks molecular indicators of disease, treatment response, and biological system health. It operates on multi-omics data — genomic, proteomic, metabolomic, and transcriptomic — to identify statistically significant markers that correlate with clinical outcomes.
Unlike traditional biomarker discovery pipelines that take months of manual analysis, the VARL engine screens thousands of candidate markers simultaneously, validates them against population-level reference data, and returns confidence-scored results ranked by clinical relevance. Every detected marker includes mechanistic context — not just what changed, but why it matters.
Biomarker endpoints integrate directly with Digital Twins and Simulations. You can detect markers from real patient data, track them across simulation time-series, or compare marker profiles between twin snapshots to measure intervention impact.
Detect Biomarkers
Submit multi-omics data and receive a ranked list of detected biomarkers with clinical significance scores, reference ranges, and mechanistic annotations. The detection engine applies ensemble machine learning models trained on VARL's proprietary biological knowledge graph spanning 2.4 million validated molecular interactions.
{
  "source": "dataset",
  "dataset_id": "ds_pd_3m7k1x",
  "omics_layers": ["proteomic", "metabolomic"],
  "condition": "non_small_cell_lung_cancer",
  "config": {
    "significance_threshold": 0.01,
    "max_results": 50,
    "include_mechanism": true
  }
}
Request Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| source | string | Yes | Data source type: dataset, twin, or simulation |
| dataset_id | string | Conditional | Required when source is dataset. Reference to an uploaded omics dataset. |
| twin_id | string | Conditional | Required when source is twin. Extracts markers from current twin state. |
| omics_layers | string[] | Yes | Which omics layers to analyze: genomic, transcriptomic, proteomic, metabolomic, epigenomic |
| condition | string | No | Disease or phenotype context for prioritizing markers. Uses VARL's disease ontology. |
| config.significance_threshold | float | No | P-value cutoff for statistical significance. Default 0.05. Recommended 0.01 for clinical applications. |
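The conditional rules above (dataset_id only with source dataset, twin_id only with source twin) are easy to enforce client-side before submitting. A minimal sketch under the assumption that you build the request body yourself; the helper name and validation logic are ours, not part of any VARL SDK:

```python
ALLOWED_LAYERS = {"genomic", "transcriptomic", "proteomic", "metabolomic", "epigenomic"}

def build_detection_request(source, omics_layers, dataset_id=None, twin_id=None,
                            condition=None, significance_threshold=0.05):
    """Assemble a biomarker-detection request body, enforcing the parameter table's rules."""
    if source not in {"dataset", "twin", "simulation"}:
        raise ValueError(f"unknown source: {source}")
    if source == "dataset" and dataset_id is None:
        raise ValueError("dataset_id is required when source is 'dataset'")
    if source == "twin" and twin_id is None:
        raise ValueError("twin_id is required when source is 'twin'")
    unknown = set(omics_layers) - ALLOWED_LAYERS
    if unknown:
        raise ValueError(f"unsupported omics layers: {sorted(unknown)}")
    body = {
        "source": source,
        "omics_layers": list(omics_layers),
        "config": {"significance_threshold": significance_threshold},
    }
    if dataset_id:
        body["dataset_id"] = dataset_id
    if twin_id:
        body["twin_id"] = twin_id
    if condition:
        body["condition"] = condition
    return body
```

Catching these errors locally gives you a clear exception instead of a round-trip ending in a 400 with error.param set.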
Response
{
"id": "bm_det_7k2m9n",
"object": "biomarker_detection",
"condition": "non_small_cell_lung_cancer",
"markers_detected": 23,
"markers": [
{
"id": "bm_001",
"name": "CEA",
"full_name": "Carcinoembryonic Antigen",
"type": "protein",
"omics_layer": "proteomic",
"value": 12.4,
"unit": "ng/mL",
"reference_range": { "low": 0, "high": 3.0 },
"fold_change": 4.13,
"p_value": 0.00012,
"clinical_significance": 0.94,
"direction": "elevated",
"mechanism": {
"pathway": "cell_adhesion_signaling",
"role": "Overexpressed in adenocarcinoma; promotes metastatic potential via integrin-mediated adhesion",
"downstream_effects": ["EMT_activation", "invasion_promotion"]
}
},
{
"id": "bm_002",
"name": "CYFRA 21-1",
"full_name": "Cytokeratin 19 Fragment",
"type": "protein",
"omics_layer": "proteomic",
"value": 8.7,
"unit": "ng/mL",
"reference_range": { "low": 0, "high": 3.3 },
"fold_change": 2.64,
"p_value": 0.00034,
"clinical_significance": 0.89,
"direction": "elevated",
"mechanism": {
"pathway": "epithelial_integrity",
"role": "Released during tumor cell apoptosis and necrosis; correlates with tumor volume",
"downstream_effects": ["cell_death_marker"]
}
}
],
"metadata": {
"models_used": ["varl-biomarker-v4", "ensemble-omics-v2"],
"compute_time_ms": 34200,
"reference_cohort_size": 47000
},
"request_id": "req_v4rl_3n7k9f"
}

Track Biomarkers Over Time
Monitor how biomarker levels evolve across longitudinal data points or simulation time-series. The tracking endpoint aligns measurements across time, normalizes for technical variation, and applies trend detection algorithms to identify clinically meaningful trajectories — early warning signals, treatment response patterns, and relapse indicators.
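Threshold alerts of this kind can be replicated client-side for sanity checks. A sketch of one plausible interpretation, where an alert fires each time the series crosses a threshold from below (the server-side rule is not specified here, so treat this as an illustration, not the engine's algorithm):

```python
def count_threshold_crossings(values, threshold):
    """Count upward crossings: previous value below the threshold, current at or above."""
    crossings = 0
    for prev, cur in zip(values, values[1:]):
        if prev < threshold <= cur:
            crossings += 1
    return crossings

# CEA series from the example response below, with warn=5.0 and critical=10.0
cea = [3.1, 3.4, 4.2, 5.1, 6.8, 8.2, 9.7, 11.3, 12.4, 10.1, 7.2, 4.8]
alerts = count_threshold_crossings(cea, 5.0) + count_threshold_crossings(cea, 10.0)
```

Under this interpretation the CEA series triggers one warn crossing and one critical crossing, consistent with the alerts_triggered count of 2 in the example response.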
{
  "twin_id": "twn_8f3k2n9m",
  "markers": ["CEA", "CYFRA_21_1", "PD_L1"],
  "time_range": { "from": "2025-08-01", "to": "2026-02-14" },
  "alert_thresholds": {
    "CEA": { "warn": 5.0, "critical": 10.0 }
  }
}
{
"twin_id": "twn_8f3k2n9m",
"tracking_period": { "from": "2025-08-01", "to": "2026-02-14" },
"markers": {
"CEA": {
"data_points": 12,
"values": [3.1, 3.4, 4.2, 5.1, 6.8, 8.2, 9.7, 11.3, 12.4, 10.1, 7.2, 4.8],
"trend": "rising_then_declining",
"peak": { "value": 12.4, "date": "2025-12-15" },
"current": 4.8,
"alerts_triggered": 2,
"status": "improving"
},
"CYFRA_21_1": {
"data_points": 12,
"values": [2.8, 3.1, 4.5, 5.9, 7.2, 8.7, 8.4, 6.1, 4.3, 3.5, 3.2, 3.0],
"trend": "normalizing",
"current": 3.0,
"status": "within_reference"
},
"PD_L1": {
"data_points": 12,
"values": [0.45, 0.48, 0.52, 0.61, 0.58, 0.55, 0.72, 0.81, 0.85, 0.88, 0.91, 0.89],
"trend": "increasing",
"current": 0.89,
"status": "elevated"
}
}
}

Compare Marker Profiles
Diff biomarker profiles between two snapshots, two patients, or pre/post intervention states. The comparison engine computes statistical significance for each marker shift and identifies coordinated changes across marker groups that indicate pathway-level effects.
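For quick pre/post checks you can approximate the comparison locally before calling the endpoint. A minimal sketch; the flat {marker: value} dicts and the log2 fold-change convention are our assumptions for illustration, not the API's response schema:

```python
import math

def diff_profiles(profile_a, profile_b):
    """Compare two {marker: value} profiles; return per-marker log2 fold change (b vs a)."""
    shared = profile_a.keys() & profile_b.keys()
    return {m: math.log2(profile_b[m] / profile_a[m]) for m in sorted(shared)}

# Values borrowed from the detection and tracking examples above
pre  = {"CEA": 12.4, "CYFRA_21_1": 8.7}
post = {"CEA": 4.8,  "CYFRA_21_1": 3.0}
changes = diff_profiles(pre, post)  # negative values indicate a decline after intervention
```

The server-side engine additionally scores each shift for statistical significance and groups coordinated changes by pathway, which a naive per-marker diff like this cannot do.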
{
  "profile_a": { "twin_id": "twn_8f3k2n9m", "snapshot": "snap_2k4m8n" },
  "profile_b": { "twin_id": "twn_8f3k2n9m", "snapshot": "snap_7j3p1x" },
  "omics_layers": ["proteomic", "metabolomic"]
}
Reference Database
Query VARL's curated biomarker reference database containing over 14,000 validated markers across 2,800 disease conditions. Each entry includes reference ranges stratified by age, sex, and ethnicity, validated assay methods, and clinical interpretation guidelines sourced from peer-reviewed literature.
{
"marker": "CEA",
"full_name": "Carcinoembryonic Antigen",
"gene": "CEACAM5",
"type": "glycoprotein",
"conditions": [
{
"name": "Non-Small Cell Lung Cancer",
"icd10": "C34",
"sensitivity": 0.68,
"specificity": 0.87,
"reference_ranges": {
"healthy_nonsmoker": { "low": 0, "high": 3.0, "unit": "ng/mL" },
"healthy_smoker": { "low": 0, "high": 5.0, "unit": "ng/mL" },
"stage_I_II": { "typical": "3-10", "unit": "ng/mL" },
"stage_III_IV": { "typical": "10-50+", "unit": "ng/mL" }
}
}
],
"publications": 847,
"last_updated": "2026-01-15"
}

Predictions
The Predictions API provides AI-powered forecasting built on VARL's proprietary biological language models. Submit patient profiles, molecular data, or digital twin snapshots and receive predictions about disease trajectories, treatment outcomes, adverse event risks, and optimal intervention strategies — each with confidence intervals, explainability metadata, and supporting evidence.
Predictions are not black-box outputs. Every forecast includes a full causal chain — the molecular features that drove the prediction, the pathways involved, the population-level evidence supporting it, and the conditions under which the prediction may not hold. This level of transparency is essential for clinical decision support, where physicians need to understand why an AI recommends something, not just what it recommends.
Prediction models are continuously retrained on VARL's growing dataset of validated outcomes. Model versions are tracked, and you can pin your integration to a specific model version for reproducibility or opt into automatic updates for maximum accuracy.
Disease Trajectory
Predict how a disease will progress over a defined time horizon given the patient's current molecular state. The model projects biomarker evolution, symptom onset probability, stage transitions, and critical decision points where intervention would have maximum impact.
{
  "twin_id": "twn_8f3k2n9m",
  "condition": "type_2_diabetes",
  "horizon": "5y",
  "include_interventions": true
}
{
"id": "pred_9f2k4m7n",
"object": "prediction",
"type": "disease_trajectory",
"condition": "type_2_diabetes",
"horizon": "5y",
"confidence": 0.91,
"trajectory": {
"current_stage": "prediabetes",
"progression_risk": 0.73,
"milestones": [
{
"event": "hba1c_exceeds_6.5",
"probability": 0.68,
"estimated_onset": "8-14 months",
"confidence": 0.87
},
{
"event": "insulin_resistance_severe",
"probability": 0.52,
"estimated_onset": "18-30 months",
"confidence": 0.79
},
{
"event": "retinopathy_onset",
"probability": 0.23,
"estimated_onset": "36-60 months",
"confidence": 0.71
}
]
},
"recommended_interventions": [
{
"type": "lifestyle",
"description": "Caloric restriction + 150min/week moderate exercise",
"impact": "Reduces progression risk by 58%",
"confidence": 0.94
},
{
"type": "pharmacological",
"compound": "metformin",
"dosage": "500mg BID",
"impact": "Delays onset by 24-36 months",
"confidence": 0.88
}
],
"key_drivers": [
{ "feature": "HOMA-IR score", "weight": 0.31, "direction": "risk_increasing" },
{ "feature": "visceral adiposity", "weight": 0.22, "direction": "risk_increasing" },
{ "feature": "GLP-1 sensitivity", "weight": 0.18, "direction": "protective" }
],
"model": "varl-trajectory-v6.2",
"request_id": "req_v4rl_4m7k2n"
}

Treatment Efficacy
Predict how effectively a specific treatment will work for a given patient based on their molecular profile. The model compares the patient's biology against a reference database of treatment outcomes, identifies pharmacogenomic factors that influence response, and returns a probability distribution of expected outcomes.
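When compare_alternatives is set, a common client-side follow-up is selecting among the primary and the alternatives. A sketch under the assumption that you rank by efficacy_score subject to a minimum confidence floor; the selection policy here is ours, not something the API prescribes:

```python
def best_treatment(primary, alternatives, min_confidence=0.85):
    """Pick the highest-efficacy option whose confidence meets the floor."""
    candidates = [primary] + alternatives
    eligible = [c for c in candidates if c["confidence"] >= min_confidence]
    return max(eligible, key=lambda c: c["efficacy_score"]) if eligible else None

# Scores taken from the example response below
primary = {"compound": "trastuzumab", "efficacy_score": 0.82, "confidence": 0.90}
alternatives = [
    {"compound": "pertuzumab + trastuzumab", "efficacy_score": 0.91, "confidence": 0.88},
    {"compound": "T-DXd (trastuzumab deruxtecan)", "efficacy_score": 0.88, "confidence": 0.85},
]
choice = best_treatment(primary, alternatives)
```

Raising the confidence floor trades predicted efficacy for certainty: with min_confidence=0.89, only the primary regimen remains eligible.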
{
  "twin_id": "twn_8f3k2n9m",
  "treatment": {
    "compound": "trastuzumab",
    "indication": "HER2_positive_breast_cancer",
    "regimen": "standard_q3w"
  },
  "compare_alternatives": true
}
{
"id": "pred_eff_3k7n9f",
"object": "prediction",
"type": "treatment_efficacy",
"primary": {
"compound": "trastuzumab",
"efficacy_score": 0.82,
"response_probability": 0.78,
"partial_response": 0.15,
"no_response": 0.07,
"confidence": 0.90,
"pharmacogenomic_factors": [
{ "gene": "HER2", "status": "amplified", "impact": "strong_positive" },
{ "gene": "PIK3CA", "status": "wild_type", "impact": "neutral" },
{ "gene": "FCGR3A", "status": "V158F_heterozygous", "impact": "moderate_positive" }
],
"predicted_adverse_events": [
{ "event": "cardiotoxicity", "probability": 0.04, "severity": "grade_1" },
{ "event": "infusion_reaction", "probability": 0.12, "severity": "grade_1_2" }
]
},
"alternatives": [
{
"compound": "pertuzumab + trastuzumab",
"efficacy_score": 0.91,
"response_probability": 0.86,
"confidence": 0.88,
"advantage": "+9% efficacy, dual HER2 blockade"
},
{
"compound": "T-DXd (trastuzumab deruxtecan)",
"efficacy_score": 0.88,
"response_probability": 0.83,
"confidence": 0.85,
"advantage": "Effective in trastuzumab-resistant cases"
}
],
"model": "varl-efficacy-v5.1",
"reference_cohort": 12400,
"request_id": "req_v4rl_7n3k9f"
}

Adverse Event Risk
Predict the probability, severity, and timing of adverse events for a specific treatment-patient combination. The model evaluates pharmacogenomic markers, metabolic capacity, organ function baselines, and drug-drug interaction risks to produce a comprehensive safety profile before a single dose is administered.
{
  "twin_id": "twn_8f3k2n9m",
  "compounds": ["doxorubicin", "cyclophosphamide"],
  "duration": "120d"
}
{
"id": "pred_ae_8k2n4m",
"object": "prediction",
"type": "adverse_events",
"risk_summary": {
"overall_risk_score": 0.34,
"risk_level": "moderate"
},
"events": [
{
"event": "neutropenia",
"probability": 0.72,
"severity": "grade_3_4",
"onset_window": "7-14 days post-cycle",
"mechanism": "Myelosuppression via DNA intercalation",
"mitigation": "G-CSF prophylaxis recommended"
},
{
"event": "cardiotoxicity",
"probability": 0.18,
"severity": "grade_2",
"onset_window": "cumulative, after cycle 4",
"mechanism": "Doxorubicin-induced ROS damage to cardiomyocytes",
"mitigation": "LVEF monitoring every 2 cycles, dexrazoxane if LVEF <50%",
"pharmacogenomic_risk": {
"gene": "RARG",
"variant": "rs2229774",
"patient_status": "carrier",
"risk_increase": "3.2x"
}
},
{
"event": "nausea_vomiting",
"probability": 0.85,
"severity": "grade_2_3",
"onset_window": "0-72 hours post-infusion",
"mechanism": "5-HT3 receptor activation",
"mitigation": "Ondansetron + dexamethasone pre-medication"
}
],
"drug_interactions": [
{
"compounds": ["doxorubicin", "cyclophosphamide"],
"interaction": "synergistic_toxicity",
"affected_system": "hematopoietic",
"severity": "requires_monitoring"
}
],
"model": "varl-safety-v4.3",
"request_id": "req_v4rl_2m7k9n"
}

Prediction Models
List available prediction models, their versions, training data characteristics, and performance metrics. Use this endpoint to select the appropriate model for your use case or to audit which model version produced a specific prediction.
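Version pinning can be implemented by resolving the model list once and storing the chosen id. A sketch; the resolve_model helper is ours and assumes the response shape shown in the example below:

```python
def resolve_model(models, prediction_type, pinned_id=None):
    """Return the pinned model if it exists, else the current model for the type."""
    if pinned_id is not None:
        for m in models:
            if m["id"] == pinned_id:
                return m
        raise LookupError(f"pinned model not available: {pinned_id}")
    for m in models:
        if m["type"] == prediction_type and m["status"] == "current":
            return m
    raise LookupError(f"no current model for type: {prediction_type}")

# Trimmed entries in the shape of the models response
models = [
    {"id": "varl-trajectory-v6.2", "type": "disease_trajectory", "status": "current"},
    {"id": "varl-safety-v4.3", "type": "adverse_events", "status": "current"},
]
model = resolve_model(models, "adverse_events")
```

Failing loudly when a pinned model disappears is deliberate: silently falling back to the current model would break the reproducibility guarantee that pinning exists to provide.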
{
"models": [
{
"id": "varl-trajectory-v6.2",
"type": "disease_trajectory",
"conditions_supported": 847,
"training_cohort": 2400000,
"auroc": 0.94,
"last_retrained": "2026-02-01",
"status": "current"
},
{
"id": "varl-efficacy-v5.1",
"type": "treatment_efficacy",
"compounds_supported": 3200,
"training_cohort": 1800000,
"auroc": 0.91,
"last_retrained": "2026-01-15",
"status": "current"
},
{
"id": "varl-safety-v4.3",
"type": "adverse_events",
"compounds_supported": 4100,
"training_cohort": 3100000,
"auroc": 0.89,
"last_retrained": "2026-02-01",
"status": "current"
}
]
}

Datasets
The Datasets API manages biological data throughout its lifecycle — upload, validation, versioning, querying, and secure sharing. Datasets are the raw material that feeds digital twins, trains prediction models, and validates simulation outputs. Every dataset is immutable once finalized, versioned for reproducibility, and encrypted at rest and in transit.
VARL maintains a reference library of over 2.4 million annotated molecular interactions, curated from peer-reviewed literature, clinical trial registries, and validated experimental data. This library is accessible to all API users and serves as the foundation for cross-referencing proprietary uploads against the global body of biological knowledge.
Data sovereignty is absolute. Your datasets are isolated to your organization's tenant. They are never used to train shared models, never aggregated with other organizations' data, and never accessible to anyone outside your permission structure. You retain the right to delete your data at any time with immediate, irrevocable effect.
Upload a Dataset
Upload biological data for use across the platform. The upload process validates data integrity, infers schema where possible, runs quality checks, and indexes the dataset for fast querying. Large datasets support multipart uploads with resumable transfer for reliability over unstable connections.
{
  "name": "Patient-0042 Proteomic Panel",
  "type": "proteomic",
  "format": "csv",
  "organism": "homo_sapiens",
  "sample_count": 1,
  "description": "Mass spectrometry proteomic panel, 4,200 proteins quantified",
  "tags": ["oncology", "nsclc", "baseline"]
}
Request Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| name | string | Yes | Human-readable name. Maximum 256 characters. |
| type | string | Yes | Data type: genomic, transcriptomic, proteomic, metabolomic, epigenomic, clinical, imaging |
| format | string | Yes | File format: csv, tsv, parquet, fastq, vcf, bam, h5ad, json |
| organism | string | Yes | Source organism. Same values as Digital Twins organism parameter. |
| tags | string[] | No | Freeform labels for organization and filtering. Maximum 20 tags per dataset. |
Response
{
"id": "ds_pd_3m7k1x",
"object": "dataset",
"name": "Patient-0042 Proteomic Panel",
"type": "proteomic",
"format": "csv",
"status": "validating",
"organism": "homo_sapiens",
"sample_count": 1,
"upload": {
"method": "direct",
"upload_url": "https://upload.varl.net/ds_pd_3m7k1x?token=...",
"expires_at": "2026-02-14T11:15:33Z",
"max_size_mb": 5000
},
"version": 1,
"created_at": "2026-02-14T10:15:33Z",
"request_id": "req_v4rl_8n3k7m"
}

After receiving the response, upload the actual data file to the provided upload_url using a PUT request with the file as the body. The URL is pre-signed and expires after 1 hour.
curl -X PUT \
  -H "Content-Type: text/csv" \
  --data-binary @patient_0042_proteomics.csv \
  "https://upload.varl.net/ds_pd_3m7k1x?token=..."
Query a Dataset
Run structured queries against uploaded datasets without downloading the full file. The query engine supports filtering, aggregation, statistical summaries, and cross-dataset joins. Results are streamed for large output sets.
{
  "select": ["protein_id", "abundance", "p_value"],
  "filter": {
    "abundance": { "gt": 2.0 },
    "p_value": { "lt": 0.01 }
  },
  "sort": { "field": "abundance", "order": "desc" },
  "limit": 100
}
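The query semantics are easy to reason about: every filter condition must hold for a row to be kept, then results are sorted, projected, and limited. A local sketch of the same semantics, useful for validating expectations against a sample before issuing a query (only the gt and lt operators from the example are modeled; this is not the server implementation):

```python
OPS = {"gt": lambda value, bound: value > bound,
       "lt": lambda value, bound: value < bound}

def apply_query(rows, select, filter_, sort, limit):
    """Evaluate a dataset query locally: filter, then sort, project, and limit."""
    keep = [r for r in rows
            if all(OPS[op](r[field], bound)
                   for field, conds in filter_.items()
                   for op, bound in conds.items())]
    keep.sort(key=lambda r: r[sort["field"]], reverse=sort["order"] == "desc")
    return [{k: r[k] for k in select} for r in keep[:limit]]

rows = [
    {"protein_id": "P01", "abundance": 3.2, "p_value": 0.004},
    {"protein_id": "P02", "abundance": 1.1, "p_value": 0.002},
    {"protein_id": "P03", "abundance": 5.6, "p_value": 0.030},
]
result = apply_query(rows, ["protein_id", "abundance", "p_value"],
                     {"abundance": {"gt": 2.0}, "p_value": {"lt": 0.01}},
                     {"field": "abundance", "order": "desc"}, 100)
```

Only P01 survives here: P02 fails the abundance bound and P03 fails the p_value bound.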
Reference Library
Access VARL's curated reference library of molecular interactions, pathway annotations, and disease-gene associations. The library is continuously updated from peer-reviewed sources and validated by our scientific team.
{
"gene": "TP53",
"total_interactions": 1247,
"results": [
{
"partner": "MDM2",
"type": "protein_protein",
"interaction": "ubiquitin_ligase_substrate",
"confidence": 0.99,
"sources": 342,
"functional_impact": "Negative regulation of TP53 stability"
},
{
"partner": "ATM",
"type": "protein_protein",
"interaction": "kinase_substrate",
"confidence": 0.98,
"sources": 287,
"functional_impact": "Phosphorylation at S15 upon DNA damage"
},
{
"partner": "CDKN1A",
"type": "transcription_factor_target",
"interaction": "transcriptional_activation",
"confidence": 0.97,
"sources": 412,
"functional_impact": "Cell cycle arrest via p21 induction"
}
]
}

Versioning
Datasets are versioned automatically. Each time you upload new data to an existing dataset, a new version is created. Previous versions are retained and remain accessible. Simulations and predictions always reference a specific version, ensuring full reproducibility even as datasets evolve.
{
"dataset_id": "ds_pd_3m7k1x",
"versions": [
{ "version": 3, "created_at": "2026-02-10", "rows": 4200, "size_mb": 12.4, "status": "active" },
{ "version": 2, "created_at": "2026-01-15", "rows": 3800, "size_mb": 11.1, "status": "active" },
{ "version": 1, "created_at": "2025-12-01", "rows": 3200, "size_mb": 9.7, "status": "active" }
]
}

Delete a Dataset
Permanently delete a dataset and all its versions. This action is irreversible. Digital twins and simulations that reference this dataset will retain their computed states but will not be able to recalibrate or re-run. A 72-hour grace period allows you to cancel the deletion before data is physically purged.
Supported Formats
| Format | Extension | Max Size | Use Case |
|---|---|---|---|
| CSV / TSV | .csv, .tsv | 5 GB | Tabular data, expression matrices, clinical records |
| Parquet | .parquet | 50 GB | Large-scale columnar data, population cohorts |
| FASTQ | .fastq, .fq.gz | 100 GB | Raw sequencing reads |
| VCF | .vcf, .vcf.gz | 10 GB | Variant call files, genomic mutations |
| BAM | .bam | 200 GB | Aligned sequencing reads |
| H5AD | .h5ad | 50 GB | Single-cell data (AnnData format) |
| JSON | .json, .jsonl | 5 GB | Structured biological records, pathway definitions |
Webhooks
Webhooks deliver real-time notifications to your application when events occur in the VARL platform. Instead of polling endpoints for status changes, register a webhook URL and receive HTTP POST requests the moment a twin completes initialization, a simulation finishes, a biomarker alert triggers, or a dataset finishes validation.
All webhook payloads are signed with HMAC-SHA256 using your webhook secret, ensuring that your application can verify the authenticity of every incoming request. Failed deliveries are retried with exponential backoff for up to 72 hours. Every delivery attempt is logged and visible in your dashboard.
Register a Webhook
Create a webhook endpoint that listens for specific event types. You can register multiple webhooks, each filtering for different events or delivering to different URLs. VARL sends a verification request to the URL upon creation — your server must respond with a 200 status code to confirm the endpoint is active.
{
  "url": "https://api.yourapp.com/varl/webhooks",
  "events": [
    "twin.ready",
    "twin.degraded",
    "simulation.completed",
    "simulation.failed",
    "biomarker.alert",
    "dataset.validated"
  ],
  "description": "Production event handler"
}
{
"id": "wh_4k7m2n",
"object": "webhook",
"url": "https://api.yourapp.com/varl/webhooks",
"events": [
"twin.ready", "twin.degraded",
"simulation.completed", "simulation.failed",
"biomarker.alert", "dataset.validated"
],
"secret": "whsec_v4rl_8f3k2n9m7j3p1x...",
"status": "active",
"created_at": "2026-02-14T10:30:00Z"
}

Store the secret securely — it is only returned once at creation time. Use it to verify webhook signatures on incoming requests.
Event Types
Subscribe to any combination of events. Each event type delivers a payload containing the full object state at the time the event occurred.
| Event | Description |
|---|---|
| twin.ready | Digital twin initialization and calibration complete |
| twin.degraded | Twin drift score exceeds acceptable threshold |
| twin.recalibrated | Twin recalibration finished after new data integration |
| simulation.completed | Simulation finished successfully, results available |
| simulation.failed | Simulation encountered an error |
| batch.completed | All simulations in a batch have finished |
| biomarker.alert | Tracked biomarker crossed a defined threshold |
| dataset.validated | Uploaded dataset passed validation and is ready for use |
| dataset.failed | Dataset validation failed due to format or integrity errors |
Payload Format
Every webhook delivery is an HTTP POST request with a JSON body. The payload includes the event type, a timestamp, and the full object that triggered the event.
{
"id": "evt_9f2k4m7n",
"object": "event",
"type": "simulation.completed",
"created_at": "2026-02-14T10:18:27Z",
"data": {
"id": "sim_4k7m2n9f",
"object": "simulation",
"twin_id": "twn_8f3k2n9m",
"type": "drug_response",
"status": "completed",
"summary": {
"efficacy_score": 0.847,
"confidence": 0.93
},
"compute_time_ms": 174200,
"completed_at": "2026-02-14T10:18:27Z"
}
}

Signature Verification
Every webhook request includes a VARL-Signature header containing an HMAC-SHA256 hash of the request body, signed with your webhook secret. Always verify this signature before processing the payload.
VARL-Signature: sha256=a1b2c3d4e5f6...
import hmac, hashlib

def verify_signature(payload, signature, secret):
    expected = hmac.new(
        secret.encode(),
        payload,
        hashlib.sha256
    ).hexdigest()
    return hmac.compare_digest(f"sha256={expected}", signature)
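End to end, verification means hashing the raw request body with your secret and comparing against the header in constant time. A self-contained sketch; the secret and payload below are made up for illustration:

```python
import hmac, hashlib

def verify_signature(payload: bytes, signature: str, secret: str) -> bool:
    """Return True if the VARL-Signature header matches the HMAC of the raw body."""
    expected = hmac.new(secret.encode(), payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(f"sha256={expected}", signature)

secret = "whsec_example_not_real"
body = b'{"type": "simulation.completed"}'
# Simulate the header VARL would send for this body and secret
header = "sha256=" + hmac.new(secret.encode(), body, hashlib.sha256).hexdigest()
ok = verify_signature(body, header, secret)         # genuine payload verifies
tampered = verify_signature(b"{}", header, secret)  # any change to the body fails
```

Always hash the raw request bytes, not a re-serialized JSON object: re-serialization can reorder keys or change whitespace and produce a different digest.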
Retry Policy
If your endpoint returns a non-2xx status code or fails to respond within 30 seconds, VARL retries the delivery with exponential backoff. The retry schedule is: 1 minute, 5 minutes, 30 minutes, 2 hours, 12 hours, 24 hours, 48 hours, 72 hours. After all retries are exhausted, the event is marked as failed and visible in your dashboard. You can manually replay failed events from the dashboard or via the API.
- Delivered: the endpoint returned 2xx. Delivery is confirmed and logged.
- Retrying: delivery failed. The next retry is scheduled according to the backoff policy.
- Failed: all retry attempts are exhausted. The event can be replayed manually.
- Disabled: the webhook was auto-disabled after 50 consecutive failures. Re-enable it from the dashboard.
Manage Webhooks
- List all registered webhooks.
- Retrieve a specific webhook and its delivery history.
- Update the URL, events, or description. The secret cannot be changed; rotate it by deleting and recreating the webhook.
- Remove a webhook. Pending deliveries are cancelled immediately.
- Send a test event to verify your endpoint is working correctly.
- Replay a failed event delivery.
Rate Limits
Rate limits protect the stability of the VARL platform and ensure fair resource allocation across all users. Limits are applied per API key and vary by endpoint category. When a rate limit is exceeded, the API returns a 429 Too Many Requests response with headers indicating when you can retry.
Default Limits
Limits are measured in requests per minute (RPM) and requests per day (RPD). Compute-intensive endpoints (simulations, predictions) have separate limits measured in compute units.
| Endpoint Category | Limit |
|---|---|
| Read operations (GET) | 600 RPM |
| Write operations (POST/PATCH/DELETE) | 200 RPM |
| Simulations | 50 RPM |
| Predictions | 100 RPM |
| Dataset uploads | 100 RPD |
| Batch operations | 20 RPM |
Rate Limit Headers
Every API response includes headers that communicate your current rate limit status. Use these headers to implement client-side throttling and avoid hitting limits.
| Header | Description |
|---|---|
| X-RateLimit-Limit | Maximum requests allowed in the current window |
| X-RateLimit-Remaining | Requests remaining in the current window |
| X-RateLimit-Reset | Unix timestamp when the current window resets |
| Retry-After | Seconds to wait before retrying (only present on 429 responses) |
X-RateLimit-Limit: 600
X-RateLimit-Remaining: 547
X-RateLimit-Reset: 1708000000
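These headers enable proactive throttling: once Remaining hits zero, sleep until Reset instead of provoking a 429. A minimal sketch; the helper name is ours, and the current time is passed in so the logic stays deterministic and testable:

```python
def seconds_until_ready(headers, now):
    """Return how long to wait before the next request, given rate-limit headers."""
    remaining = int(headers.get("X-RateLimit-Remaining", 1))
    reset = int(headers.get("X-RateLimit-Reset", 0))
    if remaining > 0:
        return 0.0                       # budget left in this window, go ahead
    return max(0.0, reset - now)         # otherwise wait for the window to reset

wait = seconds_until_ready(
    {"X-RateLimit-Limit": "600", "X-RateLimit-Remaining": "0",
     "X-RateLimit-Reset": "1708000000"},
    now=1707999940,
)
```

Call this before each request and sleep for the returned duration; it complements, rather than replaces, the reactive 429 handling shown in the next section.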
Handling 429 Responses
When you exceed a rate limit, implement exponential backoff with jitter. The official SDKs handle this automatically. If you are making direct HTTP requests, follow this pattern:
import time, random

def request_with_retry(fn, max_retries=5):
    for attempt in range(max_retries):
        response = fn()
        if response.status_code != 429:
            return response
        # Honor Retry-After when the server provides it; otherwise use jittered backoff
        retry_after = response.headers.get("Retry-After")
        if retry_after is not None:
            wait = float(retry_after)
        else:
            wait = min(2 ** attempt + random.uniform(0, 1), 60)
        time.sleep(wait)
    raise Exception("Rate limit exceeded after max retries")
Burst Allowance
The API includes burst allowance — a short-term capacity buffer that allows brief spikes above your sustained rate limit. Bursts are permitted for up to 10 seconds and allow up to 3x your per-minute limit. This accommodates legitimate use patterns like page loads that trigger multiple parallel requests.
Requesting Higher Limits
If your workload consistently approaches rate limits, contact us at api@varl.net or submit a limit increase request through the dashboard. Include your current usage patterns, expected peak load, and the specific endpoints that need higher limits.
Errors
The VARL API uses conventional HTTP status codes to indicate success or failure. Every error response includes a structured JSON body with a machine-readable error code, a human-readable message, and where applicable, a pointer to the specific field or parameter that caused the error.
Error Response Format
{
"error": {
"code": "invalid_parameter",
"message": "The 'resolution' field must be one of: molecular, cellular, tissue, organ.",
"param": "resolution",
"type": "validation_error"
},
"request_id": "req_v4rl_7k2m9n"
}

| Field | Description |
|---|---|
| error.code | Machine-readable error identifier for programmatic handling |
| error.message | Human-readable explanation of what went wrong |
| error.param | The specific parameter that caused the error (if applicable) |
| error.type | Error category: validation_error, authentication_error, authorization_error, not_found, rate_limit, server_error |
| request_id | Unique identifier for this request — include in support tickets for debugging |
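Because error.code is stable while error.message may change, code is the right key for dispatch. A sketch of one way to route errors; the action names and classification are ours, chosen to match the best practices listed below:

```python
def classify_error(body):
    """Map a structured error body to a client action, keyed on error.code."""
    code = body["error"]["code"]
    if code in {"invalid_parameter", "missing_parameter"}:
        return "fix_request"        # inspect error.param to find the offending field
    if code in {"authentication_required", "invalid_api_key", "insufficient_scope"}:
        return "fix_credentials"
    if code == "quota_exceeded":
        return "retry_later"
    return "report"                 # include request_id in the support ticket

error_body = {
    "error": {
        "code": "invalid_parameter",
        "message": "The 'resolution' field must be one of: molecular, cellular, tissue, organ.",
        "param": "resolution",
        "type": "validation_error",
    },
    "request_id": "req_v4rl_7k2m9n",
}
action = classify_error(error_body)
```

A dispatcher like this keeps retry logic, credential refresh, and bug reporting in separate paths instead of string-matching on messages.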
HTTP Status Codes
200 OK
Request succeeded. Response body contains the requested resource.
201 Created
Resource successfully created. Response body contains the new resource.
400 Bad Request
The request body is malformed, missing required fields, or contains invalid parameter values. Check the error.param field for the specific issue.
401 Unauthorized
No API key provided, or the key is invalid, expired, or revoked. See the Authentication section for details.
403 Forbidden
The API key is valid but lacks the required permission scope, or the request originates from a blocked IP address.
404 Not Found
The requested resource does not exist. Verify the ID is correct and the resource has not been deleted.
409 Conflict
The request conflicts with the current state of the resource. Common causes: attempting to run a simulation on a twin that is still initializing, or uploading to a dataset that is currently being validated.
422 Unprocessable Entity
The request is syntactically valid but semantically incorrect. Example: requesting molecular resolution for an organism that only supports cellular resolution, or specifying contradictory simulation parameters.
429 Too Many Requests
Rate limit exceeded. Check the Retry-After header and implement exponential backoff. See the Rate Limits section.
500 Internal Server Error
An unexpected error occurred on our servers. These are automatically reported to our engineering team. If the error persists, contact support with the request_id.
503 Service Unavailable
The platform is temporarily unavailable due to maintenance or capacity constraints. Retry after the duration specified in the Retry-After header. Check status.varl.net for incident updates.
Common Error Codes
These are the most frequently encountered error codes across all endpoints. Each code maps to a specific, actionable issue.
| Code | HTTP | Description |
|---|---|---|
| invalid_parameter | 400 | A parameter value is not valid for its expected type or range |
| missing_parameter | 400 | A required parameter was not included in the request |
| authentication_required | 401 | No Authorization header provided |
| invalid_api_key | 401 | API key does not match any active key |
| insufficient_scope | 403 | API key lacks the required permission scope |
| resource_not_found | 404 | The specified twin, simulation, dataset, or webhook does not exist |
| twin_not_ready | 409 | Operation requires the twin to be in 'ready' state |
| simulation_already_complete | 409 | Cannot cancel a simulation that has already finished |
| incompatible_resolution | 422 | Requested resolution not available for this organism/system |
| quota_exceeded | 429 | Monthly quota for this resource type has been exhausted |
| dataset_validation_failed | 422 | Uploaded data did not pass schema or integrity checks |
Error Handling Best Practices
- Always check the HTTP status code before parsing the response body. 2xx means success; anything else is an error.
- Use the error.code field for programmatic error handling, not the message, which may change.
- Retry 429 and 503 errors with exponential backoff. Do not retry 400, 401, 403, 404, or 422 errors; they require code changes.
- Log the request_id from every error response. Include it when contacting support for fastest resolution.
- Implement circuit breakers for 500 errors. If you receive 3 or more server errors within 60 seconds, back off for 5 minutes before retrying.
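The circuit-breaker recommendation for 500 errors can be captured in a few lines. A sketch; the threshold, window, and cooldown match the guidance above, and the clock is injected so the behavior is deterministic:

```python
class CircuitBreaker:
    """Open for 5 minutes after 3 server errors within a 60-second window."""
    def __init__(self, threshold=3, window=60.0, cooldown=300.0):
        self.threshold, self.window, self.cooldown = threshold, window, cooldown
        self.errors = []        # timestamps of recent server errors
        self.open_until = 0.0   # time before which requests are blocked

    def record_server_error(self, now):
        # Keep only errors inside the sliding window, then add the new one
        self.errors = [t for t in self.errors if now - t < self.window]
        self.errors.append(now)
        if len(self.errors) >= self.threshold:
            self.open_until = now + self.cooldown

    def allow_request(self, now):
        return now >= self.open_until

cb = CircuitBreaker()
for t in (0.0, 10.0, 20.0):            # three 500s within 60 seconds
    cb.record_server_error(t)
blocked = not cb.allow_request(21.0)   # breaker is now open
recovered = cb.allow_request(321.0)    # cooldown elapsed, requests flow again
```

In production you would call record_server_error on every 5xx response and check allow_request before each outbound call, using time.time() as the clock.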
Changelog
A chronological record of changes to the VARL API. Breaking changes are announced at least 90 days in advance. Previous API versions remain available for 12 months after a new version is released.
- Predictions API — Disease trajectory, treatment efficacy, and adverse event risk prediction endpoints now generally available. Includes explainability metadata and model version pinning.
- Batch Simulations — Submit up to 10,000 simulation variations in a single request. All variations execute in parallel with independent result retrieval.
- Digital Twin calibration — Calibration time reduced by 60% through parallel graph construction. Average initialization for molecular-resolution twins: 4.2 minutes (down from 11 minutes).
- Biomarker reference database — Expanded to 14,200 validated markers across 2,800 conditions. Added stratified reference ranges by age, sex, and ethnicity.
- Webhook payload format — Event payloads now include the full object state instead of just the ID. This is a backward-compatible change; existing integrations will continue to work.
- Biomarker tracking — Track biomarker evolution over time with configurable alert thresholds. Supports longitudinal analysis across twin snapshots and simulation outputs.
- Dataset query engine — Run structured queries against uploaded datasets without downloading the full file. Supports filtering, aggregation, and cross-dataset joins.
- Snapshot comparison — Diff two twin snapshots to identify changes in node expression, edge weights, and pathway activity with statistical significance scoring.
- Simulation performance — Drug response simulations now complete 3x faster through optimized pathway traversal algorithms. Average completion time: 45 seconds for cellular resolution.
- Twin state query depth — Fixed an issue where include_neighbors with depth > 3 could return incomplete subgraphs for twins with high node density.
- Webhooks API — Real-time event notifications for twin lifecycle, simulation completion, biomarker alerts, and dataset validation. HMAC-SHA256 signed payloads with exponential backoff retry.
- Environmental stress simulations — New simulation type for modeling biological responses to temperature, pH, oxidative stress, nutrient deprivation, and toxin exposure.
- R SDK — Official R client library with full API coverage, tidyverse-compatible data structures, and Bioconductor integration.
- Dataset versioning — Datasets now support automatic versioning with full history retention. Previous versions are accessible and can be referenced by simulations and twins.
- Rate limit structure — Limits are now applied per endpoint category instead of globally. This provides more predictable behavior for applications that use multiple API features.
- Biomarkers API — Detect, quantify, and compare molecular biomarkers from multi-omics data. Includes reference database with 12,000+ validated markers.
- Stochastic noise modeling — Digital twins now support optional biological noise layers that incorporate molecular-level variability into simulations for more realistic phenotypic outputs.
- TypeScript SDK — Complete rewrite with full type safety, tree-shaking support, and automatic request/response validation. Breaking change from v1.x — see migration guide.
- Cursor pagination — Resolved edge case where cursor tokens could become invalid when resources were deleted between page requests.
- VARL API v1 — Initial public release. Core endpoints for Digital Twins (create, retrieve, update, delete, query, snapshots), Simulations (drug response, pathway disruption, genetic mutation), and Datasets (upload, list, delete).
- Python SDK — Official Python client library with async support, automatic retries, and streaming for large dataset operations.
- Authentication system — API key management with permission scopes, IP allowlisting, key rotation, and organizational isolation.