Gray thinks about
your data.

Named after the space between knowing and not knowing — because that's where all the real reasoning happens. A symbolic reasoning engine that observes, questions, imagines, and puts its own beliefs on trial. No weights. No parameters. No gradient descent. Just logic, evidence, and reality as the final judge.

$ tuplecore run --dataset trial_z1031.csv
[epoch 1/12] ingesting 351 patients, 3 arms...
[epoch 6/12] 847 causal claims discovered
[epoch 12/12] consolidating...

1,464 consensus rules extracted
47 nodes · 99 edges · 0 external deps

$ tuplecore query --cause "exemestane"
exemestane CAUSES good_outcome
confidence: 0.91 | support: 284 | blockers: 3
precondition: her2_negative
// approach

Built different. On purpose.

While others race to wrap LLMs around everything, we took a fundamentally different path. One that prioritizes transparency, reproducibility, and trust.

Pure Python

Our causal discovery engine is 100% Python with zero external libraries. No neural networks, no API calls, no opaque model weights. Just clean, deterministic logic you can read and verify.

Fully Auditable

Every causal claim traces back to specific observations with confidence scores, preconditions, and exclusion criteria. No hallucinations. Just evidence-backed findings you can verify against your source data.

Causal, Not Correlational

We don't find patterns. We discover cause-and-effect relationships. Our engine identifies what drives outcomes, what blocks them, and the specific conditions under which they hold, all structured as queryable knowledge.
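Conceptually, a claim like the one in the sample query output above can be held as a small structured record rather than an opaque weight. The sketch below is purely illustrative: the `CausalClaim` fields, the placeholder blocker names, and the `query_by_cause` helper are assumptions for demonstration, not the engine's actual internals.

```python
from dataclasses import dataclass, field

@dataclass
class CausalClaim:
    """Illustrative shape of a discovered claim (field names are hypothetical)."""
    cause: str
    effect: str
    confidence: float                                   # 0..1, scored over the full dataset
    support: int                                        # observations backing the claim
    blockers: list = field(default_factory=list)        # conditions that suppress the effect
    preconditions: list = field(default_factory=list)   # conditions required for the effect

claim = CausalClaim(
    cause="exemestane",
    effect="good_outcome",
    confidence=0.91,
    support=284,
    blockers=["blocker_1", "blocker_2", "blocker_3"],   # placeholder names
    preconditions=["her2_negative"],
)

def query_by_cause(claims, cause):
    # A query is just a filter over structured records, not a pattern match.
    return [c for c in claims if c.cause == cause]
```

Because every claim is a plain record, "auditable" means exactly that: you can print it, diff it, and trace its `support` count back to rows in the source file.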

Self-Aware

The engine knows what it doesn't know. It detects its own blind spots, generates compound features to fill gaps, and signals when it needs data it's never seen.

Self-Calibrating

When its predictions fail, it raises its own standards. Claims that can't defend themselves against stricter thresholds are put on probation — not deleted, put on trial.

Imaginative

On high-surprise events, the engine imagines alternative realities — "what if this variable were different?" — and tracks where its imagination fails. Failed imagination is the most valuable learning signal.
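A minimal sketch of that "what if?" step, under stated assumptions: surprise is a 0..1 score, imagination only fires above a threshold, and alternative worlds are built by flipping one binary variable at a time. None of these specifics come from the engine itself.

```python
def imagine_counterfactuals(observation, surprise, threshold=0.8):
    """Illustrative counterfactual generator: for a high-surprise event,
    build one alternative world per boolean variable by flipping it.
    The 0.8 threshold and variable names are assumptions."""
    if surprise < threshold:
        return []                      # unsurprising events trigger no imagination
    worlds = []
    for var, value in observation.items():
        if isinstance(value, bool):    # only flip binary context variables here
            worlds.append((var, {**observation, var: not value}))
    return worlds
```

Each returned pair names the flipped variable alongside the imagined world, so a failed imagination (a counterfactual whose predicted outcome reality contradicts) can be traced to the exact variable it misjudged.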

// process

From raw data to causal knowledge

The engine predicts, measures surprise, reinforces or weakens claims, and induces preconditions. All through symbolic reasoning. No ML, no weights, no GPUs. Runs on a Raspberry Pi. Auditable line by line.
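The predict–surprise–update loop described above can be sketched as one update step. This is a hedged illustration only: the error-based surprise measure, the 0.5 cutoff, and the learning-rate-style increment are assumptions chosen for clarity, not the engine's actual arithmetic.

```python
def update_claim(claim, predicted, observed, lr=0.1):
    """One illustrative update step: measure surprise as prediction error,
    then reinforce or weaken the claim's confidence accordingly.
    All constants here are assumptions."""
    surprise = abs(predicted - observed)   # 0 = fully expected, 1 = fully surprising
    if surprise < 0.5:
        # Prediction held: reinforce, weighted by how unsurprising it was.
        claim["confidence"] = min(1.0, claim["confidence"] + lr * (1 - surprise))
        claim["support"] += 1
    else:
        # Prediction failed: weaken, weighted by how surprising it was.
        claim["confidence"] = max(0.0, claim["confidence"] - lr * surprise)
    return surprise
```

Note that the whole step is plain arithmetic on a plain record, which is why the loop can run on a Raspberry Pi and be audited line by line.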

01 INGEST

Point it at your data

Clinical trials, operational data, event logs. No preprocessing required.

02 OBSERVE

Build a cognitive model

Every record becomes an observation. The engine models actions, effects, and context over multiple epochs.

03 DISCOVER

Extract causal rules

Claims emerge with preconditions and blockers. Tested, consolidated, and scored for confidence across the full dataset.
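Consolidation across epochs might look like the sketch below: claims about the same cause–effect pair merge into one consensus rule. The merge policy shown (sum support, average confidence) is an assumption for illustration, not the engine's documented behavior.

```python
from collections import defaultdict

def consolidate(epoch_claims):
    """Illustrative consolidation: claims sharing a (cause, effect) pair
    collapse into one consensus rule. Field names and the merge policy
    are assumptions."""
    buckets = defaultdict(list)
    for c in epoch_claims:
        buckets[(c["cause"], c["effect"])].append(c)
    rules = []
    for (cause, effect), group in buckets.items():
        rules.append({
            "cause": cause,
            "effect": effect,
            "support": sum(c["support"] for c in group),          # evidence accumulates
            "confidence": sum(c["confidence"] for c in group) / len(group),
        })
    return rules
```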

04 QUERY

Ask questions

What works, for whom, and why. Get structured answers with full evidence trails.

05 REVISE

Challenge its own beliefs

The engine tests its own beliefs against its own imagination. Claims that reality disagrees with are demoted. Structural truths survive. Ephemeral correlations don't.

// output

Oncology causal networks

Causal maps from lung and breast cancer clinical trials. Thousands of causal claims discovered in minutes with zero medical priors.

Causal network generated from oncology clinical trial data accessed via Project Data Sphere. Project Data Sphere and the data provider(s) have not contributed to or approved these research results and are not responsible for them.

// domains

Domain-agnostic discovery

The same engine, the same process. Point it at any structured dataset and extract causal knowledge.

Clinical Trials

Discover which treatments work for which patient subgroups, identify hidden risk factors, and run counterfactual analyses across trials.

Financial Markets

Extract signal selection rules from market data. Identify which conditions cause specific outcomes and which are noise.

Sports Analytics

Find causal drivers of match outcomes beyond correlation. Identify which tactical and contextual factors actually change results.

Operations

Understand what causes defects, delays, or failures in your processes and the specific conditions under which they occur.

Transparency is not optional
Every claim links to source data
Confidence scores on all findings
No external API dependencies
Reproducible results, every time
100% open-box methodology
// contact

Let's make sense of your data

Have a dataset that needs deeper understanding? We'd love to hear from you.