Named after the space between knowing and not knowing — because that's where all the real reasoning happens. A symbolic reasoning engine that observes, questions, imagines, and puts its own beliefs on trial. No weights. No parameters. No gradient descent. Just logic, evidence, and reality as the final judge.
While others race to wrap LLMs around everything, we took a fundamentally different path. One that prioritizes transparency, reproducibility, and trust.
Our causal discovery engine is 100% Python with zero external libraries. No neural networks, no API calls, no opaque model weights. Just clean, deterministic logic you can read and verify.
Every causal claim traces back to specific observations with confidence scores, preconditions, and exclusion criteria. No hallucinations. Just evidence-backed findings you can verify against your source data.
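To make that concrete, here is a minimal sketch of what an evidence-backed claim record could look like. The field and class names are illustrative, not the engine's actual schema:

```python
from dataclasses import dataclass, field

# Hypothetical sketch of an evidence-backed claim; the engine's
# real schema may differ.
@dataclass
class Claim:
    action: str                 # what was done
    effect: str                 # what followed
    confidence: float           # score updated as evidence accrues
    preconditions: list[str] = field(default_factory=list)  # must hold
    exclusions: list[str] = field(default_factory=list)     # void the claim
    evidence: list[int] = field(default_factory=list)       # observation ids

claim = Claim(
    action="drug_A",
    effect="tumor_shrinkage",
    confidence=0.82,
    preconditions=["stage <= 2"],
    exclusions=["prior_chemo"],
    evidence=[17, 42, 103],
)
```

Every claim points back to the observation ids that support it, so any finding can be checked against the source records.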
We don't just find patterns. We discover cause-and-effect relationships. Our engine identifies what drives outcomes, what blocks them, and the specific conditions under which each holds, all structured as queryable knowledge.
The engine knows what it doesn't know. It detects its own blind spots, generates compound features to fill gaps, and signals when it needs data it's never seen.
When its predictions fail, it raises its own standards. Claims that cannot defend themselves against the stricter thresholds are put on probation, not deleted: they stand trial against the evidence.
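A minimal sketch of that idea, with illustrative names and thresholds: after a failed prediction the acceptance bar rises, and claims that fall below it are flagged rather than dropped.

```python
def reappraise(claims, threshold):
    """Flag claims below the stricter threshold as 'probation'
    instead of deleting them. (Illustrative, not the engine's code.)"""
    for c in claims:
        c["status"] = "active" if c["confidence"] >= threshold else "probation"
    return claims

claims = [{"id": 1, "confidence": 0.90},
          {"id": 2, "confidence": 0.55}]
reappraise(claims, threshold=0.70)
# claim 1 stays active; claim 2 goes on probation, not deleted
```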
On high-surprise events, the engine imagines alternative realities — "what if this variable were different?" — and tracks where its imagination fails. Failed imagination is the most valuable learning signal.
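One way to picture a counterfactual probe, in a purely illustrative form: copy the surprising observation, flip one variable, and ask the model to re-predict. The helper and the toy model below are assumptions for the example, not the engine's code.

```python
def counterfactual(observation, variable, value, predict):
    """Re-predict after flipping one variable in a copy of the
    observation. (Illustrative sketch.)"""
    alt = dict(observation, **{variable: value})
    return predict(alt)

# Toy model: response is expected only in patients without prior chemo.
predict = lambda obs: "response" if not obs["prior_chemo"] else "no_response"

obs = {"prior_chemo": True, "drug": "A"}
imagined = counterfactual(obs, "prior_chemo", False, predict)
# Comparing the imagined outcome against what reality later shows is
# where the "failed imagination" signal comes from.
```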
The engine predicts, measures surprise, reinforces or weakens claims, and induces preconditions. All through symbolic reasoning. No ML, no weights, no GPUs. Runs on a Raspberry Pi. Auditable line-by-line.
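The core cycle can be sketched in a few lines of plain Python. Function names, the update step, and the bookkeeping are hypothetical; this shows the shape of the loop, not the engine's actual update rule:

```python
def learn(claim, predicted, observed, step=0.1):
    """One pass of the predict -> measure surprise -> adjust cycle.
    (Illustrative only.)"""
    surprise = predicted != observed
    if surprise:
        claim["confidence"] = max(0.0, claim["confidence"] - step)
    else:
        claim["confidence"] = min(1.0, claim["confidence"] + step)
    return surprise

claim = {"effect": "tumor_shrinkage", "confidence": 0.5}
learn(claim, predicted="tumor_shrinkage", observed="no_change")        # surprise: weaken
learn(claim, predicted="tumor_shrinkage", observed="tumor_shrinkage")  # match: reinforce
```

Because the state is just dictionaries and floats, every step of the loop can be logged and replayed, which is what makes the process auditable line-by-line.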
Clinical trials, operational data, event logs. No preprocessing required.
Every record becomes an observation. The engine models actions, effects, and context over multiple epochs.
Claims emerge with preconditions and blockers. Tested, consolidated, and scored for confidence across the full dataset.
What works, for whom, and why. Get structured answers with full evidence trails.
The engine tests its own beliefs against its own imagination. Claims that reality disagrees with are demoted. Structural truths survive. Ephemeral correlations don't.
Causal maps from lung and breast cancer clinical trials. Thousands of causal claims discovered in minutes with zero medical priors. Click to enlarge.
Causal network generated from oncology clinical trial data accessed via Project Data Sphere. Project Data Sphere and the data provider(s) have not contributed to or approved, and are not responsible for, these research results.
The same engine, the same process. Point it at any structured dataset and extract causal knowledge.
Discover which treatments work for which patient subgroups, identify hidden risk factors, and run counterfactual analyses across trials.
Extract signal selection rules from market data. Identify which conditions cause specific outcomes and which are noise.
Find causal drivers of match outcomes beyond correlation. Identify which tactical and contextual factors actually change results.
Understand what causes defects, delays, or failures in your processes and the specific conditions under which they occur.
Have a dataset that needs deeper understanding? We'd love to hear from you.