Neural data
infrastructure.
Before you can train a neural model, you need the data. That's what we build.
| timestamp | TP9 (μV) | AF7 (μV) | AF8 (μV) | TP10 (μV) |
|---|---|---|---|---|
| 00:00.000 | 820.5 | 830.1 | 2847.3 | -199.4 |
| 00:00.004 | NaN | 831.4 | 814.7 | 825.2 |
| 00:00.008 | 819.8 | 828.9 | 815.2 | NaN |
| 00:00.012 | 2941.2 | 830.5 | -189.3 | 825.9 |
| 00:00.016 | 820.1 | NaN | 814.9 | 824.6 |
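The dropped samples (NaN) and blink-scale spikes in the trace above are exactly what a pipeline has to catch before the data is usable. A minimal sketch of that quality check with NumPy (the 1000 μV spike threshold is an illustrative assumption, not Voxel's actual gate):

```python
import numpy as np

# TP9 column from the table above (μV); np.nan marks dropped samples.
tp9 = np.array([820.5, np.nan, 819.8, 2941.2, 820.1])

def flag_bad_samples(x, spike_uv=1000.0):
    """Flag samples that are dropped (NaN) or exceed a spike threshold."""
    dropped = np.isnan(x)
    # nan -> 0 before comparing, so NaNs don't also trip the spike check
    spikes = np.abs(np.nan_to_num(x)) > spike_uv
    return dropped | spikes

mask = flag_bad_samples(tp9)
# mask -> [False, True, False, True, False]
```

Two of the five samples in this short window are unusable, which is typical of raw consumer EEG.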
The inflection point
The hardware arrived.
The infrastructure didn't.
The hardware arrived.
MUSE, OpenBCI, Emotiv — consumer EEG is proliferating. The devices are cheaper, more accurate, and easier to use than ever.
The software stack didn't.
Every team rebuilds the same pipelines — drivers, ingestion, artifact detection, feature extraction. Months of undifferentiated work before you can build anything real.
That's the gap we're filling.
Neural data has always been collected in isolation — one device, one lab, one study at a time. No cross-device, labeled dataset has ever been assembled at scale. Every session through Voxel becomes a standardized neural record — device-normalized, artifact-tagged, and ready to train on. For the first time, that data accumulates instead of disappearing.
< 47ms
Avg latency
256 Hz
Max sample rate
5 bands
Per window
99.9%
Uptime SLA
Normalize
Any headset. One response shape.
MUSE, OpenBCI, Emotiv — each has its own SDK and channel quirks. Voxel abstracts all of it. Same JSON schema regardless of hardware.
# MUSE (muselsl): raw LSL · device-specific · no QC
import muselsl
stream = muselsl.stream("your_mac")

# OpenBCI (BrainFlow): numpy array · 24-bit · no artifact flags
board = BoardShim(CYTON_BOARD, params)
board.start_stream()

# Emotiv (Cortex): vendor JSON · proprietary channels · no normalization · no quality score
await headset.subscribe(["eeg"])
{
"session_id": "ses_4f2a9b8c",
"device": "MUSE_2",
"bandpower": {
"TP9": { "alpha": 15.7, "sqi": 0.91 },
"AF7": { "alpha": 11.2, "sqi": 0.85 }
},
"artifacts": { "blink": false },
"latency_ms": 11
}
Same response schema across MUSE, OpenBCI, Emotiv, or custom hardware. SQI, bandpower, and artifact flags on every window — no extra config.
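Because the schema is identical across devices, client code never branches on hardware. A sketch of consuming one normalized window (the 0.8 SQI cutoff is an arbitrary example value, not a Voxel default):

```python
import json

# The normalized window from the example above, as returned for any device.
window = json.loads("""
{
  "session_id": "ses_4f2a9b8c",
  "device": "MUSE_2",
  "bandpower": {
    "TP9": { "alpha": 15.7, "sqi": 0.91 },
    "AF7": { "alpha": 11.2, "sqi": 0.85 }
  },
  "artifacts": { "blink": false },
  "latency_ms": 11
}
""")

# Device-agnostic consumption: keep alpha values from channels that pass
# the quality gate, skipping any window flagged as a blink.
clean_alpha = {
    ch: v["alpha"]
    for ch, v in window["bandpower"].items()
    if v["sqi"] >= 0.8 and not window["artifacts"]["blink"]
}
# clean_alpha -> {"TP9": 15.7, "AF7": 11.2}
```

Swap `"MUSE_2"` for any other device and this code does not change, which is the point of normalizing at ingestion.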
Solutions
Built for builders.
Designed for researchers.
BCI teams integrate once and ship. Every session they run automatically becomes a labeled training record — contributing to a dataset no single lab could build alone.
For Builders
Replace 6 SDKs with one endpoint.
MUSE, OpenBCI, Emotiv — same API call, same response shape. Artifact detection, SQI scoring, and 5-band features in under 47ms. No signal processing required.
< 47ms latency · 6+ devices · zero DSP knowledge needed
For Researchers
A dataset that builds itself.
Every session is auto-labeled with device, task, and subject metadata. 47 normalized fields per record. Export a cross-device, quality-gated training set with one API call.
47 fields/record · cross-device · SQI-gated
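Conceptually, the export boils down to an SQI gate over auto-labeled records. A local sketch of that filter (the record values are made up, and the real export runs server-side through the API; only the `device`, `task`, and SQI fields follow the text above):

```python
# Hypothetical in-memory records illustrating the quality gate.
records = [
    {"device": "MUSE_2", "task": "rest",  "sqi": 0.91},
    {"device": "CYTON",  "task": "rest",  "sqi": 0.62},
    {"device": "EPOC_X", "task": "motor", "sqi": 0.88},
]

def export_training_set(records, min_sqi=0.8):
    """SQI-gated, cross-device selection: the filter an export applies."""
    return [r for r in records if r["sqi"] >= min_sqi]

train = export_training_set(records)
# keeps the MUSE_2 and EPOC_X records; drops the low-SQI CYTON one
```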
Why it's hard
Neural data is harder than it looks.
EEG isn't text or images. Every device speaks a different dialect. Every person's signal is different. These aren't engineering inconveniences — they're the reason no universal neural dataset exists yet. Solving them at ingestion is the moat.
Cross-device normalization
MUSE, OpenBCI, and Emotiv use different electrode positions, impedance ranges, and ADC resolutions. TP9 on a MUSE is not T7 on an OpenBCI Cyton. Raw microvolts are not comparable across hardware — Voxel normalizes all of it.
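One standard way to make raw traces comparable across ADCs is per-channel z-scoring, which removes device-specific offset and scale. This sketch illustrates the idea, not Voxel's exact normalization (the sample values are invented):

```python
import numpy as np

def zscore_channel(x):
    """Per-channel z-score: after this, windows from different ADCs
    land on the same zero-mean, unit-variance scale."""
    x = np.asarray(x, dtype=float)
    return (x - x.mean()) / x.std()

muse_tp9  = np.array([820.5, 819.8, 820.1, 821.0])  # μV-scale values
cyton_ch1 = np.array([0.082, 0.081, 0.080, 0.083])  # different gain/units

z_muse, z_cyton = zscore_channel(muse_tp9), zscore_channel(cyton_ch1)
# both now zero-mean, unit-variance, and directly comparable
```

Electrode-position mapping (TP9 vs. T7) still has to happen separately; scaling alone does not align montages.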
Per-subject variability
Alpha bandpower varies 10× across people. A model trained on one subject generalizes poorly to another. Real-time per-subject calibration — within the first 30 seconds of a session — is a hard open problem we solve at ingestion time.
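A per-subject baseline can be estimated from the opening seconds of a session and used to rescale everything after it. A toy sketch of that warm-up scheme (the window count and ratio output are illustrative, not the production calibration):

```python
from collections import deque

class SubjectCalibrator:
    """Accumulate a per-subject alpha baseline over the first `warmup`
    windows (e.g. ~30 s of 1 s windows), then report subject-relative
    values so the 10x between-subject spread cancels out."""
    def __init__(self, warmup=30):
        self.warmup = warmup
        self.baseline = deque(maxlen=warmup)

    def update(self, alpha):
        if len(self.baseline) < self.warmup:
            self.baseline.append(alpha)
            return None  # still calibrating
        mean = sum(self.baseline) / len(self.baseline)
        return alpha / mean  # subject-relative alpha

cal = SubjectCalibrator(warmup=3)
for a in [10.0, 12.0, 14.0]:
    cal.update(a)            # warm-up windows: returns None
ratio = cal.update(18.0)     # baseline mean = 12.0 -> ratio 1.5
```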
Real-time artifact rejection
Blinks inject 100–300 μV spikes. Muscle noise contaminates the gamma band. Motion floods all channels. All must be detected at 256 Hz within a sub-50 ms latency budget, without stalling the signal pipeline.
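The cheapest blink check is a peak-to-peak amplitude threshold per window: O(n) per window, so it fits comfortably inside a sub-50 ms budget at 256 Hz. A sketch (the exact threshold and windowing here are assumptions):

```python
def detect_blink(window_uv, threshold_uv=100.0):
    """Flag a frontal-channel window whose peak-to-peak excursion
    exceeds the threshold. Blinks inject roughly 100-300 μV
    deflections, far above the ongoing EEG background."""
    return (max(window_uv) - min(window_uv)) > threshold_uv

clean = [2.1, -1.4, 0.8, 1.9, -0.6]   # ordinary background activity
blink = [2.0, 45.0, 180.0, 60.0, -3.0]  # blink-scale deflection
# detect_blink(clean) -> False; detect_blink(blink) -> True
```

Muscle and motion artifacts need spectral checks rather than a simple amplitude gate, which is where most of the real latency budget goes.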
No ImageNet for the brain
There is no large-scale, cross-device, labeled EEG dataset. Every model today trains on a narrow slice of one device and one task. Voxel is building the data layer that makes cross-device neural AI possible.
Build the app.
Train the model.
Every session you run through Voxel gets normalized, quality-gated, and banked as a training record — automatically. You ship faster. The dataset grows with every call.