PriorStudio
Marketplace

Stop building from scratch.

13 priors, 6 model templates, and 9 open-source PFN projects to fork, learn from, or build on.

13 priors · 6 models · 9 projects · 8 featured


Prior · Regression
★ Featured

Linear Regression

y = a·x + b + noise

The reference PFN prior. Random linear functions with Gaussian noise — the simplest demonstrable PFN training task.

PriorStudio Team
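A minimal sketch of how a task from this prior might be sampled (function name and constants are illustrative, not the actual PriorStudio API):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_linear_task(n_points=50):
    """Draw one task from the linear prior: y = a*x + b + noise."""
    a, b = rng.normal(size=2)                      # fresh slope and intercept per task
    x = rng.uniform(-1.0, 1.0, n_points)
    y = a * x + b + rng.normal(scale=0.1, size=n_points)
    return x, y

x, y = sample_linear_task()
```

Each call is one in-context "dataset"; the PFN trains on millions of such draws.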
Prior · Classification
★ Featured

Two-Moons Classification

Random interlocking half-moons + 0/1 labels

Classification analogue of linear regression. Each task is a fresh draw of the two-moons geometry — the PFN learns a generic 2D classifier.

PriorStudio Team
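One way such a task generator could look — the moon construction and noise scale below are illustrative assumptions, not the shipped implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_moons_task(n_points=100):
    """One task: a fresh random draw of the two-moons geometry with 0/1 labels."""
    n = n_points // 2
    t = rng.uniform(0, np.pi, n)
    upper = np.c_[np.cos(t), np.sin(t)]              # upper half-moon, label 0
    lower = np.c_[1 - np.cos(t), 0.5 - np.sin(t)]    # interlocking lower half-moon, label 1
    X = np.vstack([upper, lower])
    X += rng.normal(scale=0.1, size=X.shape)         # per-task jitter
    theta = rng.uniform(0, 2 * np.pi)                # random rotation: every task differs
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    X = X @ R.T
    y = np.r_[np.zeros(n), np.ones(n)]
    return X, y

X, y = sample_moons_task()
```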
Prior · Time series
★ Featured

AR(2) Time Series

y_t = φ₁·y_{t-1} + φ₂·y_{t-2} + ε_t

Real-world-shaped time series. Random stationary AR(2) coefficients per task; the PFN learns to forecast any well-behaved autoregressive series.

PriorStudio Team
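A sketch of the sampling loop, assuming rejection sampling for the AR(2) stationarity triangle (the coefficient ranges and noise scale are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_ar2_task(T=200):
    """One task: y_t = phi1*y_{t-1} + phi2*y_{t-2} + eps_t, stationary coefficients."""
    while True:  # reject draws outside the stationarity triangle
        phi1, phi2 = rng.uniform(-2, 2), rng.uniform(-1, 1)
        if abs(phi2) < 1 and phi1 + phi2 < 1 and phi2 - phi1 < 1:
            break
    y = np.zeros(T)
    eps = rng.normal(scale=0.1, size=T)
    for t in range(2, T):
        y[t] = phi1 * y[t - 1] + phi2 * y[t - 2] + eps[t]
    return y, (phi1, phi2)

y, coeffs = sample_ar2_task()
```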
Prior · Causal / discovery
★ Featured

Linear SCM Discovery

Recover the DAG behind a linear SCM x = Ax + ε

Random sparse linear structural causal models. The PFN outputs an adjacency matrix — pure structure discovery, no fitted edges.

PriorStudio Team
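A possible shape for this generator — the lower-triangular trick for acyclicity and the edge probability are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_scm_task(d=5, n=100, p_edge=0.3):
    """One task: observations from a random sparse linear SCM; target is the binary DAG."""
    adj = (rng.uniform(size=(d, d)) < p_edge).astype(float)
    adj = np.tril(adj, k=-1)             # lower-triangular => acyclic by construction
    W = adj * rng.normal(size=(d, d))    # edge weights only where edges exist
    X = np.zeros((n, d))
    for j in range(d):                   # ancestral sampling in topological order
        X[:, j] = X @ W[j] + rng.normal(size=n)
    return X, adj                        # the PFN's target is adj, not W

X, adj = sample_scm_task()
```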
Prior · Regression

Polynomial Regression

y = Σ c_k · x^k + noise

Random polynomial functions up to degree D. Generalises linear regression with curvature.

PriorStudio Team
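As a sketch, with an illustrative maximum degree and coefficient scale:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_poly_task(n_points=50, max_degree=4):
    """One task: y = sum_k c_k * x**k + noise, random degree D and coefficients."""
    D = rng.integers(1, max_degree + 1)
    c = rng.normal(size=D + 1)                     # c[k] is the coefficient of x**k
    x = rng.uniform(-1, 1, n_points)
    y = np.polyval(c[::-1], x) + rng.normal(scale=0.1, size=n_points)
    return x, y

x, y = sample_poly_task()
```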
Prior · Regression

GP Regression (RBF)

y ~ GP(0, k_RBF(x, x'))

Functions sampled from a Gaussian Process with an RBF kernel. PFN learns to do GP regression at inference without solving the kernel system.

Müller et al.
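A minimal version of GP sampling with an RBF kernel (the lengthscale and jitter below are illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_gp_task(n_points=60, lengthscale=0.3):
    """One task: a function drawn from GP(0, k_RBF) at random inputs."""
    x = np.sort(rng.uniform(-1, 1, n_points))
    K = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / lengthscale ** 2)
    y = rng.multivariate_normal(np.zeros(n_points), K + 1e-6 * np.eye(n_points))
    return x, y

x, y = sample_gp_task()
```

Training on these draws is what lets the PFN approximate the GP posterior without ever inverting a kernel matrix at inference time.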
Prior · Classification

Gaussian-Mixture Classification

Per-task GMM with K components → K-class labels

Random Gaussian mixtures in D dimensions. Trains a PFN that does Bayesian-optimal classification on any well-separated mixture.

PriorStudio Team
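A sketch of one GMM task — the mean spread controlling separation is an illustrative parameter:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_gmm_task(n_points=100, K=3, d=2):
    """One task: K Gaussian components in d dims; labels are component indices."""
    means = rng.normal(scale=3.0, size=(K, d))     # larger scale => better separated
    y = rng.integers(0, K, n_points)
    X = means[y] + rng.normal(size=(n_points, d))  # unit-covariance components
    return X, y

X, y = sample_gmm_task()
```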
Prior · Classification

Logistic w/ Feature Interactions

σ(w·x + interactions)

Logistic regression with pairwise feature interactions baked in. Trains a PFN that picks up cross-feature signal automatically.

Community
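One way to bake in pairwise interactions (the upper-triangular interaction matrix is an illustrative construction):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_logistic_task(n_points=100, d=4):
    """One task: labels from sigma(w.x + sum_{i<j} W_ij * x_i * x_j)."""
    w = rng.normal(size=d)
    W = np.triu(rng.normal(size=(d, d)), k=1)      # pairwise interaction weights
    X = rng.normal(size=(n_points, d))
    logits = X @ w + np.einsum('ni,ij,nj->n', X, W, X)
    y = (rng.uniform(size=n_points) < 1 / (1 + np.exp(-logits))).astype(int)
    return X, y

X, y = sample_logistic_task()
```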
Prior · Time series

Sine Wave

y_t = a·sin(ω·t + φ) + noise

The simplest temporal PFN. Random amplitude, frequency, and phase per task.

PriorStudio Team
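A sketch, with illustrative ranges for amplitude, frequency, and noise:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_sine_task(T=100):
    """One task: y_t = a*sin(omega*t + phi) + noise with random a, omega, phi."""
    a = rng.uniform(0.5, 2.0)
    omega = rng.uniform(0.1, 1.0)
    phi = rng.uniform(0, 2 * np.pi)
    t = np.arange(T)
    y = a * np.sin(omega * t + phi) + rng.normal(scale=0.1, size=T)
    return t, y

t, y = sample_sine_task()
```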
Prior · Time series

Seasonal + Trend

y_t = trend(t) + seasonal(t) + noise

Classic decomposition: slow trend plus periodic component plus noise. Demo-worthy for retail / energy / web traffic.

Community
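A minimal additive decomposition sampler — a linear trend and single sinusoidal season are simplifying assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_seasonal_task(T=120, period=12):
    """One task: y_t = trend(t) + seasonal(t) + noise."""
    t = np.arange(T)
    trend = rng.normal(scale=0.05) * t                          # random slow drift
    seasonal = rng.uniform(0.5, 2.0) * np.sin(2 * np.pi * t / period)
    y = trend + seasonal + rng.normal(scale=0.1, size=T)
    return t, y

t, y = sample_seasonal_task()
```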
Prior · Probabilistic

Bayesian Coin Flip

p ~ Beta(α, β); flips ~ Bernoulli(p)

Textbook conjugate prior. Direct evidence the PFN has learned Bayesian inference.

PriorStudio Team
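The generator is a few lines; because the prior is conjugate, the exact posterior Beta(α + heads, β + tails) gives a ground truth to compare the PFN against (hyperparameters below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_coin_task(n_flips=30, alpha=2.0, beta=2.0):
    """One task: p ~ Beta(alpha, beta); flips ~ Bernoulli(p)."""
    p = rng.beta(alpha, beta)
    flips = (rng.uniform(size=n_flips) < p).astype(int)
    return flips, p

flips, p = sample_coin_task()
```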
Prior · Probabilistic

Hierarchical Normal

μ_g ~ N(μ_0, τ); y ~ N(μ_g, σ)

Two-level normal model. Groups share information through a population mean — the canonical multi-level setup.

Community
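A sketch of the two-level generative process (group counts and variance parameters are illustrative defaults):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_hierarchical_task(n_groups=5, n_per_group=20, mu0=0.0, tau=1.0, sigma=0.5):
    """One task: mu_g ~ N(mu0, tau); y_gi ~ N(mu_g, sigma)."""
    mu_g = rng.normal(mu0, tau, n_groups)                  # group-level means
    y = rng.normal(mu_g[:, None], sigma, (n_groups, n_per_group))
    return y, mu_g

y, mu_g = sample_hierarchical_task()
```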
Prior · Causal / discovery

Causal Chain Discovery

Detect X → Y → Z chains

Specialised SCM prior with chain-shaped DAGs. Faster to learn than full ER-DAGs and matches a common scientific use case.

Community
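Restricting the DAG family to chains makes the generator nearly trivial — a sketch (chain length and noise scale are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_chain_task(d=4, n=100):
    """One task: X_0 -> X_1 -> ... -> X_{d-1}; target is the fixed chain adjacency."""
    X = np.zeros((n, d))
    X[:, 0] = rng.normal(size=n)
    for j in range(1, d):
        X[:, j] = rng.normal() * X[:, j - 1] + rng.normal(scale=0.5, size=n)
    adj = np.eye(d, k=-1)          # adj[j, j-1] = 1: edge from X_{j-1} into X_j
    return X, adj

X, adj = sample_chain_task()
```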
Model · Regression
★ Featured

2-Layer Regression Transformer

embed → 2× attn → scalar head

Smallest model that consistently solves linear-regression-shaped priors. Good first pass for any 1D scalar-output task.

tabular_embedder (d=64) → transformer_encoder × 2 → scalar_head
PriorStudio Team
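The embed → 2× attn → scalar head shape can be sketched in plain numpy with random, untrained weights — purely to show the tensor flow, not the actual PriorStudio implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def attention(X, d_model):
    """Single-head self-attention with random weights -- a shape sketch only."""
    Wq, Wk, Wv = (rng.normal(scale=d_model ** -0.5, size=(d_model, d_model))
                  for _ in range(3))
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    A = Q @ K.T / np.sqrt(d_model)
    A = np.exp(A - A.max(axis=-1, keepdims=True))   # numerically stable softmax
    A /= A.sum(axis=-1, keepdims=True)
    return A @ V

d_model, n = 64, 32
X = rng.normal(size=(n, 2))                   # each (x, y) pair is one token
W_embed = rng.normal(size=(2, d_model))
H = X @ W_embed                               # tabular_embedder (d=64)
for _ in range(2):                            # transformer_encoder x 2 (residual)
    H = H + attention(H, d_model)
w_head = rng.normal(size=d_model)
y_hat = H @ w_head                            # scalar_head: one scalar per point
```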
Model · Classification

2-Layer Binary Classifier

embed → 2× attn → sigmoid head

Same backbone as the regression baseline; the scalar head emits one logit per point. Pairs with the two-moons / GMM priors.

tabular_embedder (d=64) → transformer_encoder × 2 → scalar_head (d_out=1)
PriorStudio Team
Model · Time series
★ Featured

4-Layer Temporal Transformer

embed → 4× attn → forecast head

Deeper backbone for sine / AR / seasonal priors. Wider d_model and four attention layers give enough capacity for non-trivial dynamics.

tabular_embedder (d=128) → transformer_encoder × 4 → scalar_head
PriorStudio Team
Model · Causal / discovery

Discovery Encoder

embed → attn → causal pool → discovery head

Outputs a d×d adjacency matrix from N×d observations. The default architecture for linear-SCM / chain-SCM discovery priors.

tabular_embedder (d=128) → transformer_encoder × 3 → causal_attention_pool → discovery_head
PriorStudio Team
Model · Classification

Tabular Foundation (6L)

embed → 6× attn → estimation head

Wider, deeper tabular backbone for harder priors (hierarchical models, GMM with many classes, interaction-heavy logistic).

tabular_embedder (d=256) → transformer_encoder × 6 (heads=8) → estimation_head
PriorStudio Team
Model · Causal / discovery

Treatment-Effect Stack

embed → attn → estimation head

For causal effect estimation tasks where the output is a real-valued estimate per query. Pairs with potential-outcome priors.

tabular_embedder (d=128) → transformer_encoder × 4 → estimation_head
PriorStudio Team
Project · Probabilistic · ✓ Importable
★ Featured

PFNs

Reference library for training Prior-Data Fitted Networks

AutoML Freiburg's maintained PFN library — the canonical implementation for training transformer-based PFNs that approximate Bayesian prediction. Foundation for TabPFN, PFNs4BO, LC-PFN and most downstream PFN work.

Müller, Hollmann, Hutter · AutoML Freiburg
Apache-2.0
Project · Classification · ◐ Partial import
★ Featured

TabPFN

Foundation model for tabular classification + regression

In-context-learning transformer that predicts on small tabular datasets in seconds, no per-dataset training. v2 was published in Nature (2025) and matches or beats tuned tree ensembles.

Hollmann, Müller, Hutter · Prior Labs
Prior Labs License (Apache-2 + attribution)
Project · Time series · ✓ Importable

TabPFN-TS

Zero-shot univariate time-series forecasting via TabPFN v2

Frames forecasting as tabular regression and runs TabPFN v2 with lightweight feature engineering for zero-shot point + probabilistic forecasts. Handles exogenous features (weather, holidays) without preprocessing.

Prior Labs
Apache-2.0
Project · Classification · ✓ Importable

MotherNet

Hypernetwork that emits a trained tabular classifier in one pass

Foundational hypernetwork trained on synthetic tabular tasks: prompted with a training set, it emits the weights of a small child neural network without gradient descent. Faster inference than TabPFN.

Microsoft Research
Apache-2.0
Project · Optimization · ✓ Importable

PFNs4BO

In-context Bayesian optimisation via PFNs

ICML 2023 implementation using PFNs as surrogates for Bayesian optimisation, replacing Gaussian processes with a pre-trained transformer that predicts posteriors in one forward pass.

Müller, Feurer, Hollmann, Hutter
Apache-2.0
Project · Optimization · ✓ Importable

ifBO

In-context freeze-thaw Bayesian optimization

ICML 2024. Uses a PFN as a freeze-thaw surrogate, predicting learning-curve continuations to decide which configurations to keep training. Anytime-efficient hyperparameter search.

Rakotoarison, Adriaensen et al. · AutoML
MIT
Project · Probabilistic · ✓ Importable

LC-PFN

Bayesian learning-curve extrapolation via a PFN

NeurIPS 2023. Predicts the posterior over a learning curve's continuation given a few initial points, using a PFN trained on a parametric curve prior. Drop-in surrogate for early-stopping.

Adriaensen, Rakotoarison, Müller, Hutter
MIT
Project · Classification · ✓ Importable

TabICL

Tabular foundation model via in-context learning

Permissive BSD-licensed tabular foundation model in the PFN family. Strong benchmark results plus a forecast sub-module derived from TabPFN-TS.

Qu, Holzmüller, Le Morvan · Soda team, Inria
BSD-3-Clause
Project · Probabilistic · ✓ Importable

KinPFN

PFN approximating RNA folding first-passage-time distributions

ICLR 2025. PFN applied to RNA kinetics: a transformer trained on synthetic kinetic priors directly predicts the CDF of first passage times, replacing expensive Kinfold simulations.

Scheuer, Runge et al. · AutoML Freiburg
Apache-2.0
Looking for more? Awesome-Prior-Data-Fitted-Networks ↗ keeps a curated index of papers, code, and applications across the PFN literature.