ProfitOps PriorStudio
Based on Prior-Data Fitted Networks

Studio for foundation models.

Design priors, compose architectures, train, test, share — one place to build a foundation model for your domain. No data-generation code to write. No infra to glue.

Built on the prior-data fitted networks (PFN) architecture introduced in Müller et al., ICLR 2022. A prior is just Python code that generates synthetic training data — train on it once, get a model that does in-context inference on any real dataset of the same shape.
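
A sketch of the idea (the `sample_batch` name and signature below are illustrative, not the exact interface):

```python
# Minimal sketch of a prior: each call fabricates one synthetic "task".
# `sample_batch` and its signature are illustrative, not the exact API.
import numpy as np

def sample_batch(n_points=100, n_features=5, seed=None):
    """One synthetic dataset: a random linear function plus noise."""
    rng = np.random.default_rng(seed)
    w = rng.normal(size=n_features)                    # random weights define the task
    X = rng.normal(size=(n_points, n_features))        # random inputs
    y = X @ w + rng.normal(scale=0.1, size=n_points)   # noisy linear targets
    return X, y
```

Training samples tasks like this over and over, so the model learns the whole family of functions rather than any single dataset.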

[Screenshot: Project overview — prior · model · eval · run]

Build models that

Forecast time series
sensor · demand · traffic
Classify tabular data
churn · fraud · risk
Discover causal structure
process · root cause
Run Bayesian inference
in-context posteriors

Made for

Whoever's closest to the prior.

ML engineers

Stop hand-rolling training loops and eval harnesses for every domain. Author once; the studio runs everything.

Data scientists

Get a working in-context model on your problem in minutes — fast enough to make it a prototyping tool, not a quarter-long project.

Domain experts

You know the physics or the data better than any ML team. Start from a prior that matches your domain — no PyTorch required.

Inside the studio

Five tools, one workspace.

Everything you need to take a foundation model from idea to a trained, shareable artifact — and nothing you don't.

[Screenshot: Prior designer — "Pick a family to start" picker]
01

Designer

Author priors visually, in math, or in Python.

Block-diagram editor for the visual mode. Equation-form editor for the math mode. Raw prior.py for full control. Five built-in families to start from.

[Screenshot: Model designer — stacked block list with config inputs]
02

Composer

Compose architectures from blocks.

Drag tabular embedders, transformer encoders, attention pools, and task heads into a model spec. Add your own blocks with a single decorator.
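
A sketch of what a custom block might look like (the `register_block` decorator name is illustrative, not the exact API; a stand-in is defined so the snippet runs on its own):

```python
# Sketch of registering a custom block. `register_block` is an assumed
# decorator name, not a confirmed API; the stand-in below makes the
# snippet self-contained.
import torch.nn as nn

def register_block(name):          # stand-in for the studio's decorator
    def wrap(cls):
        return cls
    return wrap

@register_block("mean_pool_head")
class MeanPoolHead(nn.Module):
    """Pools token embeddings over the sequence and projects to outputs."""
    def __init__(self, d_model, n_out):
        super().__init__()
        self.proj = nn.Linear(d_model, n_out)

    def forward(self, x):                  # x: (batch, tokens, d_model)
        return self.proj(x.mean(dim=1))    # mean over tokens, then project
```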

[Screenshot: Priors list — curated catalog of ready-to-fork priors]
03

Marketplace

Start from a curated library of priors.

13 ready-to-train priors plus 6 model templates and 9 OSS PFN projects. Fork any of them into your workspace.

[Screenshot: Runs list — completed, queued, and failed runs with their statuses]
04

Runner

Train in the cloud, no setup.

Click Run and watch the loss curve update live in your browser — training happens on our infrastructure. CPU included during early access; GPU on demand as you scale.

[Screenshot: Public share page — Try-it widget with predictions table]
05

Share

Public Try-it links per trained model.

Generate a public URL for any completed run. Recipients open a Try-it page, paste their data, get predictions — no sign-up required.

From the marketplace

Start from a prior. Train. Done.

Every prior is a Python file plus a parameter spec — built by the community, validated against a baseline, ready to fork into your project.
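
A hypothetical shape for that pair (the spec schema below is an illustration, not the marketplace's exact format):

```python
# Illustrative parameter spec accompanying a prior.py; the schema is a
# sketch, not the marketplace's exact format.
PARAMS = {
    "n_points":   {"type": "int",   "default": 100, "min": 10,  "max": 10_000},
    "n_features": {"type": "int",   "default": 5,   "min": 1,   "max": 512},
    "noise_std":  {"type": "float", "default": 0.1, "min": 0.0, "max": 1.0},
}
```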

Browse all 28
How it works

From idea to trained model in three steps.

1

Pick or design a prior

Install a prior from the marketplace, fork an existing one, or design your own in the visual editor. A prior is just Python that generates synthetic data the model can train on.

2

Train

Hit ▶ Run. We spin up the training job on our infrastructure. You watch the loss curve update live and inspect every step; if something's off, the run fails loudly with structured error logs.

3

Test and share

Try predictions in the in-product widget. Generate a public link, send it to a customer or colleague — they paste their data and see predictions without an account.

How is this different?

Traditional ML stack vs. PriorStudio.

Most teams ship a foundation model by gluing together six tools and writing a lot of YAML. PriorStudio collapses the stack so you spend time on the prior, not the infra.

Traditional ML stack → PriorStudio

Months of data labeling per model → Trains on synthetic priors — no labels needed
Hand-rolled training loop per project → The studio runs the loop
Fine-tune a Hugging Face checkpoint → Train a model that does in-context inference
Hand-built UI to show stakeholders → One-click public Try-it links per model
GPU bills before you have a prototype → Free during early access · GPU on demand later

What your customer sees

[Screenshot: Public share page — Try-it widget with context, query, and predictions table]
A public Try-it link — no sign-up, no account, just paste data and get predictions.
Common questions

Things people ask first.

Short answers to the questions that come up most. Reach out at hello@profitops.ai if yours isn't here.

What's a foundation model? How is this different from GPT or Claude?
A foundation model is any model trained on a lot of data and then reused as the base for many downstream tasks. GPT and Claude are foundation models for language — they learned from internet-scale text and can write, summarise, and code. PriorStudio is for foundation models in your domain: tabular data, time series, causal structure. These models don't read text; they read your data and do in-context inference (forecast, classify, discover structure) without fine-tuning. Same paradigm, different substrate — GPT trained on the web; PFN-style models train on synthetic priors you (or the community) author.
What's a prior, in plain English?
A `prior.py` file that returns synthetic training samples — random linear functions, random AR(2) time series, random Bayesian outcomes, whatever fits your domain. Fork one from the marketplace or write your own in the Designer.
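
A sketch of the AR(2) case mentioned above (function name and defaults are illustrative):

```python
# Illustrative AR(2) prior: each call returns one random series.
# The name `sample_series` and its defaults are assumptions.
import numpy as np

def sample_series(length=200, seed=None):
    """y[t] = a1*y[t-1] + a2*y[t-2] + noise, with random coefficients."""
    rng = np.random.default_rng(seed)
    a1, a2 = rng.uniform(-0.9, 0.9, size=2)  # random dynamics; not every draw is stationary
    y = np.zeros(length)
    for t in range(2, length):
        y[t] = a1 * y[t - 1] + a2 * y[t - 2] + rng.normal(scale=0.1)
    return y
```
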
Do I need labelled data?
No. PriorStudio trains on synthetic priors — Python code that generates data. The trained model then does in-context inference on your real data at runtime, without ever being shown a labelled example.
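
Conceptually, runtime inference is one call over your labelled context rows and unlabelled query rows. `PFNModel` below is a runnable stand-in, not the real class:

```python
# Conceptual sketch of in-context inference. `PFNModel` is a stand-in
# (a real PFN attends over the labelled context to predict the queries);
# it is defined here only so the snippet runs on its own.
import numpy as np

class PFNModel:
    def predict(self, ctx_X, ctx_y, qry_X):
        # stand-in logic: return the context mean for every query row
        return np.full(len(qry_X), ctx_y.mean())

model = PFNModel()                                        # pretend: already trained on a prior
ctx_X, ctx_y = np.random.rand(50, 4), np.random.rand(50)  # your real rows and labels (context)
qry_X = np.random.rand(10, 4)                             # rows you want predictions for
preds = model.predict(ctx_X, ctx_y, qry_X)                # one forward pass; no fine-tuning
```
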
How is this different from Hugging Face?
Hugging Face hosts pre-trained models built by other people. PriorStudio is where you author the prior, train your own foundation model on it, and deploy. Different layer of the stack — HF gives you the finished cake; we give you the oven.
What if my use case isn't in the marketplace?
Open the Designer and write your own prior in Python. Five built-in families to start from (regression / classification / time-series / Bayesian / causal), or start blank. The marketplace grows from there — we add new priors regularly and accept community contributions.
Is my data private?
Yes. Priors are synthetic — your real data is never used for training. At inference time, data is sent only to your account's API endpoint, isolated per organisation, and is never used to improve the platform.
What does pricing look like?
Free during early access — no card, no usage caps. Tiered pricing (Free / Pro / Team) goes live when we leave beta; we will publish the schedule before the first paid plan is enabled. No surprise bills.
Can I self-host or run on my own GPU?
The training subprocess is portable Python, so `priorstudio run` works from any machine. Hosted training is the default and what we recommend; self-hosted compute (Modal / Vast / RunPod / your own cluster) is scaffolded today and ships fully in v0.6 for teams with compliance constraints.

Your data deserves its own studio.

Free during early access. No credit card. No installs. We host the training.