# Migrating from Hydra
This guide helps you migrate existing projects from Hydra to Hyperparameter. We'll cover the key differences and provide side-by-side comparisons.
## Why Migrate?
| Aspect | Hydra | Hyperparameter |
|---|---|---|
| Performance | Pure Python (slower in loops) | Rust backend (6-850x faster) |
| Dependencies | Heavy (antlr4, omegaconf, etc.) | Minimal (only toml) |
| Config Style | Top-down (pass cfg everywhere) | Bottom-up (inject into functions) |
| Scoping | Static (compose at startup) | Dynamic (change at runtime) |
## Quick Comparison
### Defining Parameters
Hydra:

```yaml
# config.yaml
model:
  hidden_size: 256
  dropout: 0.1
```

```python
# main.py
import hydra
from omegaconf import DictConfig

@hydra.main(config_path=".", config_name="config")
def main(cfg: DictConfig):
    print(cfg.model.hidden_size)  # 256
```
Hyperparameter:

```toml
# config.toml
[model]
hidden_size = 256
dropout = 0.1
```

```python
# main.py
import hyperparameter as hp

@hp.param("model")
def build_model(hidden_size=128, dropout=0.0):
    print(hidden_size)  # 256 (from config)

if __name__ == "__main__":
    cfg = hp.config("config.toml")
    with hp.scope(**cfg):
        build_model()
```
### Config Composition (Multiple Files)
Hydra:

```yaml
# config.yaml
defaults:
  - model: resnet
  - dataset: imagenet
  - _self_
```

```yaml
# model/resnet.yaml
name: resnet50
layers: 50
```
Hyperparameter:

```python
import hyperparameter as hp

# Load and merge multiple configs (later files override earlier)
cfg = hp.config(["base.toml", "model/resnet.toml", "dataset/imagenet.toml"])

with hp.scope(**cfg):
    train()
```
### Variable Interpolation
Both libraries support the same `${key}` interpolation syntax.
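For example (the OmegaConf syntax is standard; the TOML counterpart assumes Hyperparameter resolves `${...}` the same way, per the claim above):

```yaml
# config.yaml (Hydra / OmegaConf)
paths:
  root: /data
  train: ${paths.root}/train
```

```toml
# config.toml (Hyperparameter)
[paths]
root = "/data"
train = "${paths.root}/train"
```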
### Schema Validation
Hydra (with dataclass):

```python
from dataclasses import dataclass
from hydra.core.config_store import ConfigStore

@dataclass
class ModelConfig:
    hidden_size: int = 256
    dropout: float = 0.1

cs = ConfigStore.instance()
cs.store(name="model_config", node=ModelConfig)
```
Hyperparameter:

```python
from dataclasses import dataclass
import hyperparameter as hp

@dataclass
class ModelConfig:
    hidden_size: int = 256
    dropout: float = 0.1

# Direct validation, no ConfigStore needed
cfg = hp.config("config.toml", schema=ModelConfig)
print(cfg.hidden_size)  # IDE autocomplete works!
```
### Command Line Overrides
Hydra (positional `key=value` overrides):

```shell
python train.py model.hidden_size=512 model.dropout=0.2
```
Hyperparameter:

```shell
python train.py -D model.hidden_size=512 -D model.dropout=0.2

# Or with config file:
python train.py -C config.toml -D model.hidden_size=512
```
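Each `-D` flag carries one dotted `key=value` override. Conceptually the flags accumulate into a flat override dict; a stdlib `argparse` sketch of that idea (not Hyperparameter's actual parser):

```python
import argparse

parser = argparse.ArgumentParser()
parser.add_argument("-D", "--define", action="append", default=[],
                    metavar="KEY=VALUE", help="parameter override")
parser.add_argument("-C", "--config", default=None, help="config file path")

# Simulate: python train.py -C config.toml -D model.hidden_size=512 -D model.dropout=0.2
args = parser.parse_args(["-C", "config.toml",
                          "-D", "model.hidden_size=512",
                          "-D", "model.dropout=0.2"])
overrides = dict(pair.split("=", 1) for pair in args.define)
print(overrides)  # {'model.hidden_size': '512', 'model.dropout': '0.2'}
```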
## Dynamic Scoping (Hyperparameter Exclusive)
This is something Hydra cannot do easily:
```python
import hyperparameter as hp

@hp.param("layer")
def create_layer(dropout=0.1):
    return f"Layer with dropout={dropout}"

# Different dropout for different layers - no code change needed!
with hp.scope(**{"layer.dropout": 0.1}):
    layer1 = create_layer()  # dropout=0.1

with hp.scope(**{"layer.dropout": 0.5}):
    layer2 = create_layer()  # dropout=0.5
```
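Conceptually, dynamic scoping works like a stack of parameter frames where the innermost scope wins. A rough stdlib sketch of the idea (an illustration only, not Hyperparameter's implementation, which lives in its Rust backend):

```python
import contextlib

_stack = [{}]  # stack of parameter frames; top of stack is the active scope

@contextlib.contextmanager
def scope(**params):
    """Push a new frame that inherits from the current one."""
    _stack.append({**_stack[-1], **params})
    try:
        yield
    finally:
        _stack.pop()

def get_param(key, default=None):
    return _stack[-1].get(key, default)

with scope(**{"layer.dropout": 0.1}):
    print(get_param("layer.dropout"))      # 0.1
    with scope(**{"layer.dropout": 0.5}):  # nested scope overrides the outer one
        print(get_param("layer.dropout"))  # 0.5
    print(get_param("layer.dropout"))      # back to 0.1 after the inner scope exits
```

Because the scope is consulted at call time rather than at construction time, the same function can see different values in different regions of the program.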
## Migration Checklist
- [ ] Config Files: Convert YAML to TOML/JSON (or keep YAML with PyYAML installed)
- [ ] Decorators: Replace `@hydra.main` with `@hp.param` + `hp.launch()`
- [ ] Config Access: Replace `cfg.x.y` with `hp.scope.x.y | default` or function injection
- [ ] Composition: Replace `defaults` list with `hp.config([file1, file2])`
- [ ] Interpolation: Same `${key}` syntax works
- [ ] CLI: Replace positional overrides with `-D key=value`
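When porting code that passes Hydra's nested `cfg` around, it can help to flatten the nested dict into the dotted keys that `hp.scope` accepts. A hypothetical helper (`flatten_cfg` is not part of either library):

```python
def flatten_cfg(cfg: dict, prefix: str = "") -> dict:
    """Flatten {'model': {'hidden_size': 256}} into {'model.hidden_size': 256}."""
    flat = {}
    for key, value in cfg.items():
        dotted = f"{prefix}.{key}" if prefix else key
        if isinstance(value, dict):
            flat.update(flatten_cfg(value, dotted))  # recurse into nested tables
        else:
            flat[dotted] = value
    return flat

nested = {"model": {"hidden_size": 256, "dropout": 0.1}, "seed": 42}
print(flatten_cfg(nested))
# {'model.hidden_size': 256, 'model.dropout': 0.1, 'seed': 42}
```

The result can be splatted directly: `with hp.scope(**flatten_cfg(nested)): ...`.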
## What You'll Gain
- Performance: 6x faster in dynamic access, 850x faster with injection
- Simplicity: No ConfigStore, no `@hydra.main` boilerplate
- Flexibility: Dynamic scoping for complex control flows
- Lightweight: Fewer dependencies, faster startup
## What You'll Lose (For Now)
- Sweeper Plugins: No built-in Optuna/Ax integration (but easy to implement manually)
- Launcher Plugins: No SLURM/submitit integration
- Output Management: No automatic `outputs/date/time` directories
- Tab Completion: No shell autocomplete for config options
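The manual sweep mentioned above is easy to script. A sketch that enumerates override combinations with `itertools.product`; the commented `hp.scope` lines show where a Hyperparameter-based training call would plug in (`train()` is a placeholder for your entry point):

```python
import itertools

search_space = {
    "model.hidden_size": [128, 256, 512],
    "model.dropout": [0.1, 0.3],
}

# One trial dict per point in the Cartesian product of the value lists
keys = list(search_space)
trials = [dict(zip(keys, values))
          for values in itertools.product(*search_space.values())]

for overrides in trials:
    # with hp.scope(**overrides):
    #     score = train()  # your training entry point
    print(overrides)

print(len(trials))  # 6 combinations
```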
These features may be added in future versions based on community feedback.