The inference API
These docs refer to the PiPPy integration.
accelerate.prepare_pippy
( model: torch.nn.Module, split_points: Union[str, List[str]] = 'auto', no_split_module_classes: Optional[List[str]] = None, example_args: Optional[tuple] = (), example_kwargs: Optional[dict] = None, num_chunks: Optional[int] = None, gather_output: Optional[bool] = False )
Parameters

- model (`torch.nn.Module`) — The model to split for pipeline-parallel inference.
- split_points (`str` or `List[str]`, defaults to `'auto'`) — How to generate the split points and chunk the model across each GPU. `'auto'` will find the best balanced split given any model; otherwise, this should be a list of the names of layers in the model to split at.
- no_split_module_classes (`List[str]`) — A list of class names for layers that should not be split.
- example_args (tuple of model inputs) — The expected inputs for a model that uses order-based (positional) inputs. Recommended to use when possible.
- example_kwargs (dict of model inputs) — The expected inputs for a model that uses dictionary-based (keyword) inputs. This is a highly limiting structure that requires the same keys to be present at every inference call. Not recommended unless the prior condition holds in all cases.
- num_chunks (`int`, defaults to the number of available GPUs) — The number of chunks the pipeline will split the input into. By default one chunk is assigned per GPU, but this can be tuned; in general one should have `num_chunks >= num_gpus`.
- gather_output (`bool`, defaults to `False`) — If `True`, the output from the last GPU (which holds the true output) is sent to all GPUs.
Wraps `model` for pipeline-parallel inference.
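As a minimal sketch of how this fits together (the toy model, shapes, and launch command below are illustrative assumptions, not part of the API): build the model on CPU, pass an example input so the split points can be traced, then run the returned pipeline under a distributed launcher.

```python
import torch
import torch.nn as nn

from accelerate import prepare_pippy

# Illustrative toy model; any torch.nn.Module with order-based inputs works.
model = nn.Sequential(
    nn.Linear(512, 1024),
    nn.ReLU(),
    nn.Linear(1024, 1024),
    nn.ReLU(),
    nn.Linear(1024, 10),
)
model.eval()

# Example input used to trace the model and place the split points.
# Keep it on CPU; each pipeline stage is moved to its own device internally.
example_input = torch.randn(8, 512)

# 'auto' balances the split across however many processes this script
# is launched on (e.g. `accelerate launch --num_processes 2 script.py`).
model = prepare_pippy(model, split_points="auto", example_args=(example_input,))

with torch.no_grad():
    output = model(example_input)

# Unless gather_output=True was passed, only the process holding the last
# stage receives the real output; other ranks should not rely on it.
```

Note that this is meant to be run under a multi-process launcher such as `accelerate launch` or `torchrun`, so that each GPU hosts one stage; a plain single-process run will not set up the distributed state the pipeline needs.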