MaskRCNN: Optimized for Qualcomm Devices

Mask R-CNN is a machine learning model that extends Faster R-CNN to perform instance segmentation by detecting objects in an image while simultaneously generating a high-quality segmentation mask for each instance. It adds a branch for predicting segmentation masks in parallel with the existing branch for bounding box recognition.

This is based on the implementation of MaskRCNN found here. This repository contains pre-exported model files optimized for Qualcomm® devices. You can use the Qualcomm® AI Hub Models library to export the model with custom configurations. More details on model performance across various devices can be found here.

Qualcomm AI Hub Models uses Qualcomm AI Hub Workbench to compile, profile, and evaluate this model. Sign up to run these models on a hosted Qualcomm® device.

Getting Started

There are two ways to deploy this model on your device:

Option 1: Download Pre-Exported Models

Below are pre-exported model assets ready for deployment.

| Runtime | Precision | Chipset | SDK Versions | Download |
|---------|-----------|---------|--------------|----------|
| QNN_DLC | float | Universal | QAIRT 2.43 | Download |

For more device-specific assets and performance metrics, visit MaskRCNN on Qualcomm® AI Hub.

Option 2: Export with Custom Configurations

Use the Qualcomm® AI Hub Models Python library to compile and export the model with your own:

  • Custom weights (e.g., fine-tuned checkpoints)
  • Custom input shapes
  • Target device and runtime configurations

This option is ideal if you need to customize the model beyond the default configuration provided here.
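As a sketch of what a custom export might look like (the model id `mask_rcnn` and the `--device` flag are assumptions based on the library's usual conventions; check the GitHub repository for the exact CLI):

```shell
# Hypothetical export flow -- package name, model id, and flags are
# assumptions; see the qai-hub-models repository for the exact interface.
pip install qai-hub-models

# Compile and profile on a hosted Qualcomm device via Qualcomm AI Hub.
python -m qai_hub_models.models.mask_rcnn.export \
    --device "Samsung Galaxy S24 (Family)"
```

Running the export requires a Qualcomm AI Hub account and API token, since compilation and profiling jobs execute on hosted devices.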

See our repository for MaskRCNN on GitHub for usage instructions.

Model Details

Model Type: Semantic segmentation

Model Stats:

  • Model checkpoint: Mask R-CNN ResNet-50 FPN V2
  • Input resolution: 800x800
  • Number of output classes: 91
  • Number of parameters: 46.4M
  • Model size (float): 177 MB
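The stats above are self-consistent: 46.4M float32 parameters at 4 bytes each come to roughly 177 MiB, matching the listed model size. A quick arithmetic check:

```python
# Sanity-check the listed model size from the parameter count above.
params = 46.4e6          # "Number of parameters: 46.4M"
bytes_per_param = 4      # float32, the "float" precision variant
size_mib = params * bytes_per_param / 2**20

print(f"{size_mib:.0f} MiB")  # ~177, matching "Model size (float): 177 MB"
```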

Performance Summary

| Model | Runtime | Precision | Chipset | Inference Time (ms) | Peak Memory Range (MB) | Primary Compute Unit |
|-------|---------|-----------|---------|---------------------|------------------------|----------------------|
| MaskRCNNProposalGenerator | QNN_DLC | float | Snapdragon® 8 Elite Gen 5 Mobile | 56.939 | 7 - 1326 | NPU |
| MaskRCNNProposalGenerator | QNN_DLC | float | Snapdragon® X2 Elite | 58.441 | 7 - 7 | NPU |
| MaskRCNNProposalGenerator | QNN_DLC | float | Snapdragon® X Elite | 139.008 | 7 - 7 | NPU |
| MaskRCNNProposalGenerator | QNN_DLC | float | Snapdragon® 8 Gen 3 Mobile | 105.679 | 7 - 1393 | NPU |
| MaskRCNNProposalGenerator | QNN_DLC | float | Qualcomm® QCS8275 (Proxy) | 415.448 | 2 - 1195 | NPU |
| MaskRCNNProposalGenerator | QNN_DLC | float | Qualcomm® QCS8550 (Proxy) | 148.486 | 7 - 751 | NPU |
| MaskRCNNProposalGenerator | QNN_DLC | float | Qualcomm® SA8775P | 167.193 | 1 - 1195 | NPU |
| MaskRCNNProposalGenerator | QNN_DLC | float | Qualcomm® QCS8450 (Proxy) | 210.601 | 8 - 1353 | NPU |
| MaskRCNNProposalGenerator | QNN_DLC | float | Qualcomm® SA7255P | 415.448 | 2 - 1195 | NPU |
| MaskRCNNProposalGenerator | QNN_DLC | float | Qualcomm® SA8295P | 168.736 | 0 - 1139 | NPU |
| MaskRCNNProposalGenerator | QNN_DLC | float | Snapdragon® 8 Elite For Galaxy Mobile | 71.498 | 7 - 1304 | NPU |
| MaskRCNNROIHead | QNN_DLC | float | Snapdragon® 8 Elite Gen 5 Mobile | 93.517 | 51 - 854 | NPU |
| MaskRCNNROIHead | QNN_DLC | float | Snapdragon® X2 Elite | 99.886 | 52 - 52 | NPU |
| MaskRCNNROIHead | QNN_DLC | float | Snapdragon® X Elite | 235.822 | 52 - 52 | NPU |
| MaskRCNNROIHead | QNN_DLC | float | Snapdragon® 8 Gen 3 Mobile | 180.185 | 49 - 929 | NPU |
| MaskRCNNROIHead | QNN_DLC | float | Qualcomm® QCS8275 (Proxy) | 596.745 | 49 - 849 | NPU |
| MaskRCNNROIHead | QNN_DLC | float | Qualcomm® QCS8550 (Proxy) | 242.165 | 52 - 54 | NPU |
| MaskRCNNROIHead | QNN_DLC | float | Qualcomm® SA8775P | 1129.878 | 40 - 841 | NPU |
| MaskRCNNROIHead | QNN_DLC | float | Qualcomm® QCS8450 (Proxy) | 326.792 | 39 - 940 | NPU |
| MaskRCNNROIHead | QNN_DLC | float | Qualcomm® SA7255P | 596.745 | 49 - 849 | NPU |
| MaskRCNNROIHead | QNN_DLC | float | Qualcomm® SA8295P | 302.095 | 49 - 974 | NPU |
| MaskRCNNROIHead | QNN_DLC | float | Snapdragon® 8 Elite For Galaxy Mobile | 126.714 | 34 - 824 | NPU |
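The two stages run sequentially on device, so a rough lower bound on end-to-end latency is the sum of the per-stage inference times above. A small sketch using a few of the float QNN_DLC numbers from the table (this ignores host-side post-processing between the stages, which the table does not measure):

```python
# Per-chipset inference times (ms) for the two model stages,
# taken from the float QNN_DLC rows of the performance table.
proposal_ms = {
    "Snapdragon 8 Elite Gen 5 Mobile": 56.939,
    "Snapdragon X2 Elite": 58.441,
    "Snapdragon X Elite": 139.008,
    "Snapdragon 8 Gen 3 Mobile": 105.679,
}
roi_head_ms = {
    "Snapdragon 8 Elite Gen 5 Mobile": 93.517,
    "Snapdragon X2 Elite": 99.886,
    "Snapdragon X Elite": 235.822,
    "Snapdragon 8 Gen 3 Mobile": 180.185,
}

def pipeline_latency_ms(chipset: str) -> float:
    """Lower-bound end-to-end latency: proposal generator + ROI head."""
    return proposal_ms[chipset] + roi_head_ms[chipset]

for chip in proposal_ms:
    print(f"{chip}: ~{pipeline_latency_ms(chip):.1f} ms")
```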

License

  • The license for the original implementation of MaskRCNN can be found here.
