Model Optimization & Inference

What is Model Pruning?

Model Pruning removes unnecessary weights or neurons to reduce model size and computation while preserving performance. Techniques range from simple magnitude-based heuristics to sophisticated structured approaches.
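
The most common starting point is magnitude pruning: zero out the weights with the smallest absolute values. A minimal sketch using PyTorch's built-in torch.nn.utils.prune utilities; the layer sizes and the 30% sparsity target are illustrative assumptions, not recommendations:

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

# Illustrative model; substitute your own.
model = nn.Sequential(
    nn.Linear(784, 256),
    nn.ReLU(),
    nn.Linear(256, 10),
)

# Zero out the 30% of weights with the smallest L1 magnitude in each layer.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.3)

# Make the pruning permanent: drop the mask and bake zeros into the weights.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.remove(module, "weight")
```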


Why It Matters for Business

Model pruning reduces inference costs by 30-70% while maintaining 95-99% of original accuracy, enabling deployment on cheaper hardware and expanding the range of economically viable use cases. Companies that prune models for edge deployment can eliminate cloud API dependency, reducing per-inference costs from cents to fractions of a cent at scale. The technique enables deploying powerful models on $200-500 edge devices rather than requiring $10,000+ GPU servers for local inference workloads.

Key Considerations
  • Removes low-magnitude or redundant parameters.
  • Unstructured pruning requires sparse computation support to realize speedups.
  • Can require retraining (fine-tuning) for quality recovery.
  • Complementary to quantization for compression.
  • Research technique with growing production use.
  • Structured pruning removes entire channels or attention heads, enabling acceleration on standard hardware without requiring specialized sparse matrix computation libraries.
  • Iterative pruning with fine-tuning cycles preserves more accuracy than one-shot removal; budget 3-5 pruning rounds for a production-quality compressed model (see the sketch after this list).
  • Validate pruned model performance on edge-case inputs specifically since accuracy degradation from pruning disproportionately affects underrepresented data categories.
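
A sketch of the iterative prune-and-fine-tune loop mentioned above, using PyTorch's global magnitude pruning. The round count, per-round sparsity, tolerance threshold, and the fine_tune/evaluate helpers are assumed placeholders; substitute your own training loop and metrics. For structured pruning, prune.ln_structured can replace the global unstructured call.

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

def prune_iteratively(model, fine_tune, evaluate, rounds=4, amount_per_round=0.2):
    """Alternate pruning and fine-tuning; stop early if quality collapses."""
    baseline = evaluate(model)
    for r in range(rounds):
        # Prune 20% of the *remaining* weights globally by L1 magnitude.
        parameters = [
            (m, "weight") for m in model.modules() if isinstance(m, nn.Linear)
        ]
        prune.global_unstructured(
            parameters, pruning_method=prune.L1Unstructured, amount=amount_per_round
        )
        fine_tune(model)  # recover accuracy lost to this pruning round
        score = evaluate(model)
        print(f"round {r + 1}: score {score:.4f} (baseline {baseline:.4f})")
        if score < 0.95 * baseline:  # assumed tolerance: keep >= 95% of baseline
            break
    return model
```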

Common Questions

When should we quantize models?

Quantize for deployment when inference cost or latency is a concern and minor quality degradation is acceptable. Test quantized models thoroughly on your own use cases: 8-bit quantization typically has minimal impact, while 4-bit requires more careful evaluation.
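
A minimal sketch of post-training dynamic quantization in PyTorch, converting Linear layers to 8-bit; the model here is an illustrative stand-in, and the output comparison at the end stands in for the thorough evaluation described above:

```python
import torch
import torch.nn as nn

# Illustrative model; substitute your own.
model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 10))
model.eval()

# Convert Linear layers to 8-bit dynamic quantization.
quantized = torch.ao.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

# Always compare outputs on representative inputs before shipping.
x = torch.randn(1, 512)
print("max output deviation:", (model(x) - quantized(x)).abs().max().item())
```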

How do we choose an inference framework?

Consider model format compatibility, hardware support, performance requirements, and operational preferences. vLLM excels at high-throughput serving, TensorRT-LLM at low latency, and Ollama at local deployment simplicity.
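
For the high-throughput case, a hedged sketch of vLLM's offline generation API; the model name and sampling settings are assumptions for illustration, and any Hugging Face-compatible model identifier can be substituted:

```python
from vllm import LLM, SamplingParams

# Model name is an illustrative assumption; use your own checkpoint.
llm = LLM(model="meta-llama/Llama-3.1-8B-Instruct")
params = SamplingParams(temperature=0.7, max_tokens=128)

prompts = ["Summarize model pruning in one sentence."]
for output in llm.generate(prompts, params):
    print(output.outputs[0].text)
```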

Should we optimize batching for throughput or latency?

Batching increases throughput but raises per-request latency. Optimize for throughput in offline batch processing and for latency in interactive applications. Continuous batching balances both for variable workloads.
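
To make the tradeoff concrete, a toy calculation under an assumed cost model (a fixed per-forward-pass overhead plus a per-request cost; both numbers are illustrative, not measurements):

```python
FIXED_OVERHEAD_MS = 40.0  # assumed per-forward-pass cost (kernel launch, weight I/O)
PER_ITEM_MS = 5.0         # assumed incremental cost of each request in the batch

for batch_size in (1, 4, 16, 64):
    batch_latency = FIXED_OVERHEAD_MS + PER_ITEM_MS * batch_size
    throughput = batch_size / (batch_latency / 1000.0)  # requests per second
    print(f"batch {batch_size:>2}: latency {batch_latency:6.1f} ms, "
          f"throughput {throughput:7.1f} req/s")
```

Under this model, latency grows with batch size while throughput keeps rising, which is exactly why the right setting depends on whether the workload is interactive or offline.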

Need help implementing Model Pruning?

Pertama Partners helps businesses across Southeast Asia adopt AI strategically. Let's discuss how model pruning fits into your AI roadmap.