
What are FLOPs in machine learning? It's a question every ML practitioner should ask when evaluating model efficiency. FLOPs, or Floating Point Operations, measure the computational work a model performs, a dimension of cost that accuracy and speed metrics alone don't capture.

What are FLOPs in Machine Learning?

In machine learning and deep learning, evaluating model performance isn’t limited to accuracy or loss metrics. When it comes to deploying models in production or on edge devices, computational cost matters. That’s where FLOPs come in. FLOPs, or Floating Point Operations, provide a way to measure the computational complexity of a machine learning model.

What are FLOPs?

FLOPs stands for Floating Point Operations. In machine learning, the term refers to the number of mathematical operations (addition, multiplication, and so on) performed on floating-point numbers as a model processes data and makes predictions. A few points to keep straight, with a short worked example after the list:

  1. Floating-point operations are operations using numbers with decimals.
  2. FLOPs quantify how “heavy” a model is computationally.
  3. This is not to be confused with FLOPS (floating point operations per second), which is a measure of performance/speed; FLOPs are just the count of operations.
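As a quick worked example, a fully connected layer computing y = Wx + b performs one multiply and one add per weight, plus one add per output for the bias, roughly 2 x inputs x outputs FLOPs. A plain-Python sketch (the 784 -> 128 sizes are arbitrary, chosen only for illustration):

```python
def dense_layer_flops(in_features: int, out_features: int) -> int:
    """FLOPs for y = Wx + b: one multiply and one add per weight,
    plus one add per output element for the bias."""
    return 2 * in_features * out_features + out_features

print(dense_layer_flops(784, 128))  # 200,832 FLOPs for one forward pass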

Why are FLOPs Important in Machine Learning?

  1. Model Optimization: Helps reduce the size and complexity of models.
  2. Deployment: Useful for edge devices with limited computational resources.
  3. Benchmarking: Lets you compare model complexity across architectures.

For example, a MobileNet model may have significantly fewer FLOPs than a ResNet, making it more suitable for mobile devices.

How to Calculate FLOPs?

Calculating FLOPs manually involves breaking down every operation inside a neural network — convolutions, matrix multiplications, activations — and summing them. This gets tedious, especially with deep neural networks.
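To see why, here is the standard counting rule for a single 2D convolution layer (a sketch that ignores bias and assumes the output size already reflects stride and padding):

```python
def conv2d_flops(c_in: int, c_out: int, k: int, h_out: int, w_out: int) -> int:
    """Each output element is a dot product over a k*k*c_in window,
    costing one multiply and one add per weight."""
    return 2 * c_in * k * k * c_out * h_out * w_out

# First convolution of ResNet-18: 3 -> 64 channels, 7x7 kernel, 112x112 output
print(f"{conv2d_flops(3, 64, 7, 112, 112):,}")  # ~236M FLOPs for one layer
```

Repeating that for every layer, plus activations, pooling, and normalization, quickly becomes impractical by hand.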

Instead, you can use tools or libraries depending on your framework.

Calculating FLOPs in PyTorch

Let’s use the popular ptflops library to calculate FLOPs for a simple model in PyTorch.

Installation

```bash
pip install ptflops
```

Code Example

```python
import torchvision.models as models
from ptflops import get_model_complexity_info

model = models.resnet18()  # ResNet-18 architecture (pass weights=... for pretrained)

# ptflops builds a dummy input of the given (channels, height, width) shape
# and counts multiply-accumulate operations (MACs) while tracing the model.
macs, params = get_model_complexity_info(
    model, (3, 224, 224),
    as_strings=True,
    print_per_layer_stat=True,  # print a per-layer breakdown
    verbose=True,
)

print(f"MACs: {macs}")
print(f"Parameters: {params}")
```

Output (Example)

```text
MACs: 1.82 GMac
Parameters: 11.69 M
```

Note: 1 GMac = 10⁹ multiply-accumulate (MAC) operations. Each MAC is one multiply plus one add, so 1 GMac ≈ 2 GFLOPs.
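If you'd rather get a raw operation count than formatted strings, the fvcore library is a common alternative (an assumption here: it is installed via pip install fvcore). Note that it also counts one multiply-accumulate as a single "flop". A minimal sketch:

```python
import torch
import torchvision.models as models
from fvcore.nn import FlopCountAnalysis

model = models.resnet18()
dummy_input = torch.randn(1, 3, 224, 224)  # batch of one 224x224 RGB image

# FlopCountAnalysis traces the model and sums per-operator counts
flops = FlopCountAnalysis(model, dummy_input)
print(f"Total: {flops.total():,}")  # ~1.8e9 (MACs counted as one op each)
```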

Calculating FLOPs in TensorFlow

TensorFlow doesn't ship a one-line FLOP counter, but you can estimate FLOPs with tf.profiler or by converting a model with the TensorFlow Lite Converter and inspecting the result.

Example (Using TFLite Converter)

```python
import tensorflow as tf

model = tf.keras.applications.MobileNetV2()

# Convert the Keras model to TensorFlow Lite
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

# Write the converted model to disk
with open('mobilenet.tflite', 'wb') as f:
    f.write(tflite_model)

# Open mobilenet.tflite in Netron (netron.app) to inspect per-layer FLOPs
```

For accurate FLOP estimation in TensorFlow, many developers use third-party tools like Netron, TensorBoard, or TensorFlow Profiler.
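For the tf.profiler route mentioned above, a commonly used pattern freezes the model to a graph and profiles it with the legacy v1 profiler. A sketch, assuming TensorFlow 2.x (this API is dated and its behavior can vary across versions):

```python
import tensorflow as tf
from tensorflow.python.framework.convert_to_constants import (
    convert_variables_to_constants_v2,
)

model = tf.keras.applications.MobileNetV2()

# Trace the model into a concrete function, then freeze variables to constants
concrete = tf.function(model).get_concrete_function(
    tf.TensorSpec([1, 224, 224, 3], tf.float32))
frozen = convert_variables_to_constants_v2(concrete)

# Profile the frozen graph for total floating-point operations
with tf.Graph().as_default() as graph:
    tf.compat.v1.import_graph_def(frozen.graph.as_graph_def(), name='')
    opts = tf.compat.v1.profiler.ProfileOptionBuilder.float_operation()
    info = tf.compat.v1.profiler.profile(graph, options=opts)

print(f"Total FLOPs: {info.total_float_ops:,}")
```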

FLOPs vs Parameters

  • FLOPs measure computation (how much work the model does).
  • Parameters measure memory usage (how much the model stores).

A model may have few parameters yet high FLOPs if it reuses the same weights many times (convolutions, which slide their kernels across the input, are the classic case), and vice versa. The sketch below makes this concrete.
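A back-of-the-envelope calculation for a single convolution layer, with sizes chosen arbitrarily for illustration:

```python
# 3x3 convolution, 64 -> 64 channels, on a 56x56 feature map (bias ignored)
c_in, c_out, k, h, w = 64, 64, 3, 56, 56

params = c_in * c_out * k * k   # stored weights: ~36.9K
flops = 2 * params * h * w      # the same weights fire at every output position

print(f"params: {params:,}  FLOPs: {flops:,}")  # 36,864 vs ~231M
```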

FLOPs in Popular Models

| Model | Parameters | FLOPs (Approx.) |
| --- | --- | --- |
| ResNet-18 | 11.7M | ~1.8 GFLOPs |
| MobileNetV2 | 3.4M | ~0.3 GFLOPs |
| BERT Base | 110M | ~22 GFLOPs per sequence |
| GPT-2 (117M) | 117M | ~38 GFLOPs per token |

Limitations of FLOPs

  • FLOP counts are not always proportional to actual runtime, which depends heavily on the hardware.
  • They do not account for memory bandwidth, parallelism, or batch size.

Even so, FLOPs remain a good first approximation of model cost when comparing architectures.
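Still, it's worth pairing a FLOP count with a direct latency measurement on your target hardware before committing to a deployment. A minimal CPU timing sketch in PyTorch (the warm-up and iteration counts are arbitrary choices):

```python
import time
import torch
import torchvision.models as models

model = models.resnet18().eval()
x = torch.randn(1, 3, 224, 224)

with torch.no_grad():
    for _ in range(5):      # warm-up so one-time costs don't skew the timing
        model(x)
    start = time.perf_counter()
    for _ in range(20):
        model(x)
    latency = (time.perf_counter() - start) / 20

print(f"mean latency: {latency * 1000:.1f} ms")
```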

Conclusion

FLOPs in machine learning provide a vital metric for understanding how computationally intensive a model is. Whether you’re optimizing for mobile apps, edge devices, or cloud deployments, knowing the FLOPs helps you choose or build efficient architectures.

While not a replacement for benchmarking or profiling, FLOPs offer quick insight into how expensive your model is to run. Combine this knowledge with accuracy and latency metrics to make well-informed deployment decisions.
