Monitoring storage statistics

The Statistics tab lets you monitor datastore consumption and spot abnormal behavior.

Storage usage

Reading the graph

The main graph displays two series over the selected period (e.g. 24h):

  • Used (red): space actually occupied on the datastore
  • Total (green): total capacity of your plan

The values table below shows for each series:

  • Last: value at the last point in the period
  • Min: minimum value over the period
  • Max: maximum value over the period

Interpreting the values

The Used curve reflects the physical space stored on the datastore after deduplication. It is therefore much lower than the gross volume of data backed up.

Example: if you back up 500 GB of VMs at a 6× deduplication ratio, the Used curve will sit around 83 GB (500 / 6).
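As a rough sanity check, the expected on-disk footprint can be derived from the gross backup volume and the deduplication ratio shown on the dashboard. A minimal sketch (the function name is illustrative, not a PBS API):

```python
def expected_used_gb(gross_gb: float, dedup_ratio: float) -> float:
    """Estimate physical datastore usage after deduplication."""
    if dedup_ratio <= 0:
        raise ValueError("dedup_ratio must be positive")
    return gross_gb / dedup_ratio

# 500 GB of VM backups at a 6x dedup ratio -> ~83.3 GB on disk
print(round(expected_used_gb(500, 6), 1))
```

Compare the result against the Last value of the Used series to see whether your actual ratio matches expectations.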

Relation to the dashboard

The value shown in the Dashboard tab (e.g. "121.7 GB of 245.0 GB") corresponds to the Last value on the graph.

Consumption trend

Observe the slope of the Used curve over 7 days:

  • Flat slope → effective prune policy, consistent volumes
  • Fast upward slope → growing data, check your retention policy
  • Decreasing slope → recent GC run or aggressive prune policy
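The trend classification above can be sketched as a simple slope check over daily Used samples. The 0.5 GB/day threshold below is an illustrative assumption, not a PBS value; tune it to your own volumes:

```python
def classify_trend(used_gb_daily: list[float], flat_gb_per_day: float = 0.5) -> str:
    """Classify the Used curve over a multi-day window from daily samples (GB)."""
    if len(used_gb_daily) < 2:
        raise ValueError("need at least two samples")
    days = len(used_gb_daily) - 1
    slope = (used_gb_daily[-1] - used_gb_daily[0]) / days  # average GB/day
    if slope > flat_gb_per_day:
        return "growing: check your retention policy"
    if slope < -flat_gb_per_day:
        return "decreasing: recent GC run or aggressive prune"
    return "flat: consistent volumes"

# Seven daily samples climbing from 120 to 131 GB -> growing
print(classify_trend([120, 121, 121, 122, 124, 127, 131]))
```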

Disk throughput

The Disk Throughput graph shows read/write speed in MB/s on the datastore.

When to analyze this graph?

  • To identify backup windows (write spikes)
  • To detect abnormal activity (constant writes with no scheduled job)
  • To estimate the impact of jobs on overall performance

Disk IOPS

The IOPS (I/O Operations Per Second) graph measures I/O intensity.

  • Moderate spikes during jobs: normal
  • Continuously high IOPS: may indicate a stuck GC or verification task
  • Zero IOPS when a job should be running: check network connectivity and job status

Weekly checks

  1. Storage trend: is consumption consistent with expectations?
  2. Deduplication ratio (dashboard): is it stable or declining?
  3. Estimated saturation: how long before capacity needs to be upgraded?

Signals to watch

  • Saturation < 30 days: plan a capacity upgrade
  • Deduplication ratio close to 1×: review prune policy and GC schedule
  • IOPS spikes outside job windows: check for running verification or GC tasks

Estimating your real footprint

PBS deduplication operates at two levels depending on data type:

  • Fixed-size chunks (QEMU VM disk images): less sensitive to partial changes
  • Variable-size chunks (LXC archives via .pxar): better deduplication on file trees

In practice, expect 2× to 10× on production infrastructures. Environments with many similar VMs (templates, clones) achieve the best ratios.
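Given the 2× to 10× range above, you can bracket the expected footprint for a planned gross volume before sizing a plan (an illustrative helper, not a PBS tool):

```python
def footprint_range_gb(gross_gb: float, low_ratio: float = 2.0,
                       high_ratio: float = 10.0) -> tuple[float, float]:
    """Best-case and worst-case on-disk usage (GB) for a gross backup volume."""
    return (gross_gb / high_ratio, gross_gb / low_ratio)

best, worst = footprint_range_gb(1000)
print(f"{best:.0f}-{worst:.0f} GB")  # 1 TB gross -> 100-500 GB on disk
```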

If the estimated saturation in the dashboard drops below 30 days, consult Upgrading capacity.