Monitoring storage statistics
The Statistics tab lets you monitor datastore consumption and detect abnormal behavior.
Storage usage
Reading the graph
The main graph displays two series over the selected period (e.g. 24 h):
| Series | Description |
|---|---|
| Used (red) | Space actually occupied on the datastore |
| Total (green) | Total capacity of your plan |
The values table below shows for each series:
| Column | Meaning |
|---|---|
| Last | Value at the last point in the period |
| Min | Minimum value over the period |
| Max | Maximum value over the period |
Interpreting the values
The Used curve reflects the physical space occupied on the datastore after deduplication. It is therefore typically much lower than the raw volume of data backed up.
Example: backing up 500 GB of VMs at a 6× deduplication ratio, the Used curve will sit around 83 GB.
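The arithmetic behind this example can be sketched as follows (the function name and figures are illustrative, not part of the product):

```python
def physical_footprint(logical_gb: float, dedup_ratio: float) -> float:
    """Estimate on-disk (Used) space from raw backup size and dedup ratio."""
    return logical_gb / dedup_ratio

# 500 GB of VM backups at a 6x deduplication ratio:
print(round(physical_footprint(500, 6), 1))  # → 83.3
```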
The value shown in the Dashboard tab (e.g. "121.7 GB of 245.0 GB") corresponds to the Last value on the graph.
Consumption trend
Observe the slope of the Used curve over 7 days:
- Flat slope → prune policy keeping pace with new data, consistent volumes
- Steep upward slope → growing data, check your retention policy
- Decreasing slope → recent GC run or aggressive prune policy
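The classification above can be sketched as a small helper that works on daily readings of the Used curve. The function name and the growth threshold are hypothetical; tune the threshold to your own daily change volume:

```python
def classify_trend(daily_used_gb, growth_threshold_gb=1.0):
    """Classify the Used-curve slope from daily samples (in GB).

    growth_threshold_gb is a hypothetical per-day cutoff separating
    normal fluctuation from sustained growth.
    """
    slope = (daily_used_gb[-1] - daily_used_gb[0]) / (len(daily_used_gb) - 1)
    if slope > growth_threshold_gb:
        return "growing"      # check your retention policy
    if slope < 0:
        return "shrinking"    # recent GC run or aggressive prune
    return "flat"             # prune keeping pace with new data

# A week of samples with ~5 GB/day of net growth:
print(classify_trend([100, 103, 107, 112, 118, 125, 133]))  # → growing
```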
Disk throughput
The Disk Throughput graph shows read/write speed in MB/s on the datastore.
When to analyze this graph?
- To identify backup windows (write spikes)
- To detect abnormal activity (constant writes with no scheduled job)
- To estimate the impact of jobs on overall performance
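A write spike's duration relates a job's data volume to the sustained throughput shown on this graph. A minimal sketch, assuming a constant write rate (the function name and figures are illustrative):

```python
def backup_window_hours(data_gb: float, write_mb_s: float) -> float:
    """Rough duration of a write burst: data volume over average throughput."""
    seconds = data_gb * 1024 / write_mb_s  # GB → MB, then divide by MB/s
    return seconds / 3600

# A 500 GB job at a sustained 120 MB/s write rate:
print(round(backup_window_hours(500, 120), 2))  # → 1.19 (hours)
```

In practice deduplication means far less than the full volume is actually written, so this is an upper bound on the window length.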
Disk IOPS
The IOPS (I/O Operations Per Second) graph measures I/O intensity.
| Value | Interpretation |
|---|---|
| Moderate spikes during jobs | Normal |
| Continuously high IOPS | May indicate a stuck GC or verification task |
| Zero IOPS when a job should be running | Check network connectivity and job status |
Recommended monitoring
Weekly checks
- Storage trend: is consumption consistent with expectations?
- Deduplication ratio (dashboard): is it stable or declining?
- Estimated saturation: how long before capacity needs to be upgraded?
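The saturation check above amounts to a simple linear projection. A sketch, reusing the Dashboard figures quoted earlier and a hypothetical daily growth rate:

```python
def days_until_full(used_gb: float, total_gb: float, growth_gb_per_day: float) -> float:
    """Linear estimate of days until the datastore reaches capacity."""
    if growth_gb_per_day <= 0:
        return float("inf")  # flat or shrinking usage never saturates
    return (total_gb - used_gb) / growth_gb_per_day

# Dashboard figures from above (121.7 of 245.0 GB) with a hypothetical
# net growth of 2 GB/day:
print(days_until_full(121.7, 245.0, 2.0))  # roughly two months of headroom
```

If the result drops below 30 days, that matches the "Saturation < 30 days" signal in the table below.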
Signals to watch
| Signal | Recommended action |
|---|---|
| Saturation < 30 days | Plan a capacity upgrade |
| Deduplication ratio close to 1× | Review prune policy and GC schedule |
| IOPS spikes outside job window | Check for running verification or GC tasks |
Estimating your real footprint
PBS deduplication operates at two levels depending on data type:
- Fixed-size chunks (QEMU VM disk images): less sensitive to partial changes
- Variable-size chunks (LXC archives via .pxar): better deduplication on file trees
In practice, expect 2× to 10× on production infrastructures. Environments with many similar VMs (templates, clones) achieve the best ratios.
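Conversely, you can derive your actual ratio from the graphs: divide the total volume backed up (sum of your backup sources) by the Used value. A minimal sketch with illustrative figures:

```python
def dedup_ratio(logical_gb: float, physical_gb: float) -> float:
    """Deduplication factor: total backed-up data over on-disk (Used) space."""
    return logical_gb / physical_gb

# 500 GB backed up occupying 83.3 GB on the datastore:
print(round(dedup_ratio(500, 83.3), 1))  # → 6.0
```

A result close to 1× means deduplication is barely helping, which is the signal flagged in the table above.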
If the estimated saturation in the dashboard drops below 30 days, consult Upgrading capacity.