Initial commit

.gitignore (vendored, new file, 24 lines)
@@ -0,0 +1,24 @@
# Config files with credentials
config.ini
*.ini
!config.example.ini

# Report output
reports/
*.html
*.xlsx

# Python
__pycache__/
*.py[cod]
*$py.class
.Python
venv/
env/
.venv/

# IDE
.idea/
.vscode/
*.swp
*.swo
DATTO_Backup_Performance_Analysis_Report.md (new file, 317 lines)
@@ -0,0 +1,317 @@
# DATTO Backup Performance Analysis Report

**Prepared for:** Management Review
**Date:** December 17, 2025
**Prepared by:** IT Infrastructure Team
**Subject:** Investigation of Slow VMware VM Backup Performance to DATTO Appliance

---

## Executive Summary

An investigation was conducted to determine the root cause of extremely slow backup speeds (2-5 MB/s) when backing up VMware virtual machines to the DATTO backup appliance. Despite 10 Gbps network infrastructure capable of 1,000+ MB/s of throughput, backups are completing at less than 1% of network capacity.

**Key Finding:** The network infrastructure (HP switch, cabling, VLANs) has been ruled out as the cause. The bottleneck has been identified as the DATTO backup agent software running inside the Windows virtual machines, specifically the MercuryFTP protocol used for data transfer.

**Recommendation:** Engage DATTO support with the evidence documented in this report to resolve the software-level performance issue.

---

## Problem Statement

| Metric | Expected | Actual | Gap |
|--------|----------|--------|-----|
| Network Capacity | 10 Gbps (1,250 MB/s) | - | - |
| Practical Throughput | 100-500 MB/s | 2-5 MB/s | **~99% below expected** |
| 8 TB File Server Backup | 4-8 hours | 24-48+ hours | 6-12x longer |

The slow backup speeds are causing:

- Extended backup windows overlapping with business hours
- Incomplete backup jobs
- Increased risk of data loss due to stale recovery points

---

## Infrastructure Overview

### Network Topology

```
                 HP 5406R zl2 Switch
               (10 Gbps Infrastructure)

   ┌──────────────┐              ┌──────────────┐
   │   CMIFS02    │              │  DATTOBU02   │
   │ File Server  │              │   Backup     │
   │   8.7 TB     │              │  Appliance   │
   └──────┬───────┘              └──────┬───────┘
          │                             │
      VLAN 212                      VLAN 250
    (FileServer)                 (IT-Management)
          │                             │
   ┌──────┴───────┐              ┌──────┴───────┐
   │   Port F2    │              │   Port E5    │
   │   10 Gbps    │              │   10 Gbps    │
   │  Status: UP  │              │  Status: UP  │
   └──────┬───────┘              └──────┬───────┘
          │                             │
          │      ┌──────────────┐       │
          └─────►│   Port A20   │◄──────┘
                 │    1 Gbps    │
                 │    Router    │
                 │ (Inter-VLAN) │
                 └──────────────┘
```

### Device Identification

| Device | MAC Address | Switch Port | Speed | VLAN | Status |
|--------|-------------|-------------|-------|------|--------|
| CMIFS02 (File Server) | 00:50:56:8F:35:77 | F2 | 10 Gbps | 212 | Up |
| DATTOBU02 (Backup) | 6C:92:CF:17:BD:20 | E5 | 10 Gbps | 250 | Up |
| Router/Firewall | Multiple | A20 | 1 Gbps | Multi | Up |

---
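Because CMIFS02 (VLAN 212) and DATTOBU02 (VLAN 250) sit on different VLANs, backup traffic must traverse the 1 Gbps router on port A20, so the slowest link in the path sets the practical ceiling. A minimal sketch of that calculation (function name illustrative, protocol overhead ignored):

```python
def path_capacity_mbps(link_speeds_gbps):
    """Throughput ceiling (MB/s) of a multi-hop path: the slowest
    link wins. 1 Gbps = 1000 Mbps = 125 MB/s before overhead."""
    return min(link_speeds_gbps) * 1000 / 8

# CMIFS02 -> switch (10G), inter-VLAN router (1G), switch -> DATTO (10G)
backup_path = [10, 1, 10]
print(path_capacity_mbps(backup_path))  # 125.0
```

Even with that 1 Gbps hop, the path supports roughly 125 MB/s — still about 25-60x the 2-5 MB/s actually observed.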

## Investigation Results

### 1. HP Switch Analysis - PASSED

The HP 5406R zl2 switch was thoroughly analyzed and **cleared of any issues**.

#### System Health

| Metric | Value | Assessment |
|--------|-------|------------|
| Uptime | 242 days | Stable |
| CPU Utilization | 0% | Excellent |
| Memory Free | 72% | Excellent |
| Packet Buffers Missed | **0** | No packet drops |

#### Port Status

| Port | Device | Link Speed | Errors | Drops |
|------|--------|------------|--------|-------|
| E5 | DATTO Appliance | 10 Gbps | None | None |
| F2 | VMware ESXi Host | 10 Gbps | None | None |
| A20 | Router | 1 Gbps | None | None |

#### Configuration Review

| Setting | Configuration | Impact on Backups |
|---------|--------------|-------------------|
| QoS / Rate Limiting | None configured | No throttling |
| Port Security | No restrictions | No blocking |
| Spanning Tree | Disabled | No blocked ports |
| Broadcast Limits | None (0) | No limits |
| Flow Control | Off (normal) | No impact |

**Conclusion:** The switch is operating normally, with zero packet loss and no throttling mechanisms.

---

### 2. VMware Performance Analysis - PASSED

Real-time performance monitoring was conducted during an active backup using the vCenter Performance API.

#### During Active Backup (CMIFS01)

| Metric | Value | Assessment |
|--------|-------|------------|
| Disk Read Speed | 53-76 MB/s | Good - VM reading data quickly |
| Disk Latency | 2 ms | Excellent - no storage bottleneck |
| CPU Usage | <10% | Good - not CPU bound |
| **Network TX** | **0.4-0.5 MB/s** | **BOTTLENECK IDENTIFIED** |

#### Historical Analysis (30 Days - CMIFS02)

| Metric | Average | Maximum | Assessment |
|--------|---------|---------|------------|
| CPU Usage | 5.7% | 10.4% | No issues |
| Disk Latency | 1.5 ms | 15 ms | Excellent |
| Memory Usage | Normal | Normal | No issues |

**Critical Finding:** The VM reads from disk at **76 MB/s** but transmits only **0.5 MB/s** to the network - a **~150:1 ratio**, indicating the bottleneck is inside the VM, not the network.

---
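The disk-read versus network-transmit comparison above reduces to a simple check; a sketch using the numbers from the tables (helper name illustrative):

```python
def transfer_ratio(disk_read_mbs, net_tx_mbs):
    """Ratio of data read from disk to data leaving the NIC.
    Near 1 means the pipeline keeps up; a large ratio means
    data is stalling inside the guest between read and send."""
    return disk_read_mbs / net_tx_mbs

ratio = transfer_ratio(76, 0.5)
print(round(ratio))  # 152 -> reported above as ~150:1
```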

### 3. DATTO Appliance Analysis - ISSUES FOUND

Review of the DATTO appliance logs revealed multiple problems:

| Issue | Description | Severity |
|-------|-------------|----------|
| Zpool Capacity Exceeded | Storage pool at or near capacity | High |
| High CPU Load | "Load average exceeds 2x number of cores" | High |
| HIR Failures | "Failed to copy bootmgfw.efi" on Windows Server 2025 | Medium |
| Backups Paused | Some agents showing "paused indefinitely" | High |

#### DATTO Backup Method

The DATTO appliance is using **in-guest Windows agent backup** with the **MercuryFTP protocol** (TLS-encrypted proprietary transfer). It is NOT using VMware-native backup APIs (VADP).

Example from the DATTO agent log:

```
Transport: MercuryFTP (TLS)
Backup Speed: 0.57 MB/s
```

---
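For attaching evidence to the support ticket, the two fields in the agent log excerpt above can be pulled out programmatically; a minimal sketch (only the two lines shown are assumed — the rest of the log format is unknown):

```python
import re

# Excerpt exactly as it appears in the DATTO agent log above.
log = """Transport: MercuryFTP (TLS)
Backup Speed: 0.57 MB/s"""

# Extract the transport name and the reported speed in MB/s.
transport = re.search(r"Transport:\s*(\S+)", log).group(1)
speed = float(re.search(r"Backup Speed:\s*([\d.]+)\s*MB/s", log).group(1))
print(transport, speed)  # MercuryFTP 0.57
```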

## Root Cause Analysis

### Eliminated Causes

| Potential Cause | Evidence | Status |
|-----------------|----------|--------|
| HP Switch | 0% CPU, 0 dropped packets, 10 Gbps links up | **Eliminated** |
| Network Cabling | All ports showing 10GigFD negotiation | **Eliminated** |
| VLAN Configuration | Correct tagging, routing functional | **Eliminated** |
| VMware Storage | 2 ms latency, 76 MB/s read speed | **Eliminated** |
| VMware CPU | <10% utilization during backup | **Eliminated** |
| ESXi Host | 10 Gbps uplinks, no errors | **Eliminated** |

### Confirmed Root Cause

**DATTO Windows Agent / MercuryFTP Protocol Performance**

Evidence:

1. The VM reads from disk at 76 MB/s but transmits at 0.5 MB/s (~150:1 ratio)
2. The bottleneck occurs between disk read and network transmission, inside the VM
3. The DATTO appliance is showing resource constraints (storage full, high CPU)
4. Windows Server 2025 compatibility issues with the DATTO HIR process

---

## Bandwidth Utilization Analysis

```
Available Bandwidth vs. Actual Usage

 10 Gbps ─┬─────────────────────────────────────────────── 1,250 MB/s
          │
          │
  1 Gbps ─┼─────────────────────────────────────────────── 125 MB/s
          │  (Router inter-VLAN link - theoretical max for this path)
          │
100 MB/s ─┼───────────────────────────────────────────────
          │
 10 MB/s ─┼───────────────────────────────────────────────
          │
  5 MB/s ─┼─ ▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓▓  Peak observed
          │
  2 MB/s ─┼─ ████████████████████████████████  Average observed
          │
  0 MB/s ─┴───────────────────────────────────────────────

Actual backup speed: 2-5 MB/s (0.2-0.4% of available capacity)
```

---
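The utilization figure at the bottom of the chart follows directly from the observed speeds and the 1,250 MB/s line rate; a sketch (function name illustrative):

```python
def utilization_pct(actual_mbs, capacity_mbs=1250):
    """Share of link capacity actually used, as a percentage,
    rounded to one decimal place."""
    return round(actual_mbs / capacity_mbs * 100, 1)

print(utilization_pct(2), utilization_pct(5))  # 0.2 0.4
```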

## Business Impact

### Current State

| Metric | Value |
|--------|-------|
| CMIFS02 Data Volume | ~8.7 TB |
| Current Backup Speed | 2-5 MB/s |
| Full Backup Duration | 20-50 days (theoretical) |
| Incremental Backup Duration | Variable, often exceeds the backup window |

### Risk Assessment

| Risk | Likelihood | Impact | Mitigation |
|------|------------|--------|------------|
| Incomplete backups | High | High | Resolve DATTO performance |
| Data loss in disaster | Medium | Critical | Resolve DATTO performance |
| Backup window overlap with production | High | Medium | Resolve DATTO performance |

---
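The theoretical 20-50 day full-backup figure follows from the data volume and the observed speed range; a sketch of the arithmetic (decimal units assumed, 1 TB = 1,000,000 MB, sustained transfer assumed):

```python
def full_backup_days(volume_tb, speed_mbs):
    """Days needed to move volume_tb terabytes at speed_mbs MB/s
    in one sustained transfer (1 TB = 1,000,000 MB)."""
    seconds = volume_tb * 1_000_000 / speed_mbs
    return seconds / 86_400  # seconds per day

print(round(full_backup_days(8.7, 5)))  # ~20 days at peak speed
print(round(full_backup_days(8.7, 2)))  # ~50 days at average speed
```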

## Recommendations

### Immediate Actions

1. **Open a DATTO Support Ticket**
   - Provide this report as evidence
   - Request investigation of MercuryFTP protocol performance
   - Request review of appliance capacity (zpool full)
   - Inquire about Windows Server 2025 compatibility

2. **DATTO Appliance Maintenance**
   - Address the "Zpool capacity exceeded" warning
   - Review and clear old recovery points if possible
   - Investigate the "backups paused indefinitely" status

### Questions for DATTO Support

1. Why is MercuryFTP achieving only 0.5 MB/s when the network supports 1,000+ MB/s?
2. Can the backup method be changed to VMware VADP (agentless) instead of the in-guest agent?
3. Is Windows Server 2025 fully supported? (HIR failures observed)
4. What is the recommended resolution for "Zpool capacity exceeded"?
5. Are there tuning parameters for MercuryFTP transfer speeds?

### Alternative Solutions (If DATTO Cannot Resolve)

| Solution | Pros | Cons |
|----------|------|------|
| Veeam Backup & Replication | Native VMware VADP support, proven fast | Licensing cost, migration effort |
| Nakivo Backup | VMware-native, competitive pricing | Migration effort |
| VMware-level DATTO backup | Uses VADP instead of in-guest agent | May require DATTO configuration change |

---

## Appendix A: Switch Configuration Summary

```
Switch Model: HP 5406R zl2 (J9850A)
Firmware: KB.16.11.0020 (July 2024)
Management Modules: Dual (Active/Standby)

Key Ports:
- E5 (DATTOBU02): 10GbE-T, VLAN 250 untagged
- F2 (ESXi Host): 10GbE-T, VLAN 212 tagged
- A20 (Router): 1GbE, Multi-VLAN tagged

No QoS, rate limiting, or traffic shaping configured.
```

---

## Appendix B: Evidence Summary

| Evidence Type | Source | Finding |
|---------------|--------|---------|
| Switch CPU/Memory | `show system` | 0% CPU, 72% memory free |
| Packet Drops | `show system` | 0 buffers missed |
| Port Status | `show interfaces brief` | All 10 Gbps links up |
| VM Disk Performance | vCenter API | 76 MB/s read, 2 ms latency |
| VM Network Performance | vCenter API | 0.5 MB/s TX during backup |
| DATTO Logs | Appliance UI | Zpool full, high CPU, HIR failures |
| Backup Speed | DATTO Agent | 2-5 MB/s via MercuryFTP |

---

## Appendix C: Additional Switch Findings (Unrelated to Backup)

During the investigation, the following items were noted for separate remediation:

1. **Brute Force Login Attempts (12/14/2025)**
   - Source: 10.254.50.24
   - Usernames: "admin", "Cisco"
   - Recommendation: Identify and investigate this device

2. **Port A24 Link Flapping (12/16/2025)**
   - Third-party SFP+ DAC cable showing intermittent connectivity
   - Recommendation: Replace the cable

3. **DATTOBU01 (Port F5) Offline**
   - Second DATTO appliance not connected
   - Recommendation: Verify whether this is intentional or the appliance needs reconnection

---

**Report End**

*This report was prepared using data collected from the HP switch CLI, the VMware vCenter Performance API, and DATTO appliance logs. All network infrastructure components have been verified as functioning correctly. The performance issue has been isolated to the DATTO backup software layer.*

cmifs02_perf.csv (new file, 360 lines)
@@ -0,0 +1,360 @@
vm_name,cpu.ready.summation,cpu.usage.average,disk.maxTotalLatency.latest,interval,mem.usage.average,timestamp
CMIFS02,71317,532,1,7200,727,2025-11-18 00:00:00+00:00
CMIFS02,73938,577,1,7200,801,2025-11-18 02:00:00+00:00
CMIFS02,70721,531,1,7200,770,2025-11-18 04:00:00+00:00
CMIFS02,74421,527,0,7200,750,2025-11-18 06:00:00+00:00
CMIFS02,80510,531,2,7200,778,2025-11-18 08:00:00+00:00
CMIFS02,68777,598,1,7200,1092,2025-11-18 10:00:00+00:00
CMIFS02,67597,598,0,7200,952,2025-11-18 12:00:00+00:00
CMIFS02,77013,714,1,7200,1092,2025-11-18 14:00:00+00:00
CMIFS02,79136,754,2,7200,1123,2025-11-18 16:00:00+00:00
CMIFS02,68848,651,1,7200,1198,2025-11-18 18:00:00+00:00
CMIFS02,84191,860,1,7200,1236,2025-11-18 20:00:00+00:00
CMIFS02,66807,598,1,7200,1012,2025-11-18 22:00:00+00:00
CMIFS02,66064,565,2,7200,856,2025-11-19 00:00:00+00:00
CMIFS02,67046,614,1,7200,753,2025-11-19 02:00:00+00:00
CMIFS02,62261,534,2,7200,805,2025-11-19 04:00:00+00:00
CMIFS02,63696,595,2,7200,883,2025-11-19 06:00:00+00:00
CMIFS02,64269,528,2,7200,805,2025-11-19 08:00:00+00:00
CMIFS02,63520,544,2,7200,873,2025-11-19 10:00:00+00:00
CMIFS02,66545,635,1,7200,1075,2025-11-19 12:00:00+00:00
CMIFS02,67676,607,1,7200,1013,2025-11-19 14:00:00+00:00
CMIFS02,66530,593,1,7200,936,2025-11-19 16:00:00+00:00
CMIFS02,66863,612,0,7200,1027,2025-11-19 18:00:00+00:00
CMIFS02,71039,648,0,7200,1107,2025-11-19 20:00:00+00:00
CMIFS02,65683,558,15,7200,852,2025-11-19 22:00:00+00:00
CMIFS02,64452,546,1,7200,825,2025-11-20 00:00:00+00:00
CMIFS02,65095,583,2,7200,809,2025-11-20 02:00:00+00:00
CMIFS02,63123,532,1,7200,777,2025-11-20 04:00:00+00:00
CMIFS02,64206,533,2,7200,825,2025-11-20 06:00:00+00:00
CMIFS02,63590,528,1,7200,857,2025-11-20 08:00:00+00:00
CMIFS02,62745,545,1,7200,944,2025-11-20 10:00:00+00:00
CMIFS02,70354,641,7,7200,1024,2025-11-20 12:00:00+00:00
CMIFS02,69372,668,1,7200,1149,2025-11-20 14:00:00+00:00
CMIFS02,76789,696,1,7200,1128,2025-11-20 16:00:00+00:00
CMIFS02,70114,598,4,7200,1093,2025-11-20 18:00:00+00:00
CMIFS02,70496,598,1,7200,824,2025-11-20 20:00:00+00:00
CMIFS02,68088,577,1,7200,847,2025-11-20 22:00:00+00:00
CMIFS02,64618,535,1,7200,756,2025-11-21 00:00:00+00:00
CMIFS02,65845,583,1,7200,829,2025-11-21 02:00:00+00:00
CMIFS02,62880,529,2,7200,742,2025-11-21 04:00:00+00:00
CMIFS02,63945,532,0,7200,832,2025-11-21 06:00:00+00:00
CMIFS02,66677,526,2,7200,779,2025-11-21 08:00:00+00:00
CMIFS02,62610,546,1,7200,878,2025-11-21 10:00:00+00:00
CMIFS02,61989,563,2,7200,943,2025-11-21 12:00:00+00:00
CMIFS02,71024,672,2,7200,1021,2025-11-21 14:00:00+00:00
CMIFS02,73701,739,1,7200,1213,2025-11-21 16:00:00+00:00
CMIFS02,73107,692,3,7200,1096,2025-11-21 18:00:00+00:00
CMIFS02,70749,673,1,7200,1066,2025-11-21 20:00:00+00:00
CMIFS02,65033,576,1,7200,1041,2025-11-21 22:00:00+00:00
CMIFS02,60512,532,1,7200,1021,2025-11-22 00:00:00+00:00
CMIFS02,64542,583,2,7200,971,2025-11-22 02:00:00+00:00
CMIFS02,61885,532,3,7200,753,2025-11-22 04:00:00+00:00
CMIFS02,61885,533,3,7200,733,2025-11-22 06:00:00+00:00
CMIFS02,61995,544,2,7200,790,2025-11-22 08:00:00+00:00
CMIFS02,63274,548,1,7200,849,2025-11-22 10:00:00+00:00
CMIFS02,64879,560,2,7200,896,2025-11-22 12:00:00+00:00
CMIFS02,65054,535,1,7200,900,2025-11-22 14:00:00+00:00
CMIFS02,67295,537,3,7200,838,2025-11-22 16:00:00+00:00
CMIFS02,75266,611,2,7200,1043,2025-11-22 18:00:00+00:00
CMIFS02,71328,549,0,7200,924,2025-11-22 20:00:00+00:00
CMIFS02,70336,539,3,7200,702,2025-11-22 22:00:00+00:00
CMIFS02,70338,539,1,7200,787,2025-11-23 00:00:00+00:00
CMIFS02,71424,587,3,7200,784,2025-11-23 02:00:00+00:00
CMIFS02,68835,539,2,7200,824,2025-11-23 04:00:00+00:00
CMIFS02,70269,545,1,7200,911,2025-11-23 06:00:00+00:00
CMIFS02,83191,750,1,7200,1402,2025-11-23 08:00:00+00:00
CMIFS02,70388,599,2,7200,1147,2025-11-23 10:00:00+00:00
CMIFS02,70609,540,2,7200,589,2025-11-23 12:00:00+00:00
CMIFS02,70503,532,1,7200,587,2025-11-23 14:00:00+00:00
CMIFS02,69767,531,1,7200,665,2025-11-23 16:00:00+00:00
CMIFS02,70668,576,1,7200,851,2025-11-23 18:00:00+00:00
CMIFS02,69345,539,0,7200,765,2025-11-23 20:00:00+00:00
CMIFS02,69457,530,1,7200,697,2025-11-23 22:00:00+00:00
CMIFS02,69198,528,1,7200,714,2025-11-24 00:00:00+00:00
CMIFS02,74408,619,1,7200,866,2025-11-24 02:00:00+00:00
CMIFS02,69840,522,1,7200,580,2025-11-24 04:00:00+00:00
CMIFS02,69672,531,2,7200,644,2025-11-24 06:00:00+00:00
CMIFS02,69433,528,1,7200,726,2025-11-24 08:00:00+00:00
CMIFS02,68037,548,1,7200,801,2025-11-24 10:00:00+00:00
CMIFS02,63818,555,0,7200,766,2025-11-24 12:00:00+00:00
CMIFS02,66945,604,0,7200,912,2025-11-24 14:00:00+00:00
CMIFS02,66893,580,1,7200,876,2025-11-24 16:00:00+00:00
CMIFS02,69299,637,4,7200,885,2025-11-24 18:00:00+00:00
CMIFS02,66014,617,4,7200,1069,2025-11-24 20:00:00+00:00
CMIFS02,65457,562,2,7200,702,2025-11-24 22:00:00+00:00
CMIFS02,61940,540,2,7200,697,2025-11-25 00:00:00+00:00
CMIFS02,63713,580,3,7200,699,2025-11-25 02:00:00+00:00
CMIFS02,61164,525,1,7200,656,2025-11-25 04:00:00+00:00
CMIFS02,60575,530,1,7200,710,2025-11-25 06:00:00+00:00
CMIFS02,60759,528,1,7200,704,2025-11-25 08:00:00+00:00
CMIFS02,61361,553,3,7200,860,2025-11-25 10:00:00+00:00
CMIFS02,67208,583,1,7200,941,2025-11-25 12:00:00+00:00
CMIFS02,68218,597,2,7200,907,2025-11-25 14:00:00+00:00
CMIFS02,71023,623,2,7200,789,2025-11-25 16:00:00+00:00
CMIFS02,70346,618,2,7200,815,2025-11-25 18:00:00+00:00
CMIFS02,68993,582,1,7200,762,2025-11-25 20:00:00+00:00
CMIFS02,65695,591,1,7200,884,2025-11-25 22:00:00+00:00
CMIFS02,66654,545,2,7200,712,2025-11-26 00:00:00+00:00
CMIFS02,68119,583,2,7200,755,2025-11-26 02:00:00+00:00
CMIFS02,65147,533,2,7200,716,2025-11-26 04:00:00+00:00
CMIFS02,65107,590,2,7200,796,2025-11-26 06:00:00+00:00
CMIFS02,64687,528,2,7200,772,2025-11-26 08:00:00+00:00
CMIFS02,65053,552,1,7200,847,2025-11-26 10:00:00+00:00
CMIFS02,68026,578,3,7200,888,2025-11-26 12:00:00+00:00
CMIFS02,68068,586,1,7200,867,2025-11-26 14:00:00+00:00
CMIFS02,78013,724,8,7200,1110,2025-11-26 16:00:00+00:00
CMIFS02,71758,637,7,7200,950,2025-11-26 18:00:00+00:00
CMIFS02,75129,654,1,7200,795,2025-11-26 20:00:00+00:00
CMIFS02,68014,545,1,7200,769,2025-11-26 22:00:00+00:00
CMIFS02,69322,583,1,7200,870,2025-11-27 00:00:00+00:00
CMIFS02,69399,595,2,7200,811,2025-11-27 02:00:00+00:00
CMIFS02,66256,534,2,7200,739,2025-11-27 04:00:00+00:00
CMIFS02,66317,534,1,7200,829,2025-11-27 06:00:00+00:00
CMIFS02,69159,532,1,7200,752,2025-11-27 08:00:00+00:00
CMIFS02,69209,555,2,7200,847,2025-11-27 10:00:00+00:00
CMIFS02,69887,539,2,7200,808,2025-11-27 12:00:00+00:00
CMIFS02,69652,542,0,7200,850,2025-11-27 14:00:00+00:00
CMIFS02,69554,534,0,7200,847,2025-11-27 16:00:00+00:00
CMIFS02,68921,546,1,7200,893,2025-11-27 18:00:00+00:00
CMIFS02,68423,529,1,7200,884,2025-11-27 20:00:00+00:00
CMIFS02,68841,532,1,7200,935,2025-11-27 22:00:00+00:00
CMIFS02,66725,534,1,7200,1027,2025-11-28 00:00:00+00:00
CMIFS02,69400,620,0,7200,1092,2025-11-28 02:00:00+00:00
CMIFS02,69661,538,1,7200,793,2025-11-28 04:00:00+00:00
CMIFS02,68378,531,0,7200,688,2025-11-28 06:00:00+00:00
CMIFS02,68570,533,1,7200,744,2025-11-28 08:00:00+00:00
CMIFS02,68404,549,1,7200,869,2025-11-28 10:00:00+00:00
CMIFS02,69633,533,0,7200,802,2025-11-28 12:00:00+00:00
CMIFS02,69577,540,0,7200,878,2025-11-28 14:00:00+00:00
CMIFS02,72106,564,0,7200,917,2025-11-28 16:00:00+00:00
CMIFS02,73178,543,1,7200,900,2025-11-28 18:00:00+00:00
CMIFS02,73056,526,1,7200,751,2025-11-28 20:00:00+00:00
CMIFS02,71446,531,0,7200,785,2025-11-28 22:00:00+00:00
CMIFS02,70496,527,0,7200,766,2025-11-29 00:00:00+00:00
CMIFS02,72299,578,0,7200,811,2025-11-29 02:00:00+00:00
CMIFS02,71008,575,1,7200,978,2025-11-29 04:00:00+00:00
CMIFS02,69353,538,0,7200,903,2025-11-29 06:00:00+00:00
CMIFS02,70203,535,1,7200,904,2025-11-29 08:00:00+00:00
CMIFS02,69984,554,2,7200,891,2025-11-29 10:00:00+00:00
CMIFS02,70027,534,0,7200,876,2025-11-29 12:00:00+00:00
CMIFS02,70852,538,1,7200,720,2025-11-29 14:00:00+00:00
CMIFS02,70392,535,2,7200,744,2025-11-29 16:00:00+00:00
CMIFS02,76367,564,1,7200,828,2025-11-29 18:00:00+00:00
CMIFS02,69478,536,1,7200,868,2025-11-29 20:00:00+00:00
CMIFS02,70683,539,0,7200,885,2025-11-29 22:00:00+00:00
CMIFS02,70308,534,2,7200,866,2025-11-30 00:00:00+00:00
CMIFS02,71012,583,2,7200,862,2025-11-30 02:00:00+00:00
CMIFS02,67590,537,1,7200,886,2025-11-30 04:00:00+00:00
CMIFS02,68310,574,1,7200,994,2025-11-30 06:00:00+00:00
CMIFS02,75475,615,1,7200,1327,2025-11-30 08:00:00+00:00
CMIFS02,69920,574,1,7200,1115,2025-11-30 10:00:00+00:00
CMIFS02,68356,539,2,7200,590,2025-11-30 12:00:00+00:00
CMIFS02,68615,536,2,7200,669,2025-11-30 14:00:00+00:00
CMIFS02,69098,528,1,7200,694,2025-11-30 16:00:00+00:00
CMIFS02,70308,528,1,7200,663,2025-11-30 18:00:00+00:00
CMIFS02,69852,525,2,7200,630,2025-11-30 20:00:00+00:00
CMIFS02,69472,559,2,7200,747,2025-11-30 22:00:00+00:00
CMIFS02,69084,529,1,7200,741,2025-12-01 00:00:00+00:00
CMIFS02,72273,576,1,7200,779,2025-12-01 02:00:00+00:00
CMIFS02,69856,530,1,7200,562,2025-12-01 04:00:00+00:00
CMIFS02,68763,534,2,7200,581,2025-12-01 06:00:00+00:00
CMIFS02,74829,582,1,7200,824,2025-12-01 08:00:00+00:00
CMIFS02,72087,557,1,7200,781,2025-12-01 10:00:00+00:00
CMIFS02,69108,573,3,7200,824,2025-12-01 12:00:00+00:00
CMIFS02,71194,613,1,7200,893,2025-12-01 14:00:00+00:00
CMIFS02,69642,582,2,7200,927,2025-12-01 16:00:00+00:00
CMIFS02,75078,650,2,7200,976,2025-12-01 18:00:00+00:00
CMIFS02,71542,599,1,7200,998,2025-12-01 20:00:00+00:00
CMIFS02,71932,596,1,7200,723,2025-12-01 22:00:00+00:00
CMIFS02,70029,596,3,7200,856,2025-12-02 00:00:00+00:00
CMIFS02,69630,626,3,7200,835,2025-12-02 02:00:00+00:00
CMIFS02,65077,531,1,7200,714,2025-12-02 04:00:00+00:00
CMIFS02,65439,534,2,7200,749,2025-12-02 06:00:00+00:00
CMIFS02,65561,536,1,7200,800,2025-12-02 08:00:00+00:00
CMIFS02,65305,599,1,7200,1067,2025-12-02 10:00:00+00:00
CMIFS02,68279,590,1,7200,1019,2025-12-02 12:00:00+00:00
CMIFS02,72855,637,2,7200,859,2025-12-02 14:00:00+00:00
CMIFS02,72071,704,1,7200,952,2025-12-02 16:00:00+00:00
CMIFS02,72866,745,1,7200,984,2025-12-02 18:00:00+00:00
CMIFS02,64989,566,1,7200,809,2025-12-02 20:00:00+00:00
CMIFS02,62756,540,1,7200,805,2025-12-02 22:00:00+00:00
CMIFS02,60026,509,2,7200,702,2025-12-03 00:00:00+00:00
CMIFS02,61072,556,1,7200,758,2025-12-03 02:00:00+00:00
CMIFS02,59144,502,1,7200,686,2025-12-03 04:00:00+00:00
CMIFS02,60218,555,2,7200,787,2025-12-03 06:00:00+00:00
CMIFS02,58741,502,1,7200,749,2025-12-03 08:00:00+00:00
CMIFS02,58623,519,1,7200,868,2025-12-03 10:00:00+00:00
CMIFS02,63980,602,2,7200,1049,2025-12-03 12:00:00+00:00
CMIFS02,65384,579,3,7200,964,2025-12-03 14:00:00+00:00
CMIFS02,68753,613,2,7200,992,2025-12-03 16:00:00+00:00
CMIFS02,66459,606,2,7200,813,2025-12-03 18:00:00+00:00
CMIFS02,68521,607,2,7200,880,2025-12-03 20:00:00+00:00
CMIFS02,66519,586,0,7200,772,2025-12-03 22:00:00+00:00
CMIFS02,61803,516,2,7200,669,2025-12-04 00:00:00+00:00
CMIFS02,62312,545,2,7200,686,2025-12-04 02:00:00+00:00
CMIFS02,61021,504,2,7200,695,2025-12-04 04:00:00+00:00
CMIFS02,58961,504,1,7200,764,2025-12-04 06:00:00+00:00
CMIFS02,59000,503,1,7200,749,2025-12-04 08:00:00+00:00
CMIFS02,59599,521,1,7200,921,2025-12-04 10:00:00+00:00
CMIFS02,62492,538,3,7200,812,2025-12-04 12:00:00+00:00
CMIFS02,65056,621,1,7200,1041,2025-12-04 14:00:00+00:00
CMIFS02,68033,599,1,7200,877,2025-12-04 16:00:00+00:00
CMIFS02,92772,1038,2,7200,863,2025-12-04 18:00:00+00:00
CMIFS02,94460,1005,2,7200,830,2025-12-04 20:00:00+00:00
CMIFS02,91460,970,1,7200,792,2025-12-04 22:00:00+00:00
CMIFS02,89485,931,0,7200,646,2025-12-05 00:00:00+00:00
CMIFS02,72316,706,1,7200,640,2025-12-05 02:00:00+00:00
CMIFS02,58741,501,1,7200,722,2025-12-05 04:00:00+00:00
CMIFS02,58835,508,1,7200,639,2025-12-05 06:00:00+00:00
CMIFS02,58269,506,1,7200,672,2025-12-05 08:00:00+00:00
CMIFS02,57528,524,1,7200,804,2025-12-05 10:00:00+00:00
CMIFS02,59588,527,1,7200,792,2025-12-05 12:00:00+00:00
CMIFS02,62920,574,1,7200,896,2025-12-05 14:00:00+00:00
CMIFS02,66681,632,2,7200,1129,2025-12-05 16:00:00+00:00
CMIFS02,64684,582,2,7200,898,2025-12-05 18:00:00+00:00
CMIFS02,74438,660,3,7200,966,2025-12-05 20:00:00+00:00
CMIFS02,62001,519,1,7200,891,2025-12-05 22:00:00+00:00
CMIFS02,61164,509,1,7200,706,2025-12-06 00:00:00+00:00
CMIFS02,62136,556,1,7200,771,2025-12-06 02:00:00+00:00
CMIFS02,61062,512,1,7200,813,2025-12-06 04:00:00+00:00
CMIFS02,60651,501,2,7200,733,2025-12-06 06:00:00+00:00
CMIFS02,60173,508,1,7200,796,2025-12-06 08:00:00+00:00
CMIFS02,61112,522,1,7200,821,2025-12-06 10:00:00+00:00
CMIFS02,59706,508,1,7200,860,2025-12-06 12:00:00+00:00
CMIFS02,60981,509,1,7200,908,2025-12-06 14:00:00+00:00
CMIFS02,63062,512,1,7200,963,2025-12-06 16:00:00+00:00
CMIFS02,63575,555,2,7200,987,2025-12-06 18:00:00+00:00
CMIFS02,69643,547,1,7200,765,2025-12-06 20:00:00+00:00
CMIFS02,61994,511,1,7200,722,2025-12-06 22:00:00+00:00
CMIFS02,61734,508,2,7200,766,2025-12-07 00:00:00+00:00
CMIFS02,63456,554,2,7200,815,2025-12-07 02:00:00+00:00
CMIFS02,61081,506,2,7200,804,2025-12-07 04:00:00+00:00
CMIFS02,60903,506,0,7200,748,2025-12-07 06:00:00+00:00
CMIFS02,71239,582,1,7200,1271,2025-12-07 08:00:00+00:00
CMIFS02,67644,591,0,7200,1095,2025-12-07 10:00:00+00:00
CMIFS02,66402,536,1,7200,615,2025-12-07 12:00:00+00:00
CMIFS02,65794,533,2,7200,618,2025-12-07 14:00:00+00:00
CMIFS02,65496,531,1,7200,589,2025-12-07 16:00:00+00:00
CMIFS02,65444,529,0,7200,664,2025-12-07 18:00:00+00:00
CMIFS02,65163,570,1,7200,855,2025-12-07 20:00:00+00:00
CMIFS02,66004,537,1,7200,735,2025-12-07 22:00:00+00:00
CMIFS02,64894,528,1,7200,767,2025-12-08 00:00:00+00:00
CMIFS02,67209,579,0,7200,868,2025-12-08 02:00:00+00:00
CMIFS02,64414,533,0,7200,816,2025-12-08 04:00:00+00:00
CMIFS02,66894,526,2,7200,695,2025-12-08 06:00:00+00:00
CMIFS02,65643,535,2,7200,598,2025-12-08 08:00:00+00:00
CMIFS02,66901,555,2,7200,742,2025-12-08 10:00:00+00:00
CMIFS02,82411,795,1,7200,953,2025-12-08 12:00:00+00:00
CMIFS02,67071,581,1,7200,884,2025-12-08 14:00:00+00:00
CMIFS02,89618,889,2,7200,1035,2025-12-08 16:00:00+00:00
CMIFS02,70776,620,2,7200,992,2025-12-08 18:00:00+00:00
CMIFS02,70233,614,1,7200,969,2025-12-08 20:00:00+00:00
CMIFS02,67493,604,1,7200,1011,2025-12-08 22:00:00+00:00
CMIFS02,64200,546,2,7200,844,2025-12-09 00:00:00+00:00
CMIFS02,66715,586,2,7200,760,2025-12-09 02:00:00+00:00
CMIFS02,63507,535,1,7200,720,2025-12-09 04:00:00+00:00
CMIFS02,64197,526,1,7200,719,2025-12-09 06:00:00+00:00
CMIFS02,65763,531,1,7200,772,2025-12-09 08:00:00+00:00
CMIFS02,64805,550,0,7200,857,2025-12-09 10:00:00+00:00
CMIFS02,68779,582,0,7200,934,2025-12-09 12:00:00+00:00
CMIFS02,76177,672,2,7200,976,2025-12-09 14:00:00+00:00
CMIFS02,70450,592,1,7200,933,2025-12-09 16:00:00+00:00
CMIFS02,71279,621,1,7200,1011,2025-12-09 18:00:00+00:00
CMIFS02,72499,623,1,7200,854,2025-12-09 20:00:00+00:00
CMIFS02,66986,559,2,7200,739,2025-12-09 22:00:00+00:00
CMIFS02,64787,582,1,7200,838,2025-12-10 00:00:00+00:00
CMIFS02,66608,595,3,7200,808,2025-12-10 02:00:00+00:00
CMIFS02,64122,532,1,7200,795,2025-12-10 04:00:00+00:00
CMIFS02,65118,597,0,7200,852,2025-12-10 06:00:00+00:00
CMIFS02,63510,533,3,7200,771,2025-12-10 08:00:00+00:00
CMIFS02,63870,570,0,7200,880,2025-12-10 10:00:00+00:00
CMIFS02,65366,565,2,7200,921,2025-12-10 12:00:00+00:00
CMIFS02,67720,604,1,7200,890,2025-12-10 14:00:00+00:00
CMIFS02,67608,627,1,7200,1049,2025-12-10 16:00:00+00:00
CMIFS02,66791,619,1,7200,1049,2025-12-10 18:00:00+00:00
CMIFS02,66083,610,1,7200,1153,2025-12-10 20:00:00+00:00
CMIFS02,62579,557,1,7200,1003,2025-12-10 22:00:00+00:00
CMIFS02,60561,541,2,7200,958,2025-12-11 00:00:00+00:00
CMIFS02,63511,635,0,7200,1096,2025-12-11 02:00:00+00:00
CMIFS02,62366,540,1,7200,696,2025-12-11 04:00:00+00:00
CMIFS02,60520,535,1,7200,745,2025-12-11 06:00:00+00:00
CMIFS02,62315,537,1,7200,830,2025-12-11 08:00:00+00:00
CMIFS02,62240,558,1,7200,946,2025-12-11 10:00:00+00:00
CMIFS02,67009,583,2,7200,863,2025-12-11 12:00:00+00:00
CMIFS02,67787,601,1,7200,918,2025-12-11 14:00:00+00:00
CMIFS02,74013,682,1,7200,998,2025-12-11 16:00:00+00:00
CMIFS02,68877,605,1,7200,1025,2025-12-11 18:00:00+00:00
CMIFS02,66430,587,1,7200,887,2025-12-11 20:00:00+00:00
CMIFS02,66497,556,1,7200,743,2025-12-11 22:00:00+00:00
CMIFS02,65309,536,2,7200,767,2025-12-12 00:00:00+00:00
CMIFS02,65988,586,1,7200,859,2025-12-12 02:00:00+00:00
CMIFS02,61662,531,3,7200,718,2025-12-12 04:00:00+00:00
CMIFS02,61591,538,1,7200,811,2025-12-12 06:00:00+00:00
CMIFS02,61572,548,1,7200,857,2025-12-12 08:00:00+00:00
CMIFS02,63026,551,1,7200,980,2025-12-12 10:00:00+00:00
CMIFS02,67757,626,1,7200,1174,2025-12-12 12:00:00+00:00
CMIFS02,79672,742,1,7200,1038,2025-12-12 14:00:00+00:00
CMIFS02,79774,705,2,7200,896,2025-12-12 16:00:00+00:00
CMIFS02,70333,611,2,7200,894,2025-12-12 18:00:00+00:00
CMIFS02,70046,618,3,7200,937,2025-12-12 20:00:00+00:00
CMIFS02,67944,552,2,7200,793,2025-12-12 22:00:00+00:00
CMIFS02,68014,553,1,7200,833,2025-12-13 00:00:00+00:00
CMIFS02,69944,590,3,7200,892,2025-12-13 02:00:00+00:00
CMIFS02,68041,534,1,7200,792,2025-12-13 04:00:00+00:00
CMIFS02,69048,540,1,7200,838,2025-12-13 06:00:00+00:00
CMIFS02,69395,551,1,7200,880,2025-12-13 08:00:00+00:00
CMIFS02,69411,539,1,7200,843,2025-12-13 10:00:00+00:00
CMIFS02,70243,536,5,7200,835,2025-12-13 12:00:00+00:00
CMIFS02,71954,584,1,7200,985,2025-12-13 14:00:00+00:00
CMIFS02,71138,536,0,7200,816,2025-12-13 16:00:00+00:00
CMIFS02,69574,535,0,7200,839,2025-12-13 18:00:00+00:00
CMIFS02,70634,536,1,7200,902,2025-12-13 20:00:00+00:00
CMIFS02,76242,559,2,7200,756,2025-12-13 22:00:00+00:00
CMIFS02,69716,532,2,7200,680,2025-12-14 00:00:00+00:00
CMIFS02,72063,579,2,7200,750,2025-12-14 02:00:00+00:00
CMIFS02,70009,532,2,7200,779,2025-12-14 04:00:00+00:00
CMIFS02,69888,538,1,7200,859,2025-12-14 06:00:00+00:00
CMIFS02,75557,606,1,7200,1312,2025-12-14 08:00:00+00:00
CMIFS02,68125,572,1,7200,1078,2025-12-14 10:00:00+00:00
CMIFS02,66993,524,2,7200,609,2025-12-14 12:00:00+00:00
CMIFS02,67962,523,2,7200,676,2025-12-14 14:00:00+00:00
CMIFS02,69930,571,1,7200,819,2025-12-14 16:00:00+00:00
CMIFS02,69184,528,0,7200,773,2025-12-14 18:00:00+00:00
CMIFS02,69059,532,1,7200,809,2025-12-14 20:00:00+00:00
CMIFS02,70143,769,2,7200,1034,2025-12-14 22:00:00+00:00
CMIFS02,68424,531,2,7200,814,2025-12-15 00:00:00+00:00
CMIFS02,70711,581,1,7200,850,2025-12-15 02:00:00+00:00
|
||||
CMIFS02,68405,537,1,7200,821,2025-12-15 04:00:00+00:00
|
||||
CMIFS02,68021,528,0,7200,692,2025-12-15 06:00:00+00:00
|
||||
CMIFS02,67762,528,1,7200,711,2025-12-15 08:00:00+00:00
|
||||
CMIFS02,68595,607,1,7200,942,2025-12-15 10:00:00+00:00
|
||||
CMIFS02,67331,568,1,7200,855,2025-12-15 12:00:00+00:00
|
||||
CMIFS02,70258,633,5,7200,981,2025-12-15 14:00:00+00:00
|
||||
CMIFS02,70443,636,2,7200,1080,2025-12-15 16:00:00+00:00
|
||||
CMIFS02,77629,781,2,7200,1307,2025-12-15 18:00:00+00:00
|
||||
CMIFS02,95042,1030,2,7200,1217,2025-12-15 20:00:00+00:00
|
||||
CMIFS02,75428,679,1,7200,894,2025-12-15 22:00:00+00:00
|
||||
CMIFS02,64841,535,2,7200,758,2025-12-16 00:00:00+00:00
|
||||
CMIFS02,65166,584,1,7200,780,2025-12-16 02:00:00+00:00
|
||||
CMIFS02,63075,531,0,7200,788,2025-12-16 04:00:00+00:00
|
||||
CMIFS02,63414,531,0,7200,795,2025-12-16 06:00:00+00:00
|
||||
CMIFS02,63155,529,1,7200,792,2025-12-16 08:00:00+00:00
|
||||
CMIFS02,63866,552,0,7200,893,2025-12-16 10:00:00+00:00
|
||||
CMIFS02,67566,581,2,7200,943,2025-12-16 12:00:00+00:00
|
||||
CMIFS02,66668,591,2,7200,936,2025-12-16 14:00:00+00:00
|
||||
CMIFS02,67332,603,1,7200,1046,2025-12-16 16:00:00+00:00
|
||||
CMIFS02,76204,704,1,7200,1221,2025-12-16 18:00:00+00:00
|
||||
CMIFS02,67872,617,1,7200,1079,2025-12-16 20:00:00+00:00
|
||||
CMIFS02,66762,559,1,7200,811,2025-12-16 22:00:00+00:00
|
||||
CMIFS02,63723,547,1,7200,779,2025-12-17 00:00:00+00:00
|
||||
CMIFS02,66248,612,1,7200,907,2025-12-17 02:00:00+00:00
|
||||
CMIFS02,64004,542,1,7200,880,2025-12-17 04:00:00+00:00
|
||||
CMIFS02,65515,593,0,7200,791,2025-12-17 06:00:00+00:00
|
||||
CMIFS02,66597,535,2,7200,872,2025-12-17 08:00:00+00:00
|
||||
CMIFS02,64641,482,0,7200,907,2025-12-17 10:00:00+00:00
|
||||
CMIFS02,72223,588,2,7200,938,2025-12-17 12:00:00+00:00
|
||||
CMIFS02,84836,823,1,7200,1163,2025-12-17 14:00:00+00:00
|
||||
CMIFS02,67603,649,1,7200,995,2025-12-17 16:00:00+00:00
|
||||
CMIFS02,83826,979,3,7200,1213,2025-12-17 18:00:00+00:00
|
||||
CMIFS02,82786,802,4,7200,1165,2025-12-17 20:00:00+00:00
|
||||
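The two-hour samples above are easier to reason about once parsed. A minimal loading sketch follows; note that this excerpt carries no header row, so the column names below (`net_rx_kbps`, `disk_write_kbps`, etc.) are guesses for illustration, not confirmed meanings of the exported metrics:

```python
import csv
import io

# Hypothetical column names for the unlabeled export above -- the actual
# meaning of each numeric column depends on which metrics were exported.
COLUMNS = ["vm", "net_rx_kbps", "disk_write_kbps", "cpu_ready", "interval_s",
           "latency_ms", "timestamp"]

def load_samples(text):
    """Parse exported rows into dicts keyed by the assumed column names."""
    reader = csv.reader(io.StringIO(text))
    return [dict(zip(COLUMNS, row)) for row in reader if row]

sample = ("CMIFS02,70450,592,1,7200,933,2025-12-09 16:00:00+00:00\n"
          "CMIFS02,71279,621,1,7200,1011,2025-12-09 18:00:00+00:00\n")
rows = load_samples(sample)
avg = sum(int(r["net_rx_kbps"]) for r in rows) / len(rows)
```

With the two sample rows shown, `avg` comes out to 70864.5, which is the kind of per-window average the analysis scripts below compute.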
12  config.example.ini  Normal file
@@ -0,0 +1,12 @@
[vcenter]
# vCenter Server hostname or IP address
server = vcenter.example.com

# vCenter username (typically administrator@vsphere.local or domain user)
username = administrator@vsphere.local

# vCenter password
password = your_password_here

# Port (optional, defaults to 443)
# port = 443
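Both scripts in this commit read this file with the standard-library `configparser`, falling back to a default for the optional `port` key. Reading the example values in isolation looks like this:

```python
import configparser

# An ini fragment shaped like config.example.ini above.
text = """
[vcenter]
server = vcenter.example.com
username = administrator@vsphere.local
password = your_password_here
"""

config = configparser.ConfigParser()
config.read_string(text)

server = config.get('vcenter', 'server', fallback=None)
# 'port' is commented out in the example, so the fallback applies.
port = config.getint('vcenter', 'port', fallback=443)
```

Keeping real credentials only in `config.ini` (ignored by the repo's `.gitignore`) and committing just this example file is the pattern the commit follows.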
346  perf_history.py  Normal file
@@ -0,0 +1,346 @@
#!/usr/bin/env python3
"""
Historical VM Performance Report
Pull performance stats from vCenter for the past month to identify patterns.
"""

import argparse
import configparser
import csv
import ssl
import sys
from datetime import datetime, timedelta

try:
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim
except ImportError:
    print("Error: pyvmomi is required. Install with: pip install pyvmomi")
    sys.exit(1)


def connect_vcenter(server, username, password, port=443):
    """Connect to vCenter."""
    context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    context.check_hostname = False
    context.verify_mode = ssl.CERT_NONE

    try:
        si = SmartConnect(host=server, user=username, pwd=password, port=port, sslContext=context)
        return si
    except Exception as e:
        print(f"Error connecting: {e}")
        sys.exit(1)


def get_historical_intervals(perf_manager):
    """Get available historical intervals."""
    intervals = {}
    for interval in perf_manager.historicalInterval:
        intervals[interval.samplingPeriod] = {
            'name': interval.name,
            'length': interval.length,
            'level': interval.level,
        }
    return intervals


def get_counter_ids(perf_manager, metrics_needed):
    """Get performance counter IDs for specified metrics."""
    metric_ids = {m: None for m in metrics_needed}

    for counter in perf_manager.perfCounter:
        full_name = f"{counter.groupInfo.key}.{counter.nameInfo.key}.{counter.rollupType}"
        if full_name in metric_ids:
            metric_ids[full_name] = counter.key

    return metric_ids


def get_vm_by_name(content, vm_name):
    """Find VM by name."""
    container = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True
    )

    target_vm = None
    for vm in container.view:
        if vm.name.lower() == vm_name.lower():
            target_vm = vm
            break

    container.Destroy()
    return target_vm


def get_historical_perf(si, entity, metric_ids, days=30):
    """Get historical performance data."""
    content = si.RetrieveContent()
    perf_manager = content.perfManager

    # Pick a sampling interval appropriate to the window.
    # Available intervals: 300 (5 min), 1800 (30 min), 7200 (2 hr), 86400 (daily)
    if days <= 1:
        interval_id = 300  # 5-minute samples for the last day
    elif days <= 7:
        interval_id = 1800  # 30-minute samples for the last week
    else:
        interval_id = 7200  # 2-hour samples for longer periods

    end_time = datetime.now()
    start_time = end_time - timedelta(days=days)

    # Build metric ID objects
    metric_id_objs = []
    for name, counter_id in metric_ids.items():
        if counter_id:
            metric_id_objs.append(vim.PerformanceManager.MetricId(
                counterId=counter_id,
                instance=""
            ))

    if not metric_id_objs:
        print("No valid metrics found")
        return []

    query_spec = vim.PerformanceManager.QuerySpec(
        entity=entity,
        metricId=metric_id_objs,
        intervalId=interval_id,
        startTime=start_time,
        endTime=end_time,
    )

    try:
        results = perf_manager.QueryPerf(querySpec=[query_spec])
    except Exception as e:
        print(f"Error querying performance: {e}")
        return []

    # Parse results into a time series
    data = []
    if results:
        for result in results:
            timestamps = result.sampleInfo

            for i, sample_info in enumerate(timestamps):
                sample = {
                    'timestamp': sample_info.timestamp,
                    'interval': sample_info.interval,
                }

                for val in result.value:
                    counter_id = val.id.counterId
                    if i < len(val.value):
                        value = val.value[i]

                        for name, cid in metric_ids.items():
                            if cid == counter_id:
                                sample[name] = value
                                break

                data.append(sample)

    return data


def analyze_vm_history(si, vm_name, days=30):
    """Analyze historical performance for a VM."""
    content = si.RetrieveContent()
    perf_manager = content.perfManager

    vm = get_vm_by_name(content, vm_name)
    if not vm:
        print(f"VM '{vm_name}' not found")
        return [], {}

    print(f"\nAnalyzing historical performance for: {vm_name}")
    print(f"Period: Last {days} days")
    print("-" * 60)

    metrics = [
        'cpu.usage.average',
        'cpu.ready.summation',
        'mem.usage.average',
        'disk.read.average',
        'disk.write.average',
        'disk.totalReadLatency.average',
        'disk.totalWriteLatency.average',
        'disk.maxTotalLatency.latest',
        'net.received.average',
        'net.transmitted.average',
    ]

    metric_ids = get_counter_ids(perf_manager, metrics)
    data = get_historical_perf(si, vm, metric_ids, days)

    if not data:
        print("No historical data available")
        return [], {}

    print(f"Retrieved {len(data)} samples")

    # Calculate statistics
    stats = {}
    for metric in metrics:
        values = [d.get(metric, 0) for d in data if metric in d]
        if values:
            stats[metric] = {
                'min': min(values),
                'max': max(values),
                'avg': sum(values) / len(values),
                'samples': len(values),
            }

    # Display results
    print("\n" + "=" * 60)
    print("PERFORMANCE STATISTICS")
    print("=" * 60)

    if 'cpu.usage.average' in stats:
        s = stats['cpu.usage.average']
        print("\nCPU Usage:")
        print(f"  Average: {s['avg']/100:.1f}%")
        print(f"  Maximum: {s['max']/100:.1f}%")
        if s['max']/100 > 80:
            print(f"  ⚠️ CPU reached {s['max']/100:.1f}% - potential bottleneck")

    if 'mem.usage.average' in stats:
        s = stats['mem.usage.average']
        print("\nMemory Usage:")
        print(f"  Average: {s['avg']/100:.1f}%")
        print(f"  Maximum: {s['max']/100:.1f}%")

    if 'disk.read.average' in stats and 'disk.write.average' in stats:
        r = stats['disk.read.average']
        w = stats['disk.write.average']
        print("\nDisk I/O (KB/s):")
        print(f"  Read  - Avg: {r['avg']:.0f}, Max: {r['max']:.0f} ({r['max']/1024:.1f} MB/s)")
        print(f"  Write - Avg: {w['avg']:.0f}, Max: {w['max']:.0f} ({w['max']/1024:.1f} MB/s)")

    if 'disk.totalReadLatency.average' in stats and 'disk.totalWriteLatency.average' in stats:
        rl = stats['disk.totalReadLatency.average']
        wl = stats['disk.totalWriteLatency.average']
        print("\nDisk Latency (ms):")
        print(f"  Read  - Avg: {rl['avg']:.1f}, Max: {rl['max']:.0f}")
        print(f"  Write - Avg: {wl['avg']:.1f}, Max: {wl['max']:.0f}")
        if rl['max'] > 20 or wl['max'] > 20:
            print("  ⚠️ High disk latency detected - storage may be bottleneck")

    if 'disk.maxTotalLatency.latest' in stats:
        s = stats['disk.maxTotalLatency.latest']
        print("\nPeak Disk Latency:")
        print(f"  Average Peak: {s['avg']:.1f} ms")
        print(f"  Maximum Peak: {s['max']:.0f} ms")
        if s['max'] > 50:
            print(f"  ⚠️ SEVERE: Peak latency reached {s['max']} ms!")

    if 'net.received.average' in stats and 'net.transmitted.average' in stats:
        rx = stats['net.received.average']
        tx = stats['net.transmitted.average']
        print("\nNetwork I/O (KB/s):")
        print(f"  RX - Avg: {rx['avg']:.0f}, Max: {rx['max']:.0f} ({rx['max']/1024:.1f} MB/s)")
        print(f"  TX - Avg: {tx['avg']:.0f}, Max: {tx['max']:.0f} ({tx['max']/1024:.1f} MB/s)")

    # Summary
    print("\n" + "=" * 60)
    print("BOTTLENECK ANALYSIS")
    print("=" * 60)

    issues = []

    if 'cpu.usage.average' in stats and stats['cpu.usage.average']['max']/100 > 80:
        issues.append(f"CPU spiked to {stats['cpu.usage.average']['max']/100:.0f}%")

    if 'disk.maxTotalLatency.latest' in stats:
        max_lat = stats['disk.maxTotalLatency.latest']['max']
        if max_lat > 50:
            issues.append(f"Disk latency peaked at {max_lat:.0f}ms (severe)")
        elif max_lat > 20:
            issues.append(f"Disk latency peaked at {max_lat:.0f}ms (moderate)")

    if issues:
        print("\nPotential issues detected:")
        for issue in issues:
            print(f"  ⚠️ {issue}")
    else:
        print("\n✓ No major VMware-side bottlenecks detected in historical data")
        print("  If backups are still slow, the issue is likely:")
        print("  - DATTO agent/MercuryFTP performance")
        print("  - DATTO appliance storage/CPU")
        print("  - Network between guest and DATTO (not VMware layer)")

    return data, stats


def export_to_csv(data, filename, vm_name):
    """Export historical data to CSV."""
    if not data:
        return

    with open(filename, 'w', newline='') as f:
        writer = csv.writer(f)

        # Collect every key that appears in any sample
        keys = set()
        for d in data:
            keys.update(d.keys())
        keys = sorted(keys)

        writer.writerow(['vm_name'] + keys)

        for d in data:
            row = [vm_name] + [d.get(k, '') for k in keys]
            writer.writerow(row)

    print(f"\nData exported to: {filename}")


def main():
    parser = argparse.ArgumentParser(description='Historical VM performance analysis')
    parser.add_argument('--config', '-c', help='Config file path')
    parser.add_argument('--server', '-s', help='vCenter server')
    parser.add_argument('--username', '-u', help='Username')
    parser.add_argument('--password', '-p', help='Password')
    parser.add_argument('--vm', '-v', required=True, help='VM name to analyze')
    parser.add_argument('--days', '-d', type=int, default=30, help='Number of days to analyze (default: 30)')
    parser.add_argument('--export', '-e', help='Export data to CSV file')

    args = parser.parse_args()

    server = args.server
    username = args.username
    password = args.password

    if args.config:
        config = configparser.ConfigParser()
        config.read(args.config)
        if 'vcenter' in config:
            server = server or config.get('vcenter', 'server', fallback=None)
            username = username or config.get('vcenter', 'username', fallback=None)
            password = password or config.get('vcenter', 'password', fallback=None)

    if not all([server, username, password]):
        print("Error: server, username, and password required")
        sys.exit(1)

    print(f"Connecting to {server}...")
    si = connect_vcenter(server, username, password)

    try:
        data, stats = analyze_vm_history(si, args.vm, args.days)

        if args.export and data:
            export_to_csv(data, args.export, args.vm)

    finally:
        Disconnect(si)


if __name__ == '__main__':
    main()
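The statistics pass in `perf_history.py` reduces each metric's time series to min/max/avg before flagging bottlenecks. Isolated from vCenter, the reduction looks like this (the sample values below are made up for illustration):

```python
def summarize(samples, metric):
    """Reduce one metric's time series to min/max/avg, as the script's stats loop does."""
    values = [s[metric] for s in samples if metric in s]
    if not values:
        return None
    return {
        'min': min(values),
        'max': max(values),
        'avg': sum(values) / len(values),
        'samples': len(values),  # samples missing the metric are skipped
    }

data = [{'cpu.usage.average': 2500}, {'cpu.usage.average': 7500}, {}]
stats = summarize(data, 'cpu.usage.average')
# vSphere reports percentages in hundredths of a percent, so 7500 -> 75.0%
peak_pct = stats['max'] / 100
```

Dividing by 100 before comparing against the 80% threshold is the same unit conversion the report sections apply.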
373  perf_monitor.py  Normal file
@@ -0,0 +1,373 @@
#!/usr/bin/env python3
"""
Real-time VM Performance Monitor
Run this during a backup to identify bottlenecks (CPU, disk, network).
"""

import argparse
import configparser
import ssl
import sys
import time
from datetime import datetime

try:
    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim
except ImportError:
    print("Error: pyvmomi is required. Install with: pip install pyvmomi")
    sys.exit(1)


def connect_vcenter(server, username, password, port=443):
    """Connect to vCenter."""
    context = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
    context.check_hostname = False
    context.verify_mode = ssl.CERT_NONE

    try:
        si = SmartConnect(host=server, user=username, pwd=password, port=port, sslContext=context)
        return si
    except Exception as e:
        print(f"Error connecting: {e}")
        sys.exit(1)


def get_counter_ids(perf_manager):
    """Get performance counter IDs."""
    metric_ids = {
        'cpu.usage.average': None,
        'cpu.ready.summation': None,
        'disk.read.average': None,
        'disk.write.average': None,
        'disk.totalReadLatency.average': None,
        'disk.totalWriteLatency.average': None,
        'disk.maxTotalLatency.latest': None,
        'net.received.average': None,
        'net.transmitted.average': None,
        'mem.usage.average': None,
    }

    for counter in perf_manager.perfCounter:
        full_name = f"{counter.groupInfo.key}.{counter.nameInfo.key}.{counter.rollupType}"
        if full_name in metric_ids:
            metric_ids[full_name] = counter.key

    return metric_ids


def get_vm_perf(si, vm_name, metric_ids):
    """Get performance stats for a specific VM."""
    content = si.RetrieveContent()
    perf_manager = content.perfManager

    container = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True
    )

    target_vm = None
    for vm in container.view:
        if vm.name.lower() == vm_name.lower():
            target_vm = vm
            break

    container.Destroy()

    if not target_vm:
        print(f"VM '{vm_name}' not found")
        return None

    if target_vm.runtime.powerState != vim.VirtualMachinePowerState.poweredOn:
        print(f"VM '{vm_name}' is not powered on")
        return None

    # Build query
    metric_id_objs = []
    for name, counter_id in metric_ids.items():
        if counter_id:
            metric_id_objs.append(vim.PerformanceManager.MetricId(
                counterId=counter_id,
                instance=""
            ))

    query_spec = vim.PerformanceManager.QuerySpec(
        entity=target_vm,
        metricId=metric_id_objs,
        intervalId=20,
        maxSample=1
    )

    results = perf_manager.QueryPerf(querySpec=[query_spec])

    perf_data = {
        'cpu_pct': 0,
        'cpu_ready_ms': 0,
        'mem_pct': 0,
        'disk_read_kbps': 0,
        'disk_write_kbps': 0,
        'disk_read_lat_ms': 0,
        'disk_write_lat_ms': 0,
        'disk_max_lat_ms': 0,
        'net_rx_kbps': 0,
        'net_tx_kbps': 0,
    }

    if results:
        for result in results:
            for val in result.value:
                counter_id = val.id.counterId
                value = val.value[0] if val.value else 0

                for name, cid in metric_ids.items():
                    if cid == counter_id:
                        if name == 'cpu.usage.average':
                            perf_data['cpu_pct'] = round(value / 100, 1)
                        elif name == 'cpu.ready.summation':
                            perf_data['cpu_ready_ms'] = round(value / 20, 1)  # ms per 20s interval
                        elif name == 'mem.usage.average':
                            perf_data['mem_pct'] = round(value / 100, 1)
                        elif name == 'disk.read.average':
                            perf_data['disk_read_kbps'] = value
                        elif name == 'disk.write.average':
                            perf_data['disk_write_kbps'] = value
                        elif name == 'disk.totalReadLatency.average':
                            perf_data['disk_read_lat_ms'] = value
                        elif name == 'disk.totalWriteLatency.average':
                            perf_data['disk_write_lat_ms'] = value
                        elif name == 'disk.maxTotalLatency.latest':
                            perf_data['disk_max_lat_ms'] = value
                        elif name == 'net.received.average':
                            perf_data['net_rx_kbps'] = value
                        elif name == 'net.transmitted.average':
                            perf_data['net_tx_kbps'] = value
                        break

    return perf_data


def get_all_vms_perf(si, metric_ids):
    """Get performance stats for all powered-on VMs."""
    content = si.RetrieveContent()
    perf_manager = content.perfManager

    container = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True
    )

    all_perf = []

    for vm in container.view:
        if vm.runtime.powerState != vim.VirtualMachinePowerState.poweredOn:
            continue

        try:
            metric_id_objs = []
            for name, counter_id in metric_ids.items():
                if counter_id:
                    metric_id_objs.append(vim.PerformanceManager.MetricId(
                        counterId=counter_id,
                        instance=""
                    ))

            query_spec = vim.PerformanceManager.QuerySpec(
                entity=vm,
                metricId=metric_id_objs,
                intervalId=20,
                maxSample=1
            )

            results = perf_manager.QueryPerf(querySpec=[query_spec])

            perf_data = {
                'name': vm.name,
                'cpu_pct': 0,
                'mem_pct': 0,
                'disk_read_mbps': 0,
                'disk_write_mbps': 0,
                'disk_lat_ms': 0,
                'net_mbps': 0,
            }

            if results:
                for result in results:
                    for val in result.value:
                        counter_id = val.id.counterId
                        value = val.value[0] if val.value else 0

                        for name, cid in metric_ids.items():
                            if cid == counter_id:
                                if name == 'cpu.usage.average':
                                    perf_data['cpu_pct'] = round(value / 100, 1)
                                elif name == 'mem.usage.average':
                                    perf_data['mem_pct'] = round(value / 100, 1)
                                elif name == 'disk.read.average':
                                    perf_data['disk_read_mbps'] = round(value / 1024, 1)
                                elif name == 'disk.write.average':
                                    perf_data['disk_write_mbps'] = round(value / 1024, 1)
                                elif name == 'disk.maxTotalLatency.latest':
                                    perf_data['disk_lat_ms'] = value
                                elif name == 'net.received.average':
                                    perf_data['net_mbps'] += round(value / 1024, 1)
                                elif name == 'net.transmitted.average':
                                    perf_data['net_mbps'] += round(value / 1024, 1)
                                break

            all_perf.append(perf_data)

        except Exception:
            # Skip VMs that fail to report stats
            continue

    container.Destroy()
    return sorted(all_perf, key=lambda x: x['disk_write_mbps'], reverse=True)


def format_bar(value, max_val, width=20):
    """Create ASCII progress bar."""
    filled = int((value / max_val) * width) if max_val > 0 else 0
    filled = min(filled, width)
    return '█' * filled + '░' * (width - filled)


def monitor_vm(si, vm_name, interval=5):
    """Monitor a specific VM in real-time."""
    content = si.RetrieveContent()
    metric_ids = get_counter_ids(content.perfManager)

    print(f"\nMonitoring VM: {vm_name}")
    print("Press Ctrl+C to stop\n")
    print("-" * 100)

    try:
        while True:
            perf = get_vm_perf(si, vm_name, metric_ids)
            if not perf:
                break

            timestamp = datetime.now().strftime('%H:%M:%S')

            # Bottleneck indicators
            cpu_warn = "⚠️ " if perf['cpu_pct'] > 80 else ""
            lat_warn = "⚠️ " if perf['disk_max_lat_ms'] > 20 else ""

            print(f"\r{timestamp} | "
                  f"CPU: {cpu_warn}{perf['cpu_pct']:5.1f}% | "
                  f"Mem: {perf['mem_pct']:5.1f}% | "
                  f"Disk R: {perf['disk_read_kbps']:6} KB/s | "
                  f"Disk W: {perf['disk_write_kbps']:6} KB/s | "
                  f"Lat: {lat_warn}{perf['disk_max_lat_ms']:3}ms | "
                  f"Net RX: {perf['net_rx_kbps']:6} KB/s | "
                  f"Net TX: {perf['net_tx_kbps']:6} KB/s",
                  end='', flush=True)

            time.sleep(interval)
            print()  # New line for next update

    except KeyboardInterrupt:
        print("\n\nMonitoring stopped.")


def show_all_vms(si):
    """Show performance summary for all VMs."""
    content = si.RetrieveContent()
    metric_ids = get_counter_ids(content.perfManager)

    print("\nCollecting VM performance data...")
    all_perf = get_all_vms_perf(si, metric_ids)

    print("\n" + "=" * 100)
    print(f"{'VM Name':<35} {'CPU%':>6} {'Mem%':>6} {'DiskR':>8} {'DiskW':>8} {'Lat':>6} {'Net':>8}")
    print(f"{'':<35} {'':>6} {'':>6} {'(MB/s)':>8} {'(MB/s)':>8} {'(ms)':>6} {'(MB/s)':>8}")
    print("=" * 100)

    for vm in all_perf:
        # Highlight high values
        cpu_mark = "*" if vm['cpu_pct'] > 80 else " "
        lat_mark = "*" if vm['disk_lat_ms'] > 20 else " "

        print(f"{vm['name']:<35} {vm['cpu_pct']:>5.1f}{cpu_mark} {vm['mem_pct']:>6.1f} "
              f"{vm['disk_read_mbps']:>8.1f} {vm['disk_write_mbps']:>8.1f} "
              f"{vm['disk_lat_ms']:>5}{lat_mark} {vm['net_mbps']:>8.1f}")

    print("=" * 100)
    print("* = potential bottleneck (CPU > 80% or Latency > 20ms)")


def main():
    parser = argparse.ArgumentParser(description='Real-time VM performance monitor')
    parser.add_argument('--config', '-c', help='Config file path')
    parser.add_argument('--server', '-s', help='vCenter server')
    parser.add_argument('--username', '-u', help='Username')
    parser.add_argument('--password', '-p', help='Password')
    parser.add_argument('--vm', '-v', help='VM name to monitor (omit for all VMs summary)')
    parser.add_argument('--interval', '-i', type=int, default=5, help='Polling interval in seconds (default: 5)')
    parser.add_argument('--watch', '-w', action='store_true', help='Continuous monitoring mode')

    args = parser.parse_args()

    server = args.server
    username = args.username
    password = args.password

    if args.config:
        config = configparser.ConfigParser()
        config.read(args.config)
        if 'vcenter' in config:
            server = server or config.get('vcenter', 'server', fallback=None)
            username = username or config.get('vcenter', 'username', fallback=None)
            password = password or config.get('vcenter', 'password', fallback=None)

    if not all([server, username, password]):
        print("Error: server, username, and password required")
        sys.exit(1)

    print(f"Connecting to {server}...")
    si = connect_vcenter(server, username, password)

    try:
        if args.vm:
            if args.watch:
                monitor_vm(si, args.vm, args.interval)
            else:
                content = si.RetrieveContent()
                metric_ids = get_counter_ids(content.perfManager)
                perf = get_vm_perf(si, args.vm, metric_ids)
                if perf:
                    print(f"\nPerformance for {args.vm}:")
                    print(f"  CPU Usage:      {perf['cpu_pct']}%")
                    print(f"  CPU Ready:      {perf['cpu_ready_ms']} ms")
                    print(f"  Memory Usage:   {perf['mem_pct']}%")
                    print(f"  Disk Read:      {perf['disk_read_kbps']} KB/s ({perf['disk_read_kbps']/1024:.1f} MB/s)")
                    print(f"  Disk Write:     {perf['disk_write_kbps']} KB/s ({perf['disk_write_kbps']/1024:.1f} MB/s)")
                    print(f"  Disk Read Lat:  {perf['disk_read_lat_ms']} ms")
                    print(f"  Disk Write Lat: {perf['disk_write_lat_ms']} ms")
                    print(f"  Disk Max Lat:   {perf['disk_max_lat_ms']} ms")
                    print(f"  Network RX:     {perf['net_rx_kbps']} KB/s ({perf['net_rx_kbps']/1024:.1f} MB/s)")
                    print(f"  Network TX:     {perf['net_tx_kbps']} KB/s ({perf['net_tx_kbps']/1024:.1f} MB/s)")

                    # Analysis
                    print("\n  Analysis:")
                    if perf['cpu_pct'] > 80:
                        print("  ⚠️ HIGH CPU - VM may be CPU bottlenecked")
                    if perf['disk_max_lat_ms'] > 20:
                        print("  ⚠️ HIGH DISK LATENCY - Storage may be bottleneck")
                    if perf['disk_max_lat_ms'] <= 20 and perf['cpu_pct'] <= 80:
                        print("  ✓ No obvious VMware-side bottlenecks detected")
        else:
            if args.watch:
                try:
                    while True:
                        print("\033[2J\033[H")  # Clear screen
                        show_all_vms(si)
                        print(f"\nRefreshing every {args.interval} seconds... (Ctrl+C to stop)")
                        time.sleep(args.interval)
                except KeyboardInterrupt:
                    print("\nStopped.")
            else:
                show_all_vms(si)

    finally:
        Disconnect(si)


if __name__ == '__main__':
    main()
2  requirements.txt  Normal file
@@ -0,0 +1,2 @@
pyvmomi>=8.0.0.1
openpyxl>=3.1.0
1173  vcenter_reports.py  Normal file
File diff suppressed because it is too large