
Incorrect VM disk information #141

Closed
1 task done
to4ko opened this issue Oct 30, 2023 · 5 comments
Assignees: dougiteixeira
Labels: bug (Something isn't working), duplicate (This issue or pull request already exists)

Comments


to4ko commented Oct 30, 2023

Checklist

  • I have updated to the latest available Proxmox VE Custom Integration version.

Describe the issue you are experiencing

Wrong data received from Proxmox VE 8: the actual VM disk size is 16 GB and the disk is over 75% used.

[image]

In which version of Home Assistant Core do you have the problem?

core-2023.10.5

What version of Proxmox VE Custom Integration has the issue?

2.0.4

In what version of Proxmox VE do you have the problem?

8.0.4

Additional information

No response

to4ko added the bug (Something isn't working) label Oct 30, 2023

to4ko (Author) commented Oct 30, 2023

I just increased the VM disk size, and it now correctly shows 100 GB, but the usage is still incorrect.

[image]

dougiteixeira (Owner) commented

There is a similar issue with some Proxmox VMs, where the problem is caused by Proxmox rather than by the integration (see #72 (comment)). But yours is different...

Can you provide the debug logs? Include this in your config file:

# configuration.yaml
logger:
  default: warning
  logs:
    custom_components.proxmoxve: debug
    proxmoxer: debug

After including this in your configuration file, restart Home Assistant, save the log file and send it here, please.
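
Optionally, before attaching the log you can filter it down to the relevant lines. A minimal sketch in Python, assuming the default Home Assistant log location (adjust the path for your install):

# Hypothetical helper: keep only the proxmoxve/proxmoxer lines before attaching the log.
from pathlib import Path

log_path = Path("/config/home-assistant.log")   # default on HA OS/container installs; adjust as needed
keywords = ("custom_components.proxmoxve", "proxmoxer")

matching = [line for line in log_path.read_text(errors="ignore").splitlines()
            if any(key in line for key in keywords)]
Path("proxmoxve.log").write_text("\n".join(matching))
print(f"Wrote {len(matching)} matching lines to proxmoxve.log")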

dougiteixeira self-assigned this Nov 5, 2023

jgpazos70 commented

Hi everyone,

I have the same problem but only with VMs, not with my LXC.

Data is OK for my Proxmox VE host (pve01) and for the only LXC (cloudflared) I have deployed, but it is wrong for both of the VMs (debdockervm and homeassistant) I have.

I have made the logger changes and, after restarting, attached my proxmoxve debug logs following your instructions. I hope it helps.

Thank you in advance.
proxmoxve.log

dougiteixeira (Owner) commented

@jgpazos70, the problem with a VM's disk information not appearing is not due to the integration, but rather to the Proxmox API, see #72.

In your case, note that the disk value returned by the API for VM 100 is zero; this is the cause of the problem.

There is nothing to do on the integration side.

2023-11-08 12:57:24.829 DEBUG (SyncWorker_5) [custom_components.proxmoxve] API Response - QEMU:
{
   "ha":{
      "managed":0
   },
   "running-machine":"pc-i440fx-8.0+pve0",
   "vmid":100,
   "freemem":702676992,
   "proxmox-support":{
      "pbs-library-version":"1.4.0 (UNKNOWN)",
      "pbs-dirty-bitmap":true,
      "backup-max-workers":true,
      "query-bitmap-info":true,
      "pbs-dirty-bitmap-migration":true,
      "pbs-dirty-bitmap-savevm":true,
      "pbs-masterkey":true
   },
   "status":"running",
   "name":"homeassistant",
   "ballooninfo":{
      "last_update":1699444644,
      "actual":4294967296,
      "mem_swapped_out":126173184,
      "mem_swapped_in":82198528,
      "minor_page_faults":2601609330,
      "max_mem":4294967296,
      "free_mem":702676992,
      "total_mem":4109819904,
      "major_page_faults":1080302
   },
   "agent":1,
   "diskwrite":113491921920,
   "netin":232469998852,
   "disk":0,
   "blockstat":{
      "efidisk0":{
         "failed_unmap_operations":0,
         "rd_merged":0,
         "flush_total_time_ns":0,
         "invalid_wr_operations":0,
         "failed_rd_operations":0,
         "flush_operations":0,
         "invalid_rd_operations":0,
         "rd_operations":0,
         "wr_merged":0,
         "account_failed":true,
         "failed_wr_operations":0,
         "wr_operations":0,
         "account_invalid":true,
         "wr_highest_offset":0,
         "timed_stats":[
            
         ],
         "invalid_unmap_operations":0,
         "invalid_flush_operations":0,
         "rd_bytes":0,
         "wr_total_time_ns":0,
         "unmap_merged":0,
         "unmap_total_time_ns":0,
         "unmap_bytes":0,
         "failed_flush_operations":0,
         "wr_bytes":0,
         "unmap_operations":0,
         "rd_total_time_ns":0
      },
      "scsi0":{
         "invalid_wr_operations":0,
         "failed_unmap_operations":0,
         "rd_merged":0,
         "flush_total_time_ns":924746639470,
         "failed_rd_operations":0,
         "invalid_rd_operations":0,
         "flush_operations":696421,
         "rd_operations":2667905,
         "wr_merged":0,
         "idle_time_ns":97464830,
         "wr_operations":2940736,
         "account_invalid":true,
         "failed_wr_operations":0,
         "account_failed":true,
         "timed_stats":[
            
         ],
         "invalid_unmap_operations":0,
         "wr_highest_offset":137438916608,
         "rd_bytes":156259162624,
         "invalid_flush_operations":0,
         "wr_total_time_ns":17825650178186,
         "unmap_total_time_ns":34820924918,
         "unmap_merged":0,
         "wr_bytes":113491921920,
         "unmap_bytes":60017652736,
         "failed_flush_operations":0,
         "unmap_operations":13283,
         "rd_total_time_ns":2551627247405
      },
      "pflash0":{
         "wr_merged":0,
         "account_failed":true,
         "wr_operations":0,
         "account_invalid":true,
         "failed_wr_operations":0,
         "failed_rd_operations":0,
         "failed_unmap_operations":0,
         "rd_merged":0,
         "flush_total_time_ns":0,
         "invalid_wr_operations":0,
         "rd_operations":0,
         "flush_operations":0,
         "invalid_rd_operations":0,
         "unmap_bytes":0,
         "failed_flush_operations":0,
         "wr_bytes":0,
         "unmap_merged":0,
         "unmap_total_time_ns":0,
         "unmap_operations":0,
         "rd_total_time_ns":0,
         "invalid_flush_operations":0,
         "rd_bytes":0,
         "wr_highest_offset":0,
         "invalid_unmap_operations":0,
         "timed_stats":[
            
         ],
         "wr_total_time_ns":0
      }
   },
   "qmpstatus":"running",
   "cpu":0.713674951201602,
   "maxmem":4294967296,
   "running-qemu":"8.0.2",
   "balloon":4294967296,
   "uptime":415542,
   "mem":3407142912,
   "nics":{
      "tap100i0":{
         "netin":232469998852,
         "netout":35410732964
      }
   },
   "cpus":2,
   "netout":35410732964,
   "maxdisk":137438953472,
   "diskread":156259162624,
   "pid":329860
}
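
For anyone who wants to verify what Proxmox itself returns, independent of Home Assistant, the same status call can be made directly. A minimal sketch using the proxmoxer library; the host, node name, and token values are placeholders, not taken from this issue:

# Hypothetical reproduction sketch (requires the proxmoxer package; credentials below are placeholders).
from proxmoxer import ProxmoxAPI

proxmox = ProxmoxAPI(
    "pve01.example.lan",          # Proxmox VE host (placeholder)
    user="root@pam",
    token_name="homeassistant",   # API token name (placeholder)
    token_value="xxxxxxxx",       # API token secret (placeholder)
    verify_ssl=False,
)

# Status endpoint that produced the response above:
# GET /nodes/{node}/qemu/{vmid}/status/current
status = proxmox.nodes("pve01").qemu(100).status.current.get()

# On affected VMs Proxmox reports 0 used disk bytes even though maxdisk is correct,
# so the integration has nothing meaningful to show for disk usage.
print("disk   :", status.get("disk"))       # 0 in the log above
print("maxdisk:", status.get("maxdisk"))    # 137438953472 (128 GiB) in the log above

If disk comes back non-zero here while the integration still shows nothing, that would be worth a new issue.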

dougiteixeira (Owner) commented

@to4ko I believe your problem is the same... I will close this issue. Please check your debug logs, and if the disk information returned has a value but the integration is still not showing the data, open a new issue.

dougiteixeira added the duplicate (This issue or pull request already exists) label Nov 9, 2023
dougiteixeira changed the title from "Wrong data from Proxmox VE 8" to "Incorrect VM disk information" Nov 9, 2023
Repository owner locked and limited conversation to collaborators Nov 9, 2023
dougiteixeira pinned this issue Nov 9, 2023