
Proxmox Cheatsheet




Common Setup
Find fastest repository
apt install netselect-apt
netselect-apt sid -nc ID -o /etc/apt/sources.list

Downgrade Debian to stable
apt show base-files
Update /etc/apt/sources.list, replacing every occurrence of testing with stable.

# cat /etc/apt/sources.list
deb http://deb.debian.org/debian stable main contrib non-free
deb-src http://deb.debian.org/debian stable main contrib non-free

deb http://deb.debian.org/debian-security/ stable/updates main contrib non-free
deb-src http://deb.debian.org/debian-security/ stable/updates main contrib non-free

deb http://deb.debian.org/debian stable-updates main contrib non-free
deb-src http://deb.debian.org/debian stable-updates main contrib non-free

Running apt update && apt dist-upgrade now wouldn't change any packages, because packages from testing have newer version numbers than the ones from stable.

To circumvent this, create /etc/apt/preferences.d/downgrade with the following content. This file can be deleted after the update has completed.

# $ cat /etc/apt/preferences.d/downgrade
Package: *
Pin: release a=stable
Pin-Priority: 1001
Now, Debian will install packages from stable.

WARNING: BACKUP YOUR SYSTEM BEFORE THE NEXT STEP.

sudo apt update
sudo apt dist-upgrade

Cleanup
rm /etc/apt/preferences.d/downgrade
Clean Proxmox kernel
sudo apt-get install cron curl git
curl -o pvekclean.sh https://raw.githubusercontent.com/jordanhillis/pvekclean/master/pvekclean.sh
chmod +x pvekclean.sh
./pvekclean.sh
Remove kernel from boot
proxmox-boot-tool kernel list
proxmox-boot-tool kernel remove [Kernel Version]
proxmox-boot-tool refresh
uname -a
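For example, assuming an old kernel 5.13.19-6-pve is still registered (the version is a placeholder for whatever the list command shows), the removal looks like:
proxmox-boot-tool kernel remove 5.13.19-6-pve
proxmox-boot-tool refresh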
Enable QEMU guest agent
apt update && apt -y install qemu-guest-agent
systemctl enable qemu-guest-agent
systemctl start qemu-guest-agent
systemctl status qemu-guest-agent
How to replace the Proxmox IP
Change the IP in these files:
-----
nano /etc/hosts
nano /etc/hostname
nano /etc/network/interfaces
----
reboot
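A minimal sketch of the stanza in /etc/network/interfaces that usually carries the node IP (interface names and addresses are placeholders for your setup):
auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10/24
    gateway 192.168.1.1
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0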
Setup Proxmox with Open vSwitch & LACP
Install openvswitch
-----------
apt update
apt install openvswitch-switch

Create Config [/etc/network/interfaces]
-------------------------
auto lo
iface lo inet loopback

iface eno1 inet manual
iface eno2 inet manual

allow-vmbr1 bond0
iface bond0 inet manual
    ovs_bonds eno1 eno2
    ovs_type OVSBond
    ovs_bridge vmbr1
    ovs_options other_config:lacp-time=fast bond_mode=balance-tcp lacp=active
    pre-up ip link set eno1 mtu 9000
    pre-up ip link set eno2 mtu 9000
    mtu 9000

auto vmbr1
iface vmbr1 inet manual
    ovs_type OVSBridge
    ovs_ports bond0 mgmt1
    ovs_extra set int vmbr1 mtu_request=9000
    mtu 9000

allow-vmbr1 mgmt1
iface mgmt1 inet static
    address [your-ip-address]
    netmask [your-netmask]
    gateway [your-gateway]
    ovs_type OVSIntPort
    ovs_bridge vmbr1
    #ovs_options tag=102
    ovs_extra set int mgmt1 mtu_request=8000
    mtu 8000


Create ZFS RAID0
----------
zpool create <pool> <path1> <path2>
# check device paths with lsblk
zfs list
zfs create <pool>/<folder>
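A concrete sketch, assuming two spare disks /dev/sdb and /dev/sdc and a pool named tank (all names are examples):
zpool create tank /dev/sdb /dev/sdc   # striped (RAID0-style) pool across both disks
zfs create tank/vmdata
zfs list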
Install GlusterFS
# Requires names set in /etc/hosts
wget -O - https://download.gluster.org/pub/gluster/glusterfs/7/rsa.pub | apt-key add -

#install glusterfs
-----------
apt-get update
apt full-upgrade -y
apt install glusterfs-server
systemctl enable glusterd
systemctl start glusterd

#prepare storage
-----------
cfdisk /dev/sda
choose gpt
choose write
choose quit

#format & mount storage (master)
---------------
mkfs.xfs /dev/sda -f
mkdir -p /glusterfs/1
echo "/dev/sdb /glusterfs/1 xfs defaults 0 0" >> /etc/fstab
mount -a && mount
mkdir -p /glusterfs/1/gv0

#format & mount storage (slave)
---------------
mkfs.xfs /dev/sda -f
mkdir -p /glusterfs/2
echo "/dev/sdb /glusterfs/2 xfs defaults 0 0" >> /etc/fstab
mount -a && mount
mkdir -p /glusterfs/2/gv0

#format & mount storage (master 2)
---------------
mkfs.xfs /dev/sdb -f
mkdir -p /glusterfs/3
echo "/dev/sdb /glusterfs/3 xfs defaults 0 0" >> /etc/fstab
mount -a && mount
mkdir -p /glusterfs/3/gv0

#format & mount storage (slave 2)
---------------
mkfs.xfs /dev/sdc -f
mkdir -p /glusterfs/4
echo "/dev/sdc /glusterfs/4 xfs defaults 0 0" >> /etc/fstab
mount -a && mount
mkdir -p /glusterfs/4/gv0



#create glusterfs volume
-----------------
gluster volume create gv0 replica 2 10.1.1.10:/glusterfs/1/gv0 10.1.1.10:/glusterfs/2/gv0 10.1.1.11:/glusterfs/3/gv0 10.1.1.11:/glusterfs/4/gv0 force

# a peer probe from the other node is required first
gluster peer probe IP-Proxmox-Master

#start gluster volume
--------------
gluster volume start gv0

#check status gluster volume
------
gluster vol info gv0
gluster vol status gv0
gluster volume profile gv0 start
gluster volume profile gv0 info

#restart glusterfs
--------
systemctl restart glusterd.service
=========================================

Add New Storage
-----
gluster volume add-brick gv0 replica 3 IP-Node:/glusterfs/3/gv0 force
gluster volume add-brick gv0 replica 4 IP-Node:/glusterfs/4/gv0 force


Remove glusterfs volume
-------
gluster volume delete [Volname]

Commands to avoid split-brain situation:
---------------------------
gluster vol set gv0 cluster.heal-timeout 5
gluster volume heal gv0 enable
gluster vol set gv0 cluster.quorum-reads false
gluster vol set gv0 cluster.quorum-count 1
gluster vol set gv0 network.ping-timeout 2
gluster volume set gv0 cluster.favorite-child-policy mtime
gluster volume heal gv0 granular-entry-heal enable
gluster volume set gv0 cluster.data-self-heal-algorithm full
gluster volume set gv0 performance.io-thread-count 8


(First, do not forget to edit /etc/hosts to include the proper hostnames and IPs for each network.)
Lines to add in /etc/fstab (do this on both servers):
/dev/sdx1 /data xfs defaults 0 0
gluster1:VMS /vms glusterfs defaults,_netdev,x-systemd.automount,backupvolfile-server=gluster2 0 0

Lines to add in /etc/glusterfs/glusterd.vol (do this on both servers):
option transport.rdma.bind-address gluster1
option transport.socket.bind-address gluster1
option transport.tcp.bind-address gluster1
COROSYNC
Two-node quorum settings (in corosync.conf):

quorum {
  provider: corosync_votequorum
  expected_votes: 1
  two_node: 1
}
Recover Missing VM
Create a new VM with the same configuration but no disk, then rescan:
qm rescan --vmid [id-vm]
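A hedged sketch of the full recovery, assuming the missing VM had ID 105 and its disk lives on a storage named local-lvm (both placeholders); after the rescan the old disk shows up as an unused volume that can be re-attached:
qm create 105 --name recovered-vm --memory 2048 --cores 2 --net0 virtio,bridge=vmbr0
qm rescan --vmid 105
# the old volume now appears as unused0 in the VM config; attach it, e.g.:
qm set 105 --scsi0 local-lvm:vm-105-disk-0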
Rename Node
Power down all VMs and containers
-------
Edit /etc/hostname and /etc/hosts with the new hostname
Reboot the host
---
At this point you will see your old host as "disconnected" in the web interface, and a new host with your new hostname will appear.

SSH into the machine and navigate to /etc/pve/nodes - here you will see two folders (one with your new hostname, one with your old hostname)

The config for the containers is located at /etc/pve/nodes/<currenthostname>/lxc
The config for virtual machines is stored at /etc/pve/nodes/<currenthostname>/qemu-server
etc.

depending on what other technologies you are using
So I just moved the contents of each folder into the folder for the new host - i.e. /etc/pve/nodes/<newhostname>/lxc etc.
The second I did this, I saw the web interface update with the VMs and containers now showing in the correct datacenter and under the correct host.

Finally, move the folder with the old server's hostname (/etc/pve/nodes/<oldhostname>) somewhere for backup.
Reboot
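Putting the move together, a minimal sketch with <oldhostname> and <newhostname> as placeholders:
cd /etc/pve/nodes
mv <oldhostname>/lxc/*.conf <newhostname>/lxc/
mv <oldhostname>/qemu-server/*.conf <newhostname>/qemu-server/
mv <oldhostname> /root/old-node-backup   # keep the old folder somewhere as a backup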
Fix warning: remote host identification has changed
Update on all nodes
==============
pvecm updatecerts -F
systemctl restart pvedaemon pveproxy


Remove all entries in
----
/etc/pve/priv/known_hosts


2. Cluster
Remove cluster
systemctl stop pve-cluster corosync
pmxcfs -l
rm /etc/corosync/*
rm /etc/pve/corosync.conf
killall pmxcfs
systemctl start pve-cluster

===================
Remove node
=================
pvecm delnode [node]
rm -rf /etc/pve/nodes/[nodes name]
Fix cluster login error
pvecm expected 1


3. Virtual Machines (VM)
Add new disk in Alpine Linux
fdisk /dev/sdb
choose n,p,1,w
mkfs.ext4 /dev/sdb1
mkdir /data
mount /dev/sdb1 /data
mount
blkid (find UUID)
nano /etc/fstab
echo -e "UUID=6648cee4-6063-4df1-842c-4814599ce958 /data ext4 rw,relatime,user 0 0" >> /etc/fstab
reboot
Add new disk in Proxmox (XFS)
fdisk /dev/sdb
choose n,p,1,w
blkid (find UUID)
nano /etc/fstab
UUID=77de149f-f46b-0041-ba8a-9756756dd1e9 / xfs defaults 0 1
Install Docker on Alpine Linux
apk add --update docker openrc
Edit /etc/apk/repositories to add (or uncomment) the community repository line: http://dl-cdn.alpinelinux.org/alpine/latest-stable/community
rc-update add docker boot
rc-service docker start
service docker status

===============
Docker UI
===============
docker run -d \
--name="portainer" \
--restart on-failure \
-p 9000:9000 \
-v /var/run/docker.sock:/var/run/docker.sock \
-v portainer_data:/data \
portainer/portainer-ce
Install Docker in LXC
apt-get update --allow-releaseinfo-change
apt-get upgrade
apt-get install curl

Prune/Cleanup Docker
================
docker system prune --all --volumes --force

============
Automatic
============
curl -fsSL https://get.docker.com -o get-docker.sh
sh get-docker.sh

==============
Manual
==============
apt install docker.io
systemctl enable docker
systemctl start docker
systemctl status docker

===============
Docker UI
===============
docker run -d \
--name="portainer" \
--restart on-failure \
-p 9000:9000 \
-v /var/run/docker.sock:/var/run/docker.sock \
-v portainer_data:/data \
portainer/portainer-ce

=============
IP Container
==============
ip addr

============
Tools
============
apt install docker-compose
Install GNOME desktop on Debian
apt install tasksel -y
tasksel install desktop gnome-desktop
Install Docker, Prometheus & Grafana on Alpine Linux
==============
Install Docker
==============
apk add --update docker openrc
rc-update add docker boot
rc-service docker start
service docker status

===============
Docker UI
===============
docker run -d \
--name="portainer" \
--restart on-failure \
-p 9000:9000 \
-v /var/run/docker.sock:/var/run/docker.sock \
-v portainer_data:/data \
portainer/portainer-ce

==============
Node Exporter
==============
docker pull prom/node-exporter
docker run -d -p 9100:9100 --net="host" prom/node-exporter

==========================
Install Docker Prometheus
=========================
mkdir /etc/monitoring
nano /etc/monitoring/prometheus.yml
----------------
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: node
    static_configs:
      - targets: ['localhost:9100']
------------------
docker run \
-p 9090:9090 \
-v /prometheus-2.39.1.linux-amd64:/prometheus-2.39.1.linux-amd64 \
prom/prometheus

docker run \
-p 9090:9090 \
-v /etc/monitoring/prometheus.yml:/etc/prometheus/prometheus.yml \
prom/prometheus

=====================
Install Prometheus Node
=====================
wget https://github.com/prometheus/prometheus/releases/download/v2.39.1/prometheus-2.39.1.linux-amd64.tar.gz
tar xvf prometheus-*.*-amd64.tar.gz
cd prometheus-*.*
nano prometheus.yml
----------------
global:
  scrape_interval: 15s

scrape_configs:
  - job_name: node
    static_configs:
      - targets: ['localhost:9100']
------------------

==============
Start Service
==============
cd prometheus-*.*
./prometheus --config.file=./prometheus.yml

============
Stop Service
============
pgrep -f prometheus
kill -TERM <id service>

================
Install Docker Grafana
================
docker run --name grafana -d -p 3000:3000 grafana/grafana-enterprise
Install Docker Samba
docker pull dperson/samba
docker images
docker run -it -e TZ=Asia/Jakarta --name samba -p 139:139 -p 445:445 \
-v /mnt/usbc:/mount \
-d dperson/samba -p \
-u "user;password" \
-s "user;/mount;yes;no;no;user"

docker pull elswork/samba
docker images
sudo docker run -it -e TZ=Asia/Jakarta --name samba -m 1024m -p 139:139 -p 445:445 \
-v /mnt/usbc:/mount \
-d elswork/samba -p \
-u "1000:1000:user:user:password" \
-s "user:/mount:rw:user"
docker run -d -p 139:139 -p 445:445 -e TZ=Asia/Jakarta --name samba \
-v /mnt/usbc:/mount elswork/samba \
-u "1000:1000:user:user:password" \
-s "user:/mount:rw:user"
Install Samba on Alpine Linux
apk -U upgrade
apk add samba
setup-disk
mkdir /[storage-mount]
chmod 0777 /[storage-mount]
apk add nano
nano /etc/samba/smb.conf
[nas]
browseable = yes
writeable = yes
path = /data
adduser <username>
smbpasswd -a <username>
rc-update add samba
rc-service samba start
Instant boot on Debian
nano /etc/default/grub
GRUB_TIMEOUT=0
sudo update-grub
Resize disk
qm resize <vmid> <disk> <size>
---------
qm resize 100 virtio0 +5G

LXC
====
Shut down the CT.
Edit /etc/pve/lxc/xxx.conf with the new, correct size.
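The size lives on the rootfs (or mpX) line of that file; a sketch with placeholder storage and container ID:
rootfs: local-lvm:vm-101-disk-0,size=16G
Alternatively, pct resize 101 rootfs +4G grows the mount point without editing the file by hand.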



4. Ceph
Benchmark Ceph
apt install fio
fio --ioengine=libaio --filename=/dev/sda --direct=1 --sync=1 --rw=write --bs=4K --numjobs=1 --iodepth=1 --runtime=60 --time_based --name=fio

fio --ioengine=libaio --filename=/dev/sda --direct=1 --sync=1 --rw=write --bs=4M --numjobs=1 --iodepth=1 --runtime=60 --time_based --name=fio
Ceph OSD CRUSH rule
Add new CRUSH rule
========
ceph osd crush rule create-replicated replicated-ssd default host ssd
ceph osd crush rule create-replicated replicated-hdd default host hdd
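A pool can then be pinned to one of these rules (the pool name is a placeholder):
ceph osd pool set <your-pool> crush_rule replicated-ssd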

check crush
=======
ceph osd crush tree --show-shadow

remove crush
======
ceph osd crush rule rm [Name Crush]
Fix Error [rbd error: rbd: listing images failed: (2) No such file or directory (500)]
rbd ls -l [cephpool]
rbd rm vm-[id]-disk-[1] -p [cephpool]
Increase Ceph pg_num
ceph osd lspools
ceph osd pool get <your-ceph-storage> pg_num
ceph osd pool set <your-ceph-storage> pg_num <new-number>
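For example, assuming a pool named ceph-vm (placeholder) currently at 64 PGs:
ceph osd pool get ceph-vm pg_num
ceph osd pool set ceph-vm pg_num 128
ceph osd pool set ceph-vm pgp_num 128   # on older Ceph releases pgp_num must be raised as well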
Optimize Ceph configuration
auth_cluster_required = none
auth_service_required = none
auth_client_required = none
debug_lockdep = 0/0
debug_context = 0/0
debug_crush = 0/0
debug_buffer = 0/0
debug_timer = 0/0
debug_journaler = 0/0
debug_osd = 0/0
debug_optracker = 0/0
debug_objclass = 0/0
debug_filestore = 0/0
debug_journal = 0/0
debug_ms = 0/0
debug_monc = 0/0
debug_tp = 0/0
debug_auth = 0/0
debug_finisher = 0/0
debug_heartbeatmap = 0/0
debug_perfcounter = 0/0
debug_asok = 0/0
debug_throttle = 0/0
Remove Ceph
1 - Stop/Out all OSDs
2 - Remove all OSDs
3 - Remove ALL Mons (except the master)
4 - Remove ALL Managers (except the master)
5 - Execute on each OSD node: pveceph purge
6 - On last node (master mon/mgr): stop all ceph services, and execute: pveceph purge
====================
rm -rf /etc/systemd/system/ceph*
killall -9 ceph-mon ceph-mgr ceph-mds
rm -rf /var/lib/ceph/mon/ /var/lib/ceph/mgr/ /var/lib/ceph/mds/
pveceph purge
apt purge ceph-mon ceph-osd ceph-mgr ceph-mds
rm /etc/init.d/ceph
for i in $(apt search ceph | grep installed | awk -F/ '{print $1}'); do apt reinstall $i; done
dpkg-reconfigure ceph-base
dpkg-reconfigure ceph-mds
dpkg-reconfigure ceph-common
dpkg-reconfigure ceph-fuse
for i in $(apt search ceph | grep installed | awk -F/ '{print $1}'); do apt reinstall $i; done
rm -rf /etc/ceph
rm -rf /var/lib/ceph
pveceph install

=========
Remove monitor ceph
ceph mon remove [monitor]
Remove CephFS
pveceph fs destroy NAME --remove-storages --remove-pools


Basic
List VMs
qm list

Create or restore a virtual machine.
qm create <vmid>

Create a virtual machine with cores, memory, and disks specified.
qm create <vmid> --name <vm-name> --cores <number-of-cores> --memory <memory-size-in-mb> --scsi0 file=<vg-name>:<size-in-gb> --cdrom local:<iso-name> --net0 virtio,bridge=<bridge-name>
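A concrete instance with example values (VM ID, storage, ISO file and bridge are placeholders):
qm create 120 --name test-vm --cores 2 --memory 2048 --scsi0 file=local-lvm:32 --cdrom local:iso/debian-12.iso --net0 virtio,bridge=vmbr0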

Start a VM
qm start <vmid>

Suspend virtual machine.
qm suspend <vmid>

Shutdown a VM
qm shutdown <vmid>

Reboot a VM
qm reboot <vmid>

Reset a VM
qm reset <vmid>

Stop a VM
qm stop <vmid>

Destroy the VM and all used/owned volumes.
Note: Removes any VM specific permissions and firewall rules
qm destroy <vmid>

Enter Qemu Monitor interface.
qm monitor <vmid>

Get the virtual machine configuration with both current and pending values.
qm pending <vmid>

Send key event to virtual machine.
qm sendkey <vmid> <key> [OPTIONS]

Show command line which is used to start the VM (debug info).
qm showcmd <vmid> [OPTIONS]

Unlock the VM.
qm unlock <vmid>

Clone a VM
qm clone <vmid> <newid>

Clone a VM in full clone mode and also set the name.
qm clone <vmid> <newid> --full --name <name>

Migrate a VM
qm migrate <vmid> <target-node>

Show VM status
qm status <vmid>

Clean up resources for a VM
qm cleanup <vmid> <clean-shutdown> <guest-requested>

Create a Template.
qm template <vmid> [OPTIONS]

Set virtual machine options (synchronous API)
qm set <vmid> [OPTIONS]
Cloudinit
Get automatically generated cloudinit config.
qm cloudinit dump <vmid> <type>

Get the cloudinit configuration with both current and pending values.
qm cloudinit pending <vmid>

Regenerate and change cloudinit config drive.
qm cloudinit update <vmid>
Disk
Import an external disk image as an unused disk in a VM.
Note: The image format has to be supported by qemu-img(1).
qm disk import <vmid> <source> <storage>

Move volume to different storage or to a different VM.
qm disk move <vmid> <disk> [<storage>] [OPTIONS]

Rescan all storages and update disk sizes and unused disk images.
qm disk rescan [OPTIONS]

Extend volume size.
qm disk resize <vmid> <disk> <size> [OPTIONS]

Unlink/delete disk images.
qm disk unlink <vmid> --idlist <string> [OPTIONS]

Rescan volumes
qm rescan
Snapshot
List all snapshots.
qm listsnapshot <vmid>

Snapshot a VM
qm snapshot <vmid> <snapname>

Delete a snapshot.
qm delsnapshot <vmid> <snapname>

Rollback a snapshot
qm rollback <vmid> <snapname>

Open a terminal using a serial device
(The VM needs to have a serial device configured, for example serial0: socket)
qm terminal <vmid> [OPTIONS]
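If no serial device exists yet, one can be added first; a minimal sketch:
qm set <vmid> --serial0 socket
qm terminal <vmid>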

Proxy VM VNC traffic to stdin/stdout
qm vncproxy <vmid>
PV, VG & LV Management
Create a PV
pvcreate <disk-device-name>

Remove a PV
pvremove <disk-device-name>

List all PVs
pvs

Create a VG
vgcreate <vg-name> <disk-device-name>

Remove a VG
vgremove <vg-name>

List all VGs
vgs

Create a LV
lvcreate -L <lv-size> -n <lv-name> <vg-name>

Remove a LV
lvremove <vg-name>/<lv-name>

List all LVs
lvs
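A worked chain with example names (/dev/sdb, vg-data and lv-data are placeholders):
pvcreate /dev/sdb
vgcreate vg-data /dev/sdb
lvcreate -L 50G -n lv-data vg-data
lvs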
Storage Management
Create a new storage.
pvesm add <type> <storage> [OPTIONS]

Allocate disk images.
pvesm alloc <storage> <vmid> <filename> <size> [OPTIONS]

Delete volume
pvesm free <volume> [OPTIONS]

Delete storage configuration.
pvesm remove <storage>

List storage content.
pvesm list <storage> [OPTIONS]

An alias for pvesm scan lvm.
pvesm lvmscan

An alias for pvesm scan lvmthin.
pvesm lvmthinscan

List local LVM volume groups.
pvesm scan lvm

List local LVM Thin Pools.
pvesm scan lvmthin <vg>

Get status for all datastores.
pvesm status [OPTIONS]
Template Management
List All Available Templates
pveam available

List Downloaded Templates on a Storage
pveam list <storage>

Download Appliance Templates
pveam download <storage> <template>

Remove a Template
pveam remove <template-path>

Update Container Template Database.
pveam update
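Typical usage, with the storage and template names as placeholders:
pveam update
pveam available --section system
pveam download local <template-name>
pveam list local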
Container Management
======== Basic ===========
--------------------------

List Containers
pct list

Create or Restore a Container
pct create <vmid> <ostemplate> [OPTIONS]

Start the Container
pct start <vmid> [OPTIONS]

Create a Container Clone/Copy
pct clone <vmid> <newid> [OPTIONS]

Suspend the Container - This is Experimental
pct suspend <vmid>

Resume the Container
pct resume <vmid>

Stop the Container
This will abruptly stop all processes running in the container.
pct stop <vmid> [OPTIONS]

Shutdown the Container
This will trigger a clean shutdown of the container, see lxc-stop(1) for details
pct shutdown <vmid> [OPTIONS]

Destroy the Container (Also Deletes All Used Files)
pct destroy <vmid> [OPTIONS]

Show CT Status
pct status <vmid> [OPTIONS]

Migrate the Container to Another Node. Creates a New Migration Task
pct migrate <vmid> <target> [OPTIONS]

Get Container Configuration
pct config <vmid> [OPTIONS]

Print the List of Assigned CPU Sets
pct cpusets

Get Container Configuration, Including Pending Changes
pct pending <vmid>

Reboot the Container by Shutting it Down, and Starting it Again. Applies Pending Changes.
pct reboot <vmid> [OPTIONS]

Create or Restore a Container
pct restore <vmid> <ostemplate> [OPTIONS]

Set Container Options
pct set <vmid> [OPTIONS]

Create a Template.
pct template <vmid>

Unlock the VM.
pct unlock <vmid>

============= Disk ===========
------------------------------

Get the Container’s Current Disk Usage
pct df <vmid>

Run a Filesystem Check (fsck) on a Container Volume
pct fsck <vmid> [OPTIONS]

Run fstrim on a Chosen CT and Its Mountpoints
pct fstrim <vmid> [OPTIONS]

Mount the Container’s Filesystem on the Host
This will hold a lock on the container and is meant for emergency maintenance only
as it will prevent further operations on the container other than start and stop.
pct mount <vmid>

Move a rootfs-/mp-Volume to a Different Storage or to a Different Container
pct move-volume <vmid> <volume> [<storage>] [<target-vmid>] [<target-volume>] [OPTIONS]

Unmount the Container’s Filesystem
pct unmount <vmid>

Resize a Container Mount Point
pct resize <vmid> <disk> <size> [OPTIONS]

Rescan All Storages and Update Disk Sizes and Unused Disk Images
pct rescan [OPTIONS]

Launch a Console for the Specified Container
pct console <vmid> [OPTIONS]

Launch a Shell for the Specified Container
pct enter <vmid>

Launch a Command Inside the Specified Container
pct exec <vmid> [<extra-args>]

Copy a File from the Container to the Local System
pct pull <vmid> <path> <destination> [OPTIONS]

Copy a Local File to the Container
pct push <vmid> <file> <destination> [OPTIONS]

============= Snapshot =============
-------------------------------------

Snapshot a Container
pct snapshot <vmid> <snapname> [OPTIONS]

List all Snapshots
pct listsnapshot <vmid>

Rollback LXC State to Specified Snapshot
pct rollback <vmid> <snapname> [OPTIONS]

Delete a LXC Snapshot
pct delsnapshot <vmid> <snapname> [OPTIONS]

============= Web GUI =================
---------------------------------------

# Restart Web GUI
service pveproxy restart
Important File/Dir Paths
====== PVE ========
-------------------

/etc/pve/authkey.pub Public key used by the ticket system
/etc/pve/ceph.conf Ceph configuration file (note: /etc/ceph/ceph.conf is a symbolic link to this)
/etc/pve/corosync.conf Corosync cluster configuration file (prior to Proxmox VE 4.x, this file was called cluster.conf)
/etc/pve/datacenter.cfg Proxmox VE data center-wide configuration (keyboard layout, proxy, …)
/etc/pve/domains.cfg Proxmox VE authentication domains
/etc/pve/firewall/cluster.fw Firewall configuration applied to all nodes
/etc/pve/firewall/<NAME>.fw Firewall configuration for individual nodes
/etc/pve/firewall/<VMID>.fw Firewall configuration for VMs and containers
/etc/pve/ha/crm_commands Displays HA operations that are currently being carried out by the CRM
/etc/pve/ha/manager_status JSON-formatted information regarding HA services on the cluster
/etc/pve/ha/resources.cfg Resources managed by high availability, and their current state
/etc/pve/nodes/<NAME>/config Node-specific configuration
/etc/pve/nodes/<NAME>/lxc/<VMID>.conf VM configuration data for LXC containers
/etc/pve/nodes/<NAME>/openvz/ Prior to PVE 4.0, used for container configuration data (deprecated, removed soon)
/etc/pve/nodes/<NAME>/pve-ssl.key Private SSL key for pve-ssl.pem
/etc/pve/nodes/<NAME>/pve-ssl.pem Public SSL certificate for web server (signed by cluster CA)
/etc/pve/nodes/<NAME>/pveproxy-ssl.key Private SSL key for pveproxy-ssl.pem (optional)
/etc/pve/nodes/<NAME>/pveproxy-ssl.pem Public SSL certificate (chain) for web server (optional override for pve-ssl.pem)
/etc/pve/nodes/<NAME>/qemu-server/<VMID>.conf VM configuration data for KVM VMs
/etc/pve/priv/authkey.key Private key used by ticket system
/etc/pve/priv/authorized_keys SSH keys of cluster members for authentication
/etc/pve/priv/ceph* Ceph authentication keys and associated capabilities
/etc/pve/priv/known_hosts SSH keys of the cluster members for verification
/etc/pve/priv/lock/* Lock files used by various services to ensure safe cluster-wide operations
/etc/pve/priv/pve-root-ca.key Private key of cluster CA
/etc/pve/priv/shadow.cfg Shadow password file for PVE Realm users
/etc/pve/priv/storage/<STORAGE-ID>.pw Contains the password of a storage in plain text
/etc/pve/priv/tfa.cfg Base64-encoded two-factor authentication configuration
/etc/pve/priv/token.cfg API token secrets of all tokens
/etc/pve/pve-root-ca.pem Public certificate of cluster CA
/etc/pve/pve-www.key Private key used for generating CSRF tokens
/etc/pve/sdn/* Shared configuration files for Software Defined Networking (SDN)
/etc/pve/status.cfg Proxmox VE external metrics server configuration
/etc/pve/storage.cfg Proxmox VE storage configuration
/etc/pve/user.cfg Proxmox VE access control configuration (users/groups/…)
/etc/pve/virtual-guest/cpu-models.conf For storing custom CPU models
/etc/pve/vzdump.cron Cluster-wide vzdump backup-job schedule

========== Debug =========
--------------------------

/etc/pve/.version File Versions (to detect file modifications)
/etc/pve/.members Info about Cluster Members
/etc/pve/.vmlist List of all VMs
/etc/pve/.clusterlog Cluster Log (last 50 entries)
/etc/pve/.rrd RRD Data (most recent entries)

============ OpenVz ==========
------------------------------

/etc/vz/conf/xxx.conf config
/var/lib/vz/root/xxx data
/var/lib/vz/template/cache template
/var/lib/vz/dump snapshot
/etc/vz/vz.conf OpenVZ config

============= KVM ===========
-----------------------------

/var/lib/vz/images/xxx data
/var/lib/vz/template/iso template
/var/lib/vz/dump snapshot

============= LXC ===========
-----------------------------

/var/lib/lxc/xxx/config config
/var/lib/vz/images/xxx data
/var/lib/vz/template/cache template
/var/lib/vz/dump snapshot
Others
Execute Qemu Guest Agent commands.
qm guest cmd <vmid> <command>

Executes the given command via the guest agent
qm guest exec <vmid> [<extra-args>] [OPTIONS]

Gets the status of the given pid started by the guest-agent
qm guest exec-status <vmid> <pid>

Sets the password for the given user to the given password
qm guest passwd <vmid> <username> [OPTIONS]
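Example usage against a running VM with the guest agent enabled (the VM ID and command are placeholders):
qm guest cmd 100 ping
qm guest exec 100 -- cat /etc/os-release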


References:
https://medium.com/@sm4rthink/proxmox-cheatsheet-b3e92da768bc
https://pengwin.ca/posts/Proxmox-Cheatsheet/
