@GeoffSeeley
2 years ago
Never create a ZFS vdev using the O/S enumerated device names (sda, sdb, etc.) as they can change when adding/removing drives. Use a unique device identifier like the WWN or serial number (see /dev/disk/by-id)
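To make that concrete, here is a minimal sketch; the WWN values below are placeholders, so substitute whatever `ls` shows on your own system:

```shell
# List the stable identifiers the kernel exposes for each disk:
ls -l /dev/disk/by-id/
# Create the pool against those paths instead of sda/sdb, so the
# vdev survives device reordering (WWNs below are made up):
zpool create tank mirror \
  /dev/disk/by-id/wwn-0x5000c500a1b2c3d4 \
  /dev/disk/by-id/wwn-0x5000c500e5f6a7b8
```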
@andrewjohnston359
2 years ago
One issue with calling it shared storage is that over time one might forget it's not actually shared storage - it still relies on replication for HA to work (whereas actual shared storage does not). One thing you could also try out for another demo video, which I have tried and is really fun, is creating a Ceph cluster rather than ZFS. When you use Ceph with hyperconverged storage like you have here, it DOES actually become truly shared/distributed storage - no need to configure replication, just configure HA rules. I've tested live migration in this scenario with a SIP client making a voice call, and it didn't drop the call - no ping drops. In the event a node loses power, the VM just starts itself up on another node. Ceph does have some downsides... it really needs NVMe storage for VDI-type VMs, needs SSDs in general for journals, plus a minimum of 10Gbps networking between nodes... but it's super cool!!
@adrianelviejillo
11 months ago
Hi, I am using GlusterFS with replica 3 on a 3 node cluster, I've enabled some HA rules for VMs, no issues so far.
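For reference, a replica-3 GlusterFS volume like this comes down to a few commands; the hostnames and brick paths below are hypothetical:

```shell
# From node1, peer the other two nodes into the trusted pool:
gluster peer probe node2
gluster peer probe node3
# Create a volume that keeps a full copy of every file on all 3 bricks:
gluster volume create gv0 replica 3 \
  node1:/data/brick1/gv0 \
  node2:/data/brick1/gv0 \
  node3:/data/brick1/gv0
gluster volume start gv0
```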
@camerontinker1948
2 years ago
Great video! I always appreciate the detail you put into your content. Proxmox HA is pretty cool stuff!
@DigitalSpaceport
2 years ago
There are so many details with Proxmox, and that's before even getting to the CLI-only features. Thanks!
@teoechico
2 years ago
Such a clear explanation and demo. Thank you!
@ierosgr
2 years ago
At 7:03 you've said what I needed to hear: a 10G connection between nodes. For that you'll create a vmbr and assign a 10G port to it, right? If you were to enable jumbo frames, it gives you the option to set the MTU to 9000 (the number for jumbo frames) at the port level and at the vmbr level. Where would you enable it? Only in the port option window, only on the vmbr that is based on that port, or on both the port and the vmbr?
@DigitalSpaceport
2 years ago
Set the vmbr0 "bridge ports" to the 10G card's name (mine is something like enp2s0) and restart the network. I find jumbo frames not really needed, and if you do go that route, you need to ensure all the traffic on that logical network has jumbo frames enabled or it can cause more issues than it solves.
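On the MTU question specifically: the bridge's effective MTU is capped by its member port, so the usual approach is to set 9000 on both the physical port and the vmbr. A sketch, with example interface names and IP:

```shell
# Raise the MTU on the physical 10G port first, then on the bridge:
ip link set enp2s0 mtu 9000
ip link set vmbr0 mtu 9000
# Verify end to end with a non-fragmenting ping; 8972 = 9000 minus
# 20 bytes of IP header and 8 bytes of ICMP header:
ping -M do -s 8972 192.168.1.2
```

In /etc/network/interfaces the equivalent is an `mtu 9000` line under both the port's and the bridge's iface stanza; either way, every device on that path (switch included) has to match.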
@ierosgr
2 years ago
@@DigitalSpaceport I know the procedure; I don't know the specific option I asked about, though, and you seemed to sidestep a straight answer. I also know that the whole internal 10G network then needs that option as well. What I don't know is at which level you set it in Proxmox: vmbr or port? Even a bond has that option so it can be changed at the desired level. Do you happen to know?
@CJ-vg4tg
1 year ago
Great tutorial on ZFS etc., thanks! I have it all set up on 3 nodes, replication done, etc. I keep hitting an issue with adding resources to HA: the CTs or VMs, after being added, go into a 'freeze' state on the active node. Been on this for days now, initially with a Ceph pool and OSD disks thinking that was the issue - started from scratch and tried ZFS and still hit the same issue - HA CTs go into a freeze state. I'm losing my mind over this. Any ideas? Thanks
@DigitalSpaceport
1 year ago
Do you have the Guest Agent configured and running on the VMs? Also, what CPU TYPE are you using here? I recommend testing with the QEMU CPU type on all machines; I think I had this issue with HOST mode. Also, there was a bug in one of the kernels earlier this year, so I recommend you update to the latest 5.19 if you have not tried that yet.
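A quick way to check both of those from the node's shell; VMID 100 and the kvm64 type are examples:

```shell
# Show whether the guest agent is enabled and which CPU type is set:
qm config 100 | grep -E '^(agent|cpu):'
# Switch away from 'host' to a generic QEMU CPU type for testing:
qm set 100 --cpu kvm64
```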
@jeytis72
2 years ago
Hi, as for a MySQL VM or container, I was wondering if this kind of configuration + HA would keep all data perfectly synchronized in case the main node goes down. In particular, let's assume a MySQL VM on node 'one' of the cluster goes down suddenly; would all the MySQL data (even the last record updated a few seconds before) be immediately available on the other [target] node? Thanks
@DigitalSpaceport
2 years ago
Hey, so you need to look at "Galera Clustering" for using MySQL like this. You would probably run into write locks if you just rely on this and a node falls off, imo. For anything distributed/MySQL related: Galera.
@jeytis72
2 years ago
@@DigitalSpaceport I once heard about Galera. I'll look it up. Last thing: if the main node goes down, say it breaks completely, once you have restored the machine with the same HD configuration, what would you need to do to make it work as before? Does the Proxmox cluster run a kind of "resilvering" or what? Thanks
@Halomania697
1 year ago
I'm getting an error when trying to migrate the VM over to another one of my nodes. I only have 2 in this instance.
2022-12-29 15:30:55 starting migration of VM 100 to node 'ALG-Proxmox02' (192.168.3.202)
2022-12-29 15:30:55 found local disk 'crate01:iso/ubuntu-22.04.1-live-server-amd64.iso' (in current VM config)
2022-12-29 15:30:55 found local, replicated disk 'datastore01:vm-100-disk-0' (in current VM config)
2022-12-29 15:30:55 can't migrate local disk 'crate01:iso/ubuntu-22.04.1-live-server-amd64.iso': local cdrom image
2022-12-29 15:30:55 ERROR: Problem found while scanning volumes - can't migrate VM - check log
2022-12-29 15:30:55 aborting phase 1 - cleanup resources
2022-12-29 15:30:55 ERROR: migration aborted (duration 00:00:00): Problem found while scanning volumes - can't migrate VM - check log
Any clue why this is happening? I'm using a directory to upload the ISO image to create the VM.
@DigitalSpaceport
1 year ago
You need to remove the DVD/CD image that is attached. It is stored locally on your machine on a non-migrating volume, it looks like: "can't migrate local disk 'crate01:iso/ubuntu-22.04.1-live-server-amd64.iso'"
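If you'd rather do it from the CLI than the hardware tab, ejecting the ISO looks roughly like this, assuming the drive sits on ide2 (the usual slot):

```shell
# Detach the ISO but keep an empty CD/DVD drive on the VM:
qm set 100 --ide2 none,media=cdrom
# Then retry the migration:
qm migrate 100 ALG-Proxmox02 --online
```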
@michaborczynski2985
2 years ago
Hi, are these nodes virtual machines? If so, how did you solve the networking problem inside Proxmox? I made 3 nodes (Proxmox) on Hyper-V, and HA and ZFS work like gold. But the created VM has no network access, and I have no idea how to sort it out from here.
@DigitalSpaceport
2 years ago
You need to configure a virtual network in the network manager, set it to vmbr0, and attach it to the Hyper-V network card.
@DenmarkAngeles-dy4oz
3 months ago
Does it matter if the nodes in the cluster have different RAID setups but the pool still has the same name? Does replication still work between nodes? Example pool name: VM-storage
node1: raidz1
node2: raidzmirror
node3: raidzstripe
Thank you.
@DigitalSpaceport
3 months ago
I think as long as you are all ZFS you are OK. I would be careful and test this first with a live migration, but if that works I would assume it's OK. The HA is based on Proxmox's ZFS implementation under the hood, which I'd imagine conforms closely to the OpenZFS standard.
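A live-migration smoke test from the CLI, if that's easier than the GUI; the VMID and node name are examples:

```shell
# Migrate a running VM to another node; this only succeeds if its
# disks live on a storage ID that also exists on the target:
qm migrate 100 node2 --online
# Confirm where it landed:
qm status 100
```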
@DenmarkAngeles-dy4oz
3 months ago
@@DigitalSpaceport Ok, I'll test it first. I'll let you know if live migration works! Thank you!
@Awanbandung
1 year ago
So with ZFS, every node must have a disk for the shared storage? What if the sizes differ between nodes, or one doesn't have one at all? Let's say:
node-A = ZFS 1TB
node-B = ZFS 2TB
node-C = no ZFS
Is that possible?
@DigitalSpaceport
1 year ago
node-C will be a problem, with the cool shared storage being missing. Mismatched sizes won't be a problem aside from the obvious "what will fit".
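The key constraint is that the target node has a ZFS storage with the same storage ID as the source. Setting up a replication job from the shell looks roughly like this; the VMID, job ID, and node name are hypothetical:

```shell
# Replicate VM 100 to node-B every 15 minutes; the job ID format
# is <vmid>-<job-number>:
pvesr create-local-job 100-0 node-B --schedule '*/15'
# List jobs and their last sync state:
pvesr status
```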
@Awanbandung
1 year ago
@@DigitalSpaceport I see, thanks for explaining. So it's actually possible to create a cluster without shared storage? Just to access all the nodes on one page.
@DigitalSpaceport
1 year ago
@@Awanbandung You can, but the shared storage is really one of the biggest reasons to do a cluster. Don't think clusters come with zero risk either: you can't "uncluster" without some potential for issues.
@Awanbandung
1 year ago
@@DigitalSpaceport I only have 1 SSD per node, used by the Proxmox OS. Can I use the remaining space on the SSD for high availability? Which method is best?
@eminismailzade
1 year ago
Hello. I am getting "task error: Cluster join aborted!" when joining the cluster. Maybe you know the reason?
@DigitalSpaceport
1 year ago
You have an issue with your connection handshake most likely. Are you running a baseline install?
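For reference, the join runs from the new node's shell against an existing member's IP, and checking quorum afterwards often shows what aborted; the IP below is an example:

```shell
# On the joining node (it must not already host VMs or belong
# to another cluster):
pvecm add 192.168.1.10
# Afterwards, verify cluster membership and quorum:
pvecm status
```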
@eminismailzade
1 year ago
@@DigitalSpaceport Yes, I installed the latest official release. Both of the nodes are located on different ESXi machines: one on a PC, the other on a notebook.
Comments: 30