
Okay, then my question still stands. You are saying "similar to any other system which needs to reboot", but this is nowhere near similar to something like k8s, which has first-class support for this. You cordon the node you are about to take off for maintenance, Kubernetes automatically redistributes all the workloads to the other nodes, and after you are done you uncordon the node.
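For reference, that k8s workflow is roughly the following (the node name is just a placeholder):

    kubectl cordon node-1                     # mark the node unschedulable
    kubectl drain node-1 --ignore-daemonsets  # evict its workloads onto other nodes
    # ... do the maintenance / reboot ...
    kubectl uncordon node-1                   # allow scheduling on it again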

How does this look with Incus? Obviously if the workload you are running has some kind of multi-node support you can use that, but I'm wondering whether Incus has a way to do this in some generalized way, like k8s does?

I did some more reading, though, and there seems to be support for live migration for VMs, and limited live migration for containers. Moving stopped instances is supported for both VMs and containers.


I think what you are asking for falls under "cluster member evacuation and re-balancing" [0], combined with live migration [1] for minimal downtime.

[0] https://linuxcontainers.org/incus/docs/main/howto/cluster_ma...

[1] https://linuxcontainers.org/incus/docs/main/howto/move_insta...
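In practice, as far as I can tell from [0], that maps to something like this sketch (the member name is just an example; whether each instance gets migrated, live-migrated or stopped during evacuation depends on its cluster.evacuate setting):

    incus cluster evacuate server2   # move or stop this member's instances
    # ... reboot / maintain server2 ...
    incus cluster restore server2    # bring the instances back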

CoolCold
Thank you!

Indeed, container live migration is limited, and the docs are a bit unclear on "network devices" - is a bridged interface a network device or not?

A bit ironic, since CRIU was, AFAIK, created by the same Virtuozzo guys who provided OpenVZ back then, and OpenVZ VEs could already live migrate - I was personally testing it in 2007-2008. Granted, there was no systemd in those days, if that is what complicates things, and it did of course require their patched kernel.

> Live migration for containers
>
> For containers, there is limited support for live migration using CRIU. However, because of extensive kernel dependencies, only very basic containers (non-systemd containers without a network device) can be migrated reliably. In most real-world scenarios, you should stop the container, move it over and then start it again.
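So for a typical container the fallback is the cold-migration path, roughly (instance and member names are placeholders):

    incus stop c1
    incus move c1 --target server2
    incus start c1

For VMs, a plain "incus move" of a running instance can do a live migration, though IIRC the VM needs stateful migration enabled (migration.stateful) - [1] covers the details.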

63stack OP
Amazing, exactly what I was looking for, thank you.
