From the course: VMware vSphere 8 Certified Technical Associate - Data Center Virtualization (VCTA-DCV) (1V0-21.20) Cert Prep
Migrating VMs with vMotion - vSphere Tutorial
- [Rick] In this video, we'll learn how vMotion can be used to migrate a running virtual machine from one ESXi host to another. Anytime you hear the term vMotion, for any of the different types of vMotion that we're going to talk about, the one thing they all have in common is that the virtual machine is running. We're going to migrate a running VM without taking it down. That's what vMotion is all about. So with vMotion, we're going to move a running virtual machine from one ESXi host to another, and there is no service interruption as the VM is moved from host to host. This gives me the ability to manually load balance hardware. For example, let's say that a virtual machine is running on an ESXi host that is running low on memory, or that has inadequate CPU for the virtual machines running on it. I could use vMotion to migrate certain virtual machines off of that host and onto other hosts. Or if I need to perform maintenance on a host and migrate all of the VMs off of it, I can use vMotion to accomplish that. But before I can utilize vMotion, I have to make sure that all of the necessary prerequisites have been met.

I just want to point out that in this diagram, I am speaking specifically about a regular old vMotion. We're not talking about a shared-nothing vMotion, a Storage vMotion, or a Cross vCenter vMotion, anything like that. We're just talking about a regular old vMotion: migrating a VM from one host to another. This is the type of vMotion that is utilized by DRS, or Distributed Resource Scheduler. So it's very important that we understand how a regular old vMotion works, and it's very important that the prerequisites are met.

Here in this slide, you see a virtual machine running on ESXi Host 1. This is where the live state of my VM is. This VM is running on this ESXi host; it is using the memory of this host and the CPUs of this host. And maybe I need to take this host down for some reason. For example, maybe I need to perform a memory upgrade on the physical host itself. So I want to take this virtual machine and get it off of that host. I want to move it to another host without taking it down, and I can use vMotion to do that.

But in order for vMotion to work, the virtual machine must be able to access its files after the vMotion operation completes. So we are going to need access to shared storage. When I say shared storage, I mean a datastore that is accessible to both the source and destination ESXi hosts. Shared storage is required for many vSphere features; vMotion is a big one, and High Availability is another big one. The VM is also going to need to be able to properly communicate over the network after it's moved. Here, we can see that this VM is connected to a virtual switch. When the VM is migrated to ESXi Host 2, I need a virtual switch with an identical port group and an identical VLAN assignment, so that the virtual machine doesn't get cut off from the network. We need to make sure that moving it to the destination host will not break network connectivity for that virtual machine. As of vSphere 6, some of the vMotion requirements for the virtual switch have been relaxed a little bit. You can do a cross virtual switch vMotion, so we don't need the port groups to be identically named; we can migrate the VM to a different port group.
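To make those shared storage and networking prerequisites concrete, here is a minimal pyVmomi sketch that checks which datastores and port groups two hosts have in common before a migration. The vCenter address, credentials, and host names (esxi01/esxi02) are placeholder values for this example, not anything from the course lab.

```python
# A minimal pyVmomi sketch: check that two ESXi hosts can both see the same
# datastore(s) and port group(s) before attempting a vMotion. The vCenter
# address, credentials, and host names below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def get_host(content, name):
    """Return the HostSystem with the given name, or None if not found."""
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    try:
        return next((h for h in view.view if h.name == name), None)
    finally:
        view.DestroyView()

ctx = ssl._create_unverified_context()  # lab use only: skips certificate checks
si = SmartConnect(host="vcsa.lab.local", user="administrator@vsphere.local",
                  pwd="VMware1!", sslContext=ctx)
try:
    content = si.RetrieveContent()
    src = get_host(content, "esxi01.lab.local")
    dst = get_host(content, "esxi02.lab.local")

    # Shared storage prerequisite: datastores visible to BOTH hosts.
    src_ds = {ds.name for ds in src.datastore}
    dst_ds = {ds.name for ds in dst.datastore}
    print("Shared datastores:", src_ds & dst_ds)

    # Networking prerequisite: port groups the destination host also provides.
    src_pg = {net.name for net in src.network}
    dst_pg = {net.name for net in dst.network}
    print("Port groups available on both hosts:", src_pg & dst_pg)
finally:
    Disconnect(si)
```

If the first set comes back empty, the VM's files would be unreachable after the move; if the second is empty, the VM would lose its network connection on the destination host.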
But what you do still need is a connection to a physical switch and the proper VLAN assignment, so that the address of that virtual machine still works when it's moved to the destination host, because the IP address of the VM doesn't change. When the VM migrates to the destination host, it's still going to have the same IP. Now, it may connect to some other port group with a different name, but that port group needs to have similar characteristics. It needs to be on the same VLAN and allow that VM to still communicate. And notice that when the VM moves to ESXi Host 2, it maintains access to all of its files on the shared storage, so the VM can continue to run even after it's been migrated.

Okay, so let's walk through all of the things that occur when we carry out a vMotion. My VM is running on ESXi Host 1, and on both the source and destination hosts, I have a VMkernel port that is marked for vMotion. The job of that VMkernel port is going to be to help my hosts create a copy of this virtual machine on the destination host. Let's just back up a little bit here. VMkernel ports are ports that are used by the VMkernel, and when I say the VMkernel, I mean, basically, the core process of an ESXi host. Our hosts will use VMkernel ports to carry out functions that the hosts themselves need to perform, like, for example, migrating the contents of memory of a VM from one host to another. So what we'll do is create a VMkernel port, go in and configure it, and mark it for vMotion traffic, essentially telling the ESXi host: hey, if you want to migrate a VM, send that information using this VMkernel port. And again, the information that's going to be sent is things like the contents of memory of this VM. We're basically going to create a duplicate copy of this VM on the destination host when we carry out a vMotion. Once that copy is complete, it's time to cut over to the new version of this virtual machine on the destination host.

Now, that being said, the virtual machine is running this entire time. There's no downtime. So as the VM is being copied, as a copy of it is being created on ESXi Host 2, there may be things that are still changing in the memory tables of the source VM, because the VM is still live. Those changes are captured in something called a memory bitmap. Once the copy operation is complete, the source VM is briefly halted, and the contents of that memory bitmap are finalized at that point. Any changes that have occurred in memory are captured in that memory bitmap, and the memory bitmap is then copied over the vMotion network to the instance of the VM on the destination host. At that point, the new VM instance on the destination host takes over. That's why, if you are continuously pinging the VM when you carry out a vMotion, you may notice a slight delay: there's a brief moment where those pings stop responding, or maybe one ping takes longer. The reason is that the VM on the source host is briefly halted while the memory bitmap is copied over and the new VM takes over. So there is an ever-so-slight moment in time where you'll notice a slower response from pings, or maybe a ping will even time out, but that's something that would never be noticeable to your end user.
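If you prefer to script that VMkernel tagging step rather than use the vSphere Client, here is a small pyVmomi sketch built on the vSphere API's HostVirtualNicManager. The host object and the adapter name vmk1 are placeholders, and it assumes a connection like the one in the earlier sketch.

```python
# Sketch of the VMkernel tagging step, using the vSphere API's
# HostVirtualNicManager. 'host' is a pyVmomi HostSystem object (for example,
# one returned by the get_host() helper above); 'vmk1' is a placeholder
# name for an existing VMkernel adapter.
def enable_vmotion_vmk(host, device="vmk1"):
    """Mark the given VMkernel adapter for vMotion traffic on this host."""
    nic_mgr = host.configManager.virtualNicManager

    # Show which VMkernel adapters are currently selected for vMotion traffic.
    net_cfg = nic_mgr.QueryNetConfig("vmotion")
    print("Currently selected for vMotion:", list(net_cfg.selectedVnic or []))

    # Tell the host to use this adapter whenever it sends vMotion traffic,
    # such as the memory copy described above.
    nic_mgr.SelectVnicForNicType("vmotion", device)
    print(f"{device} on {host.name} is now tagged for vMotion")
```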
Okay, so we've already talked about the storage and networking requirements for vMotion. There are a few other things that we need to bear in mind when it comes to vMotion compatibility. As I am deploying ESXi hosts, I need to make sure that those ESXi hosts are vMotion compatible. In this case, we see a VM running on ESXi Host 1, and we see another ESXi host that we may want to migrate this VM to using vMotion. Well, what if the VM is attached to a local ISO image? For example, maybe this is a Windows VM that we just created, and we booted it from a Windows ISO image to install our operating system. If that ISO image is on the local physical storage of ESXi Host 1, it is not going to be available on the local physical storage of ESXi Host 2. The ISO image is local to that one ESXi host, so if I migrate this VM, it's not going to be able to access the ISO image that it's connected to anymore. That will break vMotion; if I try to carry out a vMotion on this VM, I'll get an error.

What if the source host is equipped with an Intel CPU, and the destination host is equipped with an AMD CPU? In that case, vMotion is not going to work either. The hosts have to have compatible CPUs. Think about it this way: you've got a running virtual machine that's interacting with a processor. If I now, all of a sudden, take that VM and move it from an Intel CPU architecture to an AMD CPU architecture, that running VM isn't going to respond well to that. So if I try to migrate a VM from an ESXi host with an Intel CPU to one with an AMD CPU, or the other way around, that's going to result in a vMotion error. We have to have compatible CPUs in order to carry out vMotion. And if we have a one gigabit per second network between ESXi hosts, we can carry out a maximum of four concurrent vMotions across that one-gig network; if we have a 10-gig network between ESXi hosts, we can carry out a maximum of eight concurrent vMotions across those ESXi hosts.

All right, now that we've learned the basics about a typical, standard vMotion, there are a couple of other vMotion types that are supported. We can do a Cross vCenter vMotion. This is a relatively new feature; prior to vSphere 6, you could only do a vMotion between two hosts that were in the same vCenter inventory. A Cross vCenter vMotion now gives you the ability to migrate a virtual machine to a host that's managed by a different vCenter instance. The only major requirements here are that the vCenter Servers are time synchronized and that they're part of the same single sign-on domain. And we can carry out a long-distance vMotion, which can be used to move virtual machines across a connection that has up to 150 milliseconds of latency. Again, some changes occurred in vSphere 6 where the long-distance vMotion requirements were relaxed quite a bit.
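Putting a few of those compatibility points together, here is a hedged pyVmomi sketch that checks the CPU vendor, warns about ISO-backed CD-ROM devices, and then starts the migration with the vSphere API's MigrateVM_Task call. The migrate_vm helper name is made up for this example, and the VM and host objects are assumed to come from a connection like the earlier one.

```python
# Sketch of a migration with a couple of the compatibility checks described
# above. 'vm' is a pyVmomi VirtualMachine object and 'dest_host' a HostSystem;
# both are assumed to come from a connection like the earlier example.
from pyVmomi import vim

def migrate_vm(vm, dest_host):
    """Run basic compatibility checks, then vMotion the running VM."""
    src_host = vm.runtime.host

    # CPU vendors must match: Intel-to-Intel or AMD-to-AMD, never mixed.
    src_vendor = src_host.hardware.cpuPkg[0].vendor
    dst_vendor = dest_host.hardware.cpuPkg[0].vendor
    if src_vendor != dst_vendor:
        raise RuntimeError(f"CPU vendor mismatch: {src_vendor} vs {dst_vendor}")

    # Warn about CD-ROM devices backed by an ISO file; if that ISO sits on
    # storage local to the source host, the destination host cannot reach it.
    for dev in vm.config.hardware.device:
        if isinstance(dev, vim.vm.device.VirtualCdrom) and isinstance(
                dev.backing, vim.vm.device.VirtualCdrom.IsoBackingInfo):
            print("Warning: CD-ROM attached to ISO:", dev.backing.fileName)

    # Start the vMotion. The returned task shows up in Recent Tasks just like
    # a migration started from the vSphere Client.
    return vm.MigrateVM_Task(
        host=dest_host,
        priority=vim.VirtualMachine.MovePriority.defaultPriority)
```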
Contents
Introduction (48s)
Migrating VMs with vMotion (15m 43s)
Demo: Configuring vMotion in vSphere (14m 22s)
Demo: Performing a vMotion migration in vSphere (10m 54s)
Demo: Configure a scheduled vMotion in vSphere (4m 5s)
Storage vMotion concepts (6m)
Demo: Performing a Storage vMotion in vSphere (5m 23s)
Demo: Shared-nothing vMotion in vSphere (5m 34s)
vMotion Configuration maximums (3m 25s)
vMotion enhancement in vSphere (2m 1s)