First of all, I don't think that reflects the latest status of Xen 4.4+; see the more recent material (the 2014 Xen on ARM summit talk, in the reference link below).
Here I'll add some more information.
- Unlike x86 (or x64) Xen, Xen on ARM no longer requires any shadow page table management code.
The first generation of Intel VT-x processors had no memory virtualization (no EPT, Extended Page Tables), which means there was no hardware support for translating guest-physical addresses to host-physical addresses. Xen and every other hypervisor had to set up shadow page tables to do this in software, and keep them synchronized with the guest's own page tables by trapping MOV-to-CR3 and INVLPG instructions. This approach is sometimes called a VTLB (virtual TLB).
For compatibility reasons, this code is still in Xen, because there are still many machines on the market whose VT-x processors are not EPT-capable.
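The shadow-page-table synchronization described above could be sketched like this. This is a toy Python model of the idea, not real Xen code; all class and method names are mine:

```python
# Toy model of shadow page tables (virtual TLB). The guest edits its own
# page table (guest VA -> guest-physical); the hypervisor keeps a shadow
# table (guest VA -> host-physical) that the real MMU walks, and resyncs
# it when it traps a page-table switch (MOV to CR3) or an INVLPG.

class ShadowPager:
    def __init__(self, p2m):
        self.p2m = p2m      # guest-physical -> host-physical map
        self.shadow = {}    # guest VA -> host-physical (what the MMU uses)

    def trap_mov_cr3(self, guest_page_table):
        # Guest loaded a new page table: rebuild the whole shadow.
        self.shadow = {gva: self.p2m[gpa]
                       for gva, gpa in guest_page_table.items()}

    def trap_invlpg(self, gva, guest_page_table):
        # Guest invalidated one entry: resync just that mapping.
        if gva in guest_page_table:
            self.shadow[gva] = self.p2m[guest_page_table[gva]]
        else:
            self.shadow.pop(gva, None)

p2m = {0x1000: 0x9000, 0x2000: 0xA000}
pager = ShadowPager(p2m)
pager.trap_mov_cr3({0x4000: 0x1000})
assert pager.shadow[0x4000] == 0x9000  # MMU walks the shadow, not the guest table
```

Every guest page-table switch forces a resync like this, which is exactly the software overhead that EPT (and ARM's stage 2 translation) removes.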
However, Xen/ARM doesn't suffer from a similar compatibility issue, because it started from scratch and hardware memory (MMU) virtualization has been supported ever since the ARM virtualization extensions were introduced (true? please correct me if I'm wrong). In the ARM architecture, a new processor mode (Hyp mode, PL2 privilege) controls a new MMU address translation layer, the "stage 2" translation from IPA (Intermediate Physical Address) to the final PA. So Xen/ARM doesn't need virtual-TLB-style code, which saves thousands of lines.
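The two-stage translation can be sketched in a few lines (again an illustrative model, not Xen code): hardware composes both walks, so the hypervisor only maintains the stage 2 tables and never shadows the guest's.

```python
# ARM two-stage address translation, modeled as two dictionaries.
# Stage 1 is owned by the guest kernel; stage 2 is owned by the
# hypervisor (programmed from Hyp mode / PL2). The MMU walks both.

stage1 = {0x8000: 0x1000}   # guest VA -> IPA (guest-controlled)
stage2 = {0x1000: 0x9000}   # IPA -> PA (hypervisor-controlled)

def translate(va):
    ipa = stage1[va]        # stage 1 walk: VA -> IPA
    return stage2[ipa]      # stage 2 walk: IPA -> PA

assert translate(0x8000) == 0x9000
```

The guest can change `stage1` freely without trapping; only `stage2` faults reach the hypervisor.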
- The white paper says Xen/ARM does not need QEMU because it does not do any emulation; it uses paravirtualized interfaces for I/O devices. <Is this still true on the latest Xen/ARM?> Actually, I don't fully understand the Xen/ARM architecture for device emulation. It still uses Dom0 to host PV backend drivers that communicate with PV frontend drivers in the guest OS (DomU).
As for device virtualization on a Xen/x86 system, many legacy devices still use I/O ports (e.g., the IN/OUT instructions) for communication between the CPU and the device. Xen/ARM doesn't have this issue, because all I/O device interfaces are memory-mapped onto the system memory bus, so device accesses can be initiated with regular memory load/store instructions.
So Xen/ARM can unify the I/O device interface, which simplifies both the hypervisor design and the PV backend/frontend driver design. The code size may also be smaller.
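One consequence of "everything is memory-mapped" is that a hypervisor needs only a single trap handler keyed by guest-physical address ranges, instead of separate port-I/O and MMIO paths. A toy sketch of that unified dispatch (names are mine, not Xen's):

```python
# Unified MMIO dispatch: every trapped device access is an ordinary
# load/store to a guest-physical address, so one table by address
# range covers all devices. Illustrative only, not real Xen code.

mmio_handlers = []  # list of (start, end, handler)

def register_mmio(start, end, handler):
    mmio_handlers.append((start, end, handler))

def on_stage2_fault(addr, value=None):
    # Dispatch a trapped device store purely by address.
    for start, end, handler in mmio_handlers:
        if start <= addr < end:
            return handler(addr - start, value)
    raise MemoryError(f"no device mapped at {hex(addr)}")

uart_regs = {}
register_mmio(0x09000000, 0x09001000,
              lambda off, val: uart_regs.__setitem__(off, val))
on_stage2_fault(0x09000000, ord('A'))   # a store to the UART data register
assert uart_regs[0] == ord('A')
```

On x86, the same hypervisor would additionally need an IN/OUT exit handler indexed by port number, i.e. a second, parallel dispatch mechanism.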
- Xen/x86 has two kinds of guests: HVM guests (unmodified kernels, e.g., Windows) and PV guests, whose kernels must be modified to be aware of the presence of Xen. For example, some low-level CPU or MMU primitive operations in a PV guest kernel must be changed to call a hypercall interface that communicates with the underlying Xen.
But Xen/ARM doesn't introduce this differentiation: whatever must change in the guest kernel simply gets changed. I think this is partly because Xen/ARM doesn't need to support Windows or other closed-source operating systems, at least for now. So the Xen/ARM code is smaller again.
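The paravirtualization idea mentioned above can be illustrated with a toy model (all names here are hypothetical, not real Xen or Linux interfaces): a privileged primitive that would trap in a guest is replaced, in the PV kernel, by an explicit hypercall.

```python
# Toy illustration of a PV kernel modification: a low-level MMU
# primitive is rewritten to call the hypervisor instead of touching
# hardware directly. Illustrative names only.

hypercall_log = []

def hypervisor_mmu_update(va, pa):
    # The hypervisor validates and applies the update for the guest.
    hypercall_log.append(("mmu_update", va, pa))

class NativeKernel:
    def set_pte(self, va, pa):
        # Direct privileged MMU write: would trap when run as a guest.
        raise PermissionError("privileged MMU write traps in a guest")

class PVKernel(NativeKernel):
    # Modified (paravirtualized) kernel: same primitive, now a hypercall.
    def set_pte(self, va, pa):
        hypervisor_mmu_update(va, pa)

PVKernel().set_pte(0x8000, 0x9000)
assert hypercall_log == [("mmu_update", 0x8000, 0x9000)]
```

Supporting unmodified (HVM) guests means the hypervisor must handle the `NativeKernel` path too, by trapping and emulating, which is a large part of the extra x86 code.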
- Xen/x86 supports both AMD's and Intel's hardware virtualization technologies: AMD-V (Pacifica, SVM) and Intel VT-x.
Although the two technologies look almost the same from the guest's perspective, the hypervisor still needs to maintain two x86 (and x64) code paths, because the two virtualization architectures differ in many minor ways that matter to the VMM.
However, ARM doesn't have this issue either. So far only the ARMv7-A and ARMv8-A architectures support ARM hardware virtualization, and the variation between them is minimal.
- Similarly, Xen/x86 also supports many generations of AMD/Intel processor architectures and their corresponding platforms/chipsets. Each generation is different, yet Xen/x86 must maintain backward compatibility, so its code size keeps growing over time.
For example, Intel processors have a 16-bit "real mode" in addition to 32-bit and 64-bit protected modes. On earlier generations of VT-x processors (prior to Westmere), code running in 16-bit real mode could not be virtualized natively, so the VMM/hypervisor had to emulate real-mode instruction execution on the guest's logical processor. This was eventually resolved when the VMX "Unrestricted Guest" mode was introduced, which took only a few lines of code to enable.
I don't think Xen/ARM will have any such problem in the near future.
- The latest Intel VT-x processors (Haswell and later) introduce hardware support for nested virtualization. Intel calls it "VMCS Shadowing": it provides hardware assistance for running a VMM/hypervisor on top of itself or a third-party hypervisor. Xen currently has partial nested virtualization support (the virtualization industry leader, VMware, does it better: most virtualization features can be nested in their products). This definitely adds some code overhead.
The above are just some examples of why Xen/ARM has a smaller code size. We cannot say which architecture is better based on code size alone. As a newcomer to hardware virtualization, ARM should and must do better!
Reference: Xen on ARM - Stefano Stabellini