Hello community,
here is the log from the commit of package xen for openSUSE:11.3
checked in at Tue May 10 11:08:48 CEST 2011.
--------
--- old-versions/11.3/UPDATES/all/xen/xen.changes 2011-03-07 17:29:27.000000000 +0100
+++ 11.3/xen/xen.changes 2011-05-03 23:45:15.000000000 +0200
@@ -1,0 +2,111 @@
+Tue May 3 08:54:51 MDT 2011 - carnold@novell.com
+
+- bnc#691238 - L3: question on behaviour change xm list
+ snapshot-xend.patch
+
+-------------------------------------------------------------------
+Thu Apr 28 10:24:46 MDT 2011 - jfehlig@novell.com
+
+- bnc#688473 - VUL-0: potential buffer overflow in tools
+ cve-2011-1583-4.0.patch
+
+-------------------------------------------------------------------
+Tue Apr 26 11:30:39 MDT 2011 - carnold@novell.com
+
+- bnc#623680 - xen kernel freezes during boot when processor module
+ is loaded
+ 23228-x86-conditional-write_tsc.patch
+- bnc#680824 - dom0 can't recognize boot disk when IOMMU is enabled
+ 23200-amd-iommu-intremap-sync.patch
+- Upstream patches from Jan
+ 23127-vtd-bios-settings.patch
+ 23153-x86-amd-clear-DramModEn.patch
+ 23154-x86-amd-iorr-no-rdwr.patch
+ 23199-amd-iommu-unmapped-intr-fault.patch
+
+-------------------------------------------------------------------
+Tue Apr 19 06:43:19 MDT 2011 - jfehlig@novell.com
+
+- bnc#687981 - L3: mistyping model type when defining VIF crashes
+ VM
+ xend-validate-nic-model.patch
+
+-------------------------------------------------------------------
+Tue Apr 5 10:57:20 MDT 2011 - carnold@novell.com
+
+- Upstream patches from Jan
+ 23103-x86-pirq-guest-eoi-check.patch
+ 23030-x86-hpet-init.patch
+ 23061-amd-iommu-resume.patch
+ 23127-vtd-bios-settings.patch
+
+-------------------------------------------------------------------
+Mon Mar 28 09:28:49 MDT 2011 - carnold@novell.com
+
+- Enable support for kernel decompression for gzip, bzip2, and LZMA
+ so that kernels compressed with any of these methods can be
+ launched
+
+-------------------------------------------------------------------
+Thu Mar 17 06:22:30 MDT 2011 - carnold@novell.com
+
+- bnc#675817 - Kernel panic when creating HVM guests on AMD
+ platforms with XSAVE
+ 22462-x86-xsave-init-common.patch
+
+-------------------------------------------------------------------
+Tue Mar 15 09:22:24 MDT 2011 - carnold@novell.com
+
+- bnc#679344 - Xen: multi-vCPU pv guest may crash host
+ 23034-x86-arch_set_info_guest-DoS.patch
+- bnc#678871 - dom0 hangs long time when starting hvm guests with
+ memory >= 64GB
+ 22780-pod-preempt.patch
+- bnc#675363 - Random lockups with kernel-xen. Possibly graphics
+ related
+ 22997-x86-map_pages_to_xen-check.patch
+- Upstream patches from Jan
+ 22949-x86-nmi-pci-serr.patch
+ 22992-x86-fiop-m32i.patch
+ 22996-x86-alloc_xen_pagetable-no-BUG.patch
+ 23020-x86-cpuidle-ordering.patch
+ 23039-csched-constrain-cpu.patch
+
+-------------------------------------------------------------------
+Mon Mar 14 10:11:19 MDT 2011 - carnold@novell.com
+
+- bnc#678229 - restore of sles HVM fails
+ 22873-svm-sr-32bit-sysenter-msrs.patch
+
+-------------------------------------------------------------------
+Mon Feb 28 14:07:01 CST 2011 - cyliu@novell.com
+
+- Fix /vm/uuid xenstore leak on tapdisk2 device cleanup
+ 22499-xen-hotplug-cleanup.patch
+
+-------------------------------------------------------------------
+Fri Feb 25 14:07:01 MST 2011 - carnold@novell.com
+
+- Upstream patches from Jan
+ 22872-amd-iommu-pci-reattach.patch
+ 22879-hvm-no-self-set-mem-type.patch
+ 22899-x86-tighten-msr-permissions.patch
+ 22915-x86-hpet-msi-s3.patch
+ 22947-amd-k8-mce-init-all-msrs.patch
+
+-------------------------------------------------------------------
+Thu Feb 17 21:18:19 MST 2011 - jfehlig@novell.com
+
+- bnc#672833 - xen-tools bug causing problems with Ubuntu 10.10
+ under Xen 4.
+ 22238-pygrub-grub2-fix.patch
+
+-------------------------------------------------------------------
+Thu Feb 17 20:06:07 CST 2011 - lidongyang@novell.com
+
+- bnc#665610 - xm console > 1 to same VM messes up both consoles
+ Upstream rejected it due to portability concerns; see
+ http://lists.xensource.com/archives/html/xen-devel/2011-02/msg00942.html
+ xenconsole-no-multiple-connections.patch
+
+-------------------------------------------------------------------
calling whatdependson for 11.3-i586
New:
----
22238-pygrub-grub2-fix.patch
22462-x86-xsave-init-common.patch
22499-xen-hotplug-cleanup.patch
22780-pod-preempt.patch
22872-amd-iommu-pci-reattach.patch
22873-svm-sr-32bit-sysenter-msrs.patch
22879-hvm-no-self-set-mem-type.patch
22899-x86-tighten-msr-permissions.patch
22915-x86-hpet-msi-s3.patch
22947-amd-k8-mce-init-all-msrs.patch
22949-x86-nmi-pci-serr.patch
22992-x86-fiop-m32i.patch
22996-x86-alloc_xen_pagetable-no-BUG.patch
22997-x86-map_pages_to_xen-check.patch
23020-x86-cpuidle-ordering.patch
23030-x86-hpet-init.patch
23034-x86-arch_set_info_guest-DoS.patch
23039-csched-constrain-cpu.patch
23061-amd-iommu-resume.patch
23103-x86-pirq-guest-eoi-check.patch
23127-vtd-bios-settings.patch
23153-x86-amd-clear-DramModEn.patch
23154-x86-amd-iorr-no-rdwr.patch
23199-amd-iommu-unmapped-intr-fault.patch
23200-amd-iommu-intremap-sync.patch
23228-x86-conditional-write_tsc.patch
cve-2011-1583-4.0.patch
xenconsole-no-multiple-connections.patch
xend-validate-nic-model.patch
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Other differences:
------------------
++++++ xen.spec ++++++
--- /var/tmp/diff_new_pack.ZKlwWp/_old 2011-05-10 11:08:11.000000000 +0200
+++ /var/tmp/diff_new_pack.ZKlwWp/_new 2011-05-10 11:08:11.000000000 +0200
@@ -38,7 +38,7 @@
%if %{?with_kmp}0
BuildRequires: kernel-source kernel-syms module-init-tools xorg-x11
%endif
-Version: 4.0.1_21326_06
+Version: 4.0.1_21326_08
Release: 0.<RELEASE2>
License: GPLv2+
Group: System/Kernel
@@ -117,47 +117,74 @@
Patch39: 22223-vtd-workarounds.patch
Patch40: 22231-x86-pv-ucode-msr-intel.patch
Patch41: 22232-x86-64-lahf-lm-bios-workaround.patch
-Patch42: 22280-kexec.patch
-Patch43: 22337-vtd-scan-single-func.patch
-Patch44: 22348-vtd-check-secbus-devfn.patch
-Patch45: 22369-xend-pci-passthru-fix.patch
-Patch46: 22385-vif-common.patch
-Patch47: 22388-x2apic-panic.patch
-Patch48: 22389-amd-iommu-decls.patch
-Patch49: 22416-acpi-check-mwait.patch
-Patch50: 22417-vpmu-nehalem.patch
-Patch51: 22431-p2m-remove-bug-check.patch
-Patch52: 22448-x86_64-gdt-ldt-fault-filter.patch
-Patch53: 22451-hvm-cap-clobber.patch
-Patch54: 22452-x86-irq-migrate-directed-eoi.patch
-Patch55: 22466-x86-sis-apic-bug.patch
-Patch56: 22470-vlapic-tick-loss.patch
-Patch57: 22475-x2apic-cleanup.patch
-Patch58: 22484-vlapic-tmcct-periodic.patch
-Patch59: 22504-iommu-dom0-holes.patch
-Patch60: 22506-x86-iommu-dom0-estimate.patch
-Patch61: 22526-ept-access-once.patch
-Patch62: 22533-x86-32bit-apicid.patch
-Patch63: 22534-x86-max-local-apic.patch
-Patch64: 22535-x2apic-preenabled.patch
-Patch65: 22538-keyhandler-relax.patch
-Patch66: 22540-32on64-hypercall-debug.patch
-Patch67: 22549-vtd-map-page-leak.patch
-Patch68: 22574-ept-skip-validation.patch
-Patch69: 22632-vtd-print-entries.patch
-Patch70: 22645-amd-flush-filter.patch
-Patch71: 22694-x86_64-no-weak.patch
-Patch72: 22693-fam10-mmio-conf-base-protect.patch
-Patch73: 22707-x2apic-preenabled-check.patch
-Patch74: 22708-xenctx-misc.patch
-Patch75: 22744-ept-pod-locking.patch
-Patch76: 22749-vtd-workarounds.patch
-Patch77: 22777-vtd-ats-fixes.patch
-Patch78: 22781-pod-hap-logdirty.patch
-Patch79: 22782-x86-emul-smsw.patch
-Patch80: 22789-i386-no-x2apic.patch
-Patch81: 22790-svm-resume-migrate-pirqs.patch
-Patch82: 22816-x86-pirq-drop-priv-check.patch
+Patch42: 22238-pygrub-grub2-fix.patch
+Patch43: 22280-kexec.patch
+Patch44: 22337-vtd-scan-single-func.patch
+Patch45: 22348-vtd-check-secbus-devfn.patch
+Patch46: 22369-xend-pci-passthru-fix.patch
+Patch47: 22385-vif-common.patch
+Patch48: 22388-x2apic-panic.patch
+Patch49: 22389-amd-iommu-decls.patch
+Patch50: 22416-acpi-check-mwait.patch
+Patch51: 22417-vpmu-nehalem.patch
+Patch52: 22431-p2m-remove-bug-check.patch
+Patch53: 22448-x86_64-gdt-ldt-fault-filter.patch
+Patch54: 22451-hvm-cap-clobber.patch
+Patch55: 22452-x86-irq-migrate-directed-eoi.patch
+Patch56: 22462-x86-xsave-init-common.patch
+Patch57: 22466-x86-sis-apic-bug.patch
+Patch58: 22470-vlapic-tick-loss.patch
+Patch59: 22475-x2apic-cleanup.patch
+Patch60: 22484-vlapic-tmcct-periodic.patch
+Patch61: 22499-xen-hotplug-cleanup.patch
+Patch62: 22504-iommu-dom0-holes.patch
+Patch63: 22506-x86-iommu-dom0-estimate.patch
+Patch64: 22526-ept-access-once.patch
+Patch65: 22533-x86-32bit-apicid.patch
+Patch66: 22534-x86-max-local-apic.patch
+Patch67: 22535-x2apic-preenabled.patch
+Patch68: 22538-keyhandler-relax.patch
+Patch69: 22540-32on64-hypercall-debug.patch
+Patch70: 22549-vtd-map-page-leak.patch
+Patch71: 22574-ept-skip-validation.patch
+Patch72: 22632-vtd-print-entries.patch
+Patch73: 22645-amd-flush-filter.patch
+Patch74: 22694-x86_64-no-weak.patch
+Patch75: 22693-fam10-mmio-conf-base-protect.patch
+Patch76: 22707-x2apic-preenabled-check.patch
+Patch77: 22708-xenctx-misc.patch
+Patch78: 22744-ept-pod-locking.patch
+Patch79: 22749-vtd-workarounds.patch
+Patch80: 22777-vtd-ats-fixes.patch
+Patch81: 22780-pod-preempt.patch
+Patch82: 22781-pod-hap-logdirty.patch
+Patch83: 22782-x86-emul-smsw.patch
+Patch84: 22789-i386-no-x2apic.patch
+Patch85: 22790-svm-resume-migrate-pirqs.patch
+Patch86: 22816-x86-pirq-drop-priv-check.patch
+Patch87: 22872-amd-iommu-pci-reattach.patch
+Patch88: 22873-svm-sr-32bit-sysenter-msrs.patch
+Patch89: 22879-hvm-no-self-set-mem-type.patch
+Patch90: 22899-x86-tighten-msr-permissions.patch
+Patch91: 22915-x86-hpet-msi-s3.patch
+Patch92: 22947-amd-k8-mce-init-all-msrs.patch
+Patch93: 22949-x86-nmi-pci-serr.patch
+Patch94: 22992-x86-fiop-m32i.patch
+Patch95: 22996-x86-alloc_xen_pagetable-no-BUG.patch
+Patch96: 22997-x86-map_pages_to_xen-check.patch
+Patch97: 23020-x86-cpuidle-ordering.patch
+Patch98: 23030-x86-hpet-init.patch
+Patch99: 23034-x86-arch_set_info_guest-DoS.patch
+Patch100: 23039-csched-constrain-cpu.patch
+Patch101: 23061-amd-iommu-resume.patch
+Patch102: 23103-x86-pirq-guest-eoi-check.patch
+Patch103: 23127-vtd-bios-settings.patch
+Patch104: 23153-x86-amd-clear-DramModEn.patch
+Patch105: 23154-x86-amd-iorr-no-rdwr.patch
+Patch106: 23199-amd-iommu-unmapped-intr-fault.patch
+Patch107: 23200-amd-iommu-intremap-sync.patch
+Patch108: 23228-x86-conditional-write_tsc.patch
+Patch109: cve-2011-1583-4.0.patch
# Upstream ioemu patches
Patch200: 7410-qemu-alt-gr.patch
Patch201: 7426-xenfb-depth.patch
@@ -209,6 +236,7 @@
Patch356: ioemu-vnc-resize.patch
Patch357: ioemu-debuginfo.patch
Patch358: vif-bridge-no-iptables.patch
+Patch359: xenconsole-no-multiple-connections.patch
# Needs to go upstream
Patch360: checkpoint-rename.patch
Patch361: xm-save-check-file.patch
@@ -227,7 +255,6 @@
Patch374: xend-devid-or-name.patch
Patch375: 22326-cpu-pools-numa-placement.patch
Patch376: 20158-revert.patch
-#Patch377: suspend_evtchn_lock.patch
# Patches for snapshot support
Patch400: snapshot-ioemu-save.patch
Patch401: snapshot-ioemu-restore.patch
@@ -262,6 +289,7 @@
Patch439: bdrv_default_rwflag.patch
Patch440: blktap2.patch
Patch442: xen-minimum-restart-time.patch
+Patch443: xend-validate-nic-model.patch
# Jim's domain lock patch
Patch450: xend-domain-lock.patch
# Hypervisor and PV driver Patches
@@ -705,6 +733,33 @@
%patch80 -p1
%patch81 -p1
%patch82 -p1
+%patch83 -p1
+%patch84 -p1
+%patch85 -p1
+%patch86 -p1
+%patch87 -p1
+%patch88 -p1
+%patch89 -p1
+%patch90 -p1
+%patch91 -p1
+%patch92 -p1
+%patch93 -p1
+%patch94 -p1
+%patch95 -p1
+%patch96 -p1
+%patch97 -p1
+%patch98 -p1
+%patch99 -p1
+%patch100 -p1
+%patch101 -p1
+%patch102 -p1
+%patch103 -p1
+%patch104 -p1
+%patch105 -p1
+%patch106 -p1
+%patch107 -p1
+%patch108 -p1
+%patch109 -p1
%patch200 -p1
%patch201 -p1
%patch202 -p1
@@ -754,6 +809,7 @@
%patch356 -p1
%patch357 -p1
%patch358 -p1
+%patch359 -p1
%patch360 -p1
%patch361 -p1
%patch362 -p1
@@ -771,7 +827,6 @@
%patch374 -p1
%patch375 -p1
%patch376 -p1
-#%patch377 -p1 bnc#649209
%patch400 -p1
%patch401 -p1
%patch402 -p1
@@ -803,6 +858,7 @@
%patch439 -p1
%patch440 -p1
%patch442 -p1
+%patch443 -p1
%patch450 -p1
%patch500 -p1
%patch501 -p1
++++++ 22223-vtd-workarounds.patch ++++++
--- /var/tmp/diff_new_pack.ZKlwWp/_old 2011-05-10 11:08:11.000000000 +0200
+++ /var/tmp/diff_new_pack.ZKlwWp/_new 2011-05-10 11:08:11.000000000 +0200
@@ -41,7 +41,15 @@
Signed-off-by: Allen Kay
-Added WLAN device ID 0x422C that was found on Fujitsu's Calpella system to WLAN quirk.
+# HG changeset patch
+# User Keir Fraser
+# Date 1294221021 0
+# Node ID e635e6641c07ee2da66b16f46f45442c9a46821d
+# Parent 76d897a06b316bf2278220b006d578faf31ce3fb
+[VTD] added WLAN device ID on Fujitsu's platform in quirks.c
+
+Added WLAN device ID 0x422C that was found on Fujitsu's Calpella
+system to WLAN quirk.
Signed-off-by: Allen Kay
++++++ 22238-pygrub-grub2-fix.patch ++++++
# HG changeset patch
# User Ian Campbell
# Date 1286966726 -3600
# Node ID 6eaab829768109e57f31a141efa9d06689e64670
# Parent 5eee6789914049a82de4edc1f7ddc1e20b554fff
pygrub: support grub2 "(hdX,msdosY)" partition syntax
This appeared in Debian Squeeze at some point.
Signed-off-by: Ian Campbell
Signed-off-by: Stefano Stabellini
committer: Stefano Stabellini
diff -r 5eee67899140 -r 6eaab8297681 tools/pygrub/src/GrubConf.py
--- a/tools/pygrub/src/GrubConf.py Wed Oct 13 11:37:02 2010 +0100
+++ b/tools/pygrub/src/GrubConf.py Wed Oct 13 11:45:26 2010 +0100
@@ -77,6 +77,8 @@
self._part = val
return
val = val.replace("(", "").replace(")", "")
+ if val[:5] == "msdos":
+ val = val[5:]
self._part = int(val)
part = property(get_part, set_part)
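For readers not stepping through the diff: the patched setter simply strips a GRUB2 "msdos" prefix before converting the partition component to an integer. A standalone sketch of that logic (the function name is hypothetical; the real code is the `set_part` property setter in `GrubConf.py`):

```python
def parse_partition(val):
    """Parse a GRUB partition token into an integer partition number.

    Accepts both the classic component ("1") and the GRUB2
    "(hdX,msdosY)"-style component ("msdos1") that appeared in
    Debian Squeeze.
    """
    val = val.replace("(", "").replace(")", "")
    if val[:5] == "msdos":   # strip the GRUB2 partition-table prefix
        val = val[5:]
    return int(val)
```

Without the two added lines, `int("msdos1")` raises a `ValueError`, which is why pygrub failed on grub2 configs using this syntax.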
++++++ 22462-x86-xsave-init-common.patch ++++++
References: bnc#675817
# HG changeset patch
# User Keir Fraser
# Date 1291746398 0
# Node ID 98eb4a334b7723c3e515038feaddbd01cec45a3a
# Parent 70501ee741a6dccd940c1cb4481650cdc1afdcf3
amd xsave: Move xsave initialization code to a common place
This patch moves xsave/xrstor code to the common CPU file. First of all,
it prepares xsave/xrstor support for AMD CPUs. Secondly, Xen would
crash on __context_switch() without this patch on xsave-capable AMD
CPUs. The crash was due to cpu_has_xsave reporting true in domain.c
while the xsave space wasn't initialized.
Signed-off-by: Wei Huang
--- a/xen/arch/x86/cpu/common.c
+++ b/xen/arch/x86/cpu/common.c
@@ -22,6 +22,8 @@ static int cachesize_override __cpuinitd
static int disable_x86_fxsr __cpuinitdata;
static int disable_x86_serial_nr __cpuinitdata;
+static int use_xsave;
+boolean_param("xsave", use_xsave);
unsigned int __devinitdata opt_cpuid_mask_ecx = ~0u;
integer_param("cpuid_mask_ecx", opt_cpuid_mask_ecx);
unsigned int __devinitdata opt_cpuid_mask_edx = ~0u;
@@ -400,6 +402,13 @@ void __cpuinit identify_cpu(struct cpuin
if (this_cpu->c_init)
this_cpu->c_init(c);
+ /* Initialize xsave/xrstor features */
+ if ( !use_xsave )
+ clear_bit(X86_FEATURE_XSAVE, boot_cpu_data.x86_capability);
+
+ if ( cpu_has_xsave )
+ xsave_init();
+
/* Disable the PN if appropriate */
squash_the_stupid_serial_number(c);
--- a/xen/arch/x86/cpu/intel.c
+++ b/xen/arch/x86/cpu/intel.c
@@ -20,9 +20,6 @@
extern int trap_init_f00f_bug(void);
-static int use_xsave;
-boolean_param("xsave", use_xsave);
-
#ifdef CONFIG_X86_INTEL_USERCOPY
/*
* Alignment at which movsl is preferred for bulk memory copies.
@@ -256,12 +253,6 @@ static void __devinit init_intel(struct
set_bit(X86_FEATURE_ARAT, c->x86_capability);
start_vmx();
-
- if ( !use_xsave )
- clear_bit(X86_FEATURE_XSAVE, boot_cpu_data.x86_capability);
-
- if ( cpu_has_xsave )
- xsave_init();
}
++++++ 22475-x2apic-cleanup.patch ++++++
--- /var/tmp/diff_new_pack.ZKlwWp/_old 2011-05-10 11:08:11.000000000 +0200
+++ /var/tmp/diff_new_pack.ZKlwWp/_new 2011-05-10 11:08:11.000000000 +0200
@@ -169,7 +169,7 @@
unsigned long apic_phys;
--- a/xen/arch/x86/cpu/common.c
+++ b/xen/arch/x86/cpu/common.c
-@@ -250,8 +250,8 @@ static void __init early_cpu_detect(void
+@@ -252,8 +252,8 @@ static void __init early_cpu_detect(void
c->x86 = 4;
if (c->cpuid_level >= 0x00000001) {
@@ -180,7 +180,7 @@
c->x86 = (tfms >> 8) & 15;
c->x86_model = (tfms >> 4) & 15;
if (c->x86 == 0xf)
-@@ -260,9 +260,12 @@ static void __init early_cpu_detect(void
+@@ -262,9 +262,12 @@ static void __init early_cpu_detect(void
c->x86_model += ((tfms >> 16) & 0xF) << 4;
c->x86_mask = tfms & 15;
cap0 &= ~cleared_caps[0];
++++++ 22499-xen-hotplug-cleanup.patch ++++++
Index: xen-4.0.1-testing/tools/hotplug/Linux/xen-hotplug-cleanup
===================================================================
--- xen-4.0.1-testing.orig/tools/hotplug/Linux/xen-hotplug-cleanup
+++ xen-4.0.1-testing/tools/hotplug/Linux/xen-hotplug-cleanup
@@ -21,10 +21,12 @@ if [ "$vm" != "" ]; then
# if the vm path does not exist and the device class is 'vbd' then we may have
# a tap2 device
- if [ "$(xenstore-read "$vm_dev" 2>/dev/null)" != "" ] \
- && [ "${path_array[1]}" = "vbd" ]; then
- vm_dev="$vm/device/tap2/${path_array[3]}"
- fi
+ $(xenstore-read "$vm_dev" 2>/dev/null) || \
+ {
+ if [ "${path_array[1]}" = "vbd" ]; then
+ vm_dev="$vm/device/tap2/${path_array[3]}"
+ fi
+ }
else
vm_dev=
fi
++++++ 22749-vtd-workarounds.patch ++++++
--- /var/tmp/diff_new_pack.ZKlwWp/_old 2011-05-10 11:08:11.000000000 +0200
+++ /var/tmp/diff_new_pack.ZKlwWp/_new 2011-05-10 11:08:11.000000000 +0200
@@ -28,7 +28,31 @@
Signed-off-by: Allen Kay
-jb: Disabled the body of pci_vtd_quirk() for ix86.
+# HG changeset patch
+# User Allen Kay
+# Date 1296587456 0
+# Node ID 3edd21ffe407ac0e853d51aa8302d9bdb4068749
+# Parent 0e2c8b75f7d233f15f8bb49d9db0579e7a350964
+passthrough/vtd: disable 64-bit MMCFG quirk on 32-bit Xen
+
+Attached patch disables pci_vtd_quirk for 32-bit Xen since 32-bit xen
+does not support MMCFG access.
+
+Signed-off-by: Allen Kay
+Committed-by: Ian Jackson
+
+# HG changeset patch
+# User Keir Fraser
+# Date 1297240805 0
+# Node ID c23b711f92646a7e441ee80dbb15b9e1e87c83f8
+# Parent aeda4adecaf85618918dc674855721e3fc9eb33d
+[VTD][QUIRK] add spin lock across snb pre/postamble functions
+
+Added a spinlock across snb_vtd_ops_preamble() and
+snb_vtd_ops_postamble() to make modifications to IGD registers atomic.
+Continue keeping snb_igd_quirk default off.
+
+Signed-off-by: Allen Kay
--- a/xen/drivers/passthrough/vtd/extern.h
+++ b/xen/drivers/passthrough/vtd/extern.h
@@ -51,7 +75,7 @@
spin_unlock(&pcidevs_lock);
--- a/xen/drivers/passthrough/vtd/quirks.c
+++ b/xen/drivers/passthrough/vtd/quirks.c
-@@ -47,11 +47,13 @@
+@@ -47,12 +47,15 @@
#define IS_CTG(id) (id == 0x2a408086)
#define IS_ILK(id) (id == 0x00408086 || id == 0x00448086 || id== 0x00628086 || id == 0x006A8086)
#define IS_CPT(id) (id == 0x01008086 || id == 0x01048086)
@@ -63,9 +87,11 @@
static int is_cantiga_b3;
+static int is_snb_gfx;
static u8 *igd_reg_va;
++static spinlock_t igd_lock;
/*
-@@ -92,6 +94,12 @@ static void cantiga_b3_errata_init(void)
+ * QUIRK to workaround Xen boot issue on Calpella/Ironlake OEM BIOS
+@@ -92,6 +95,13 @@ static void cantiga_b3_errata_init(void)
is_cantiga_b3 = 1;
}
@@ -73,12 +99,13 @@
+static void snb_errata_init(void)
+{
+ is_snb_gfx = IS_SNB_GFX(igd_id);
++ spin_lock_init(&igd_lock);
+}
+
/*
* QUIRK to workaround Cantiga IGD VT-d low power errata.
* This errata impacts IGD assignment on Cantiga systems
-@@ -104,12 +112,15 @@ static void cantiga_b3_errata_init(void)
+@@ -104,12 +114,15 @@ static void cantiga_b3_errata_init(void)
/*
* map IGD MMIO+0x2000 page to allow Xen access to IGD 3D register.
*/
@@ -97,7 +124,7 @@
/* get IGD mmio address in PCI BAR */
igd_mmio = ((u64)pci_conf_read32(0, IGD_DEV, 0, 0x14) << 32) +
-@@ -121,6 +132,7 @@ static void map_igd_reg(void)
+@@ -121,6 +134,7 @@ static void map_igd_reg(void)
/* ioremap this physical page */
set_fixmap_nocache(FIX_IGD_MMIO, igd_reg);
igd_reg_va = (u8 *)fix_to_virt(FIX_IGD_MMIO);
@@ -105,7 +132,7 @@
}
/*
-@@ -134,6 +146,9 @@ static int cantiga_vtd_ops_preamble(stru
+@@ -134,6 +148,9 @@ static int cantiga_vtd_ops_preamble(stru
if ( !is_igd_drhd(drhd) || !is_cantiga_b3 )
return 0;
@@ -115,7 +142,7 @@
/*
* read IGD register at IGD MMIO + 0x20A4 to force IGD
* to exit low power state. Since map_igd_reg()
-@@ -144,11 +159,69 @@ static int cantiga_vtd_ops_preamble(stru
+@@ -144,11 +161,74 @@ static int cantiga_vtd_ops_preamble(stru
}
/*
@@ -181,21 +208,31 @@
{
cantiga_vtd_ops_preamble(iommu);
+ if ( snb_igd_quirk )
++ {
++ spin_lock(&igd_lock);
++
++ /* match unlock in postamble */
+ snb_vtd_ops_preamble(iommu);
++ }
}
/*
-@@ -156,7 +229,8 @@ void vtd_ops_preamble_quirk(struct iommu
+@@ -156,7 +236,13 @@ void vtd_ops_preamble_quirk(struct iommu
*/
void vtd_ops_postamble_quirk(struct iommu* iommu)
{
- return;
+ if ( snb_igd_quirk )
++ {
+ snb_vtd_ops_postamble(iommu);
++
++ /* match the lock in preamble */
++ spin_unlock(&igd_lock);
++ }
}
/* initialize platform identification flags */
-@@ -175,6 +249,8 @@ void __init platform_quirks_init(void)
+@@ -175,6 +261,8 @@ void __init platform_quirks_init(void)
/* initialize cantiga B3 identification */
cantiga_b3_errata_init();
@@ -204,7 +241,7 @@
/* ioremap IGD MMIO+0x2000 page */
map_igd_reg();
}
-@@ -246,11 +322,14 @@ void me_wifi_quirk(struct domain *domain
+@@ -246,11 +334,14 @@ void me_wifi_quirk(struct domain *domain
id = pci_conf_read32(bus, PCI_SLOT(devfn), PCI_FUNC(devfn), 0);
switch (id)
{
@@ -222,13 +259,13 @@
case 0x422b8086:
case 0x422c8086:
map_me_phantom_function(domain, 22, map);
-@@ -258,6 +337,28 @@ void me_wifi_quirk(struct domain *domain
+@@ -258,6 +349,28 @@ void me_wifi_quirk(struct domain *domain
default:
break;
}
+ }
+}
-
++
+/*
+ * Mask reporting Intel VT-d faults to IOH core logic:
+ * - Some platform escalates VT-d faults to platform errors
@@ -237,12 +274,12 @@
+ */
+void pci_vtd_quirk(struct pci_dev *pdev)
+{
-+#ifndef __i386__
++#ifdef CONFIG_X86_64
+ int bus = pdev->bus;
+ int dev = PCI_SLOT(pdev->devfn);
+ int func = PCI_FUNC(pdev->devfn);
+ int id, val;
-+
+
+ id = pci_conf_read32(bus, dev, func, 0);
+ if ( id == 0x342e8086 || id == 0x3c288086 )
+ {
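The second changeset's locking pattern — take a lock in the preamble, release it in the matching postamble so all register accesses in between are atomic — can be sketched as follows (a Python `threading.Lock` stands in for the added spinlock; the function names mirror the patch but the rest is illustrative, not Xen code):

```python
import threading

igd_lock = threading.Lock()   # stands in for the added spinlock_t igd_lock
snb_igd_quirk = True          # quirk flag; default off in Xen, assumed on here

def vtd_ops_preamble(trace):
    """Run before touching IGD registers."""
    if snb_igd_quirk:
        igd_lock.acquire()    # matched by the release in the postamble
        trace.append("preamble")

def vtd_ops_postamble(trace):
    """Run after the IGD register modifications are done."""
    if snb_igd_quirk:
        trace.append("postamble")
        igd_lock.release()    # matches the acquire in the preamble
```

Because every IOMMU operation is bracketed by the pair, two CPUs can never interleave their IGD register modifications.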
++++++ 22780-pod-preempt.patch ++++++
References: bnc#678871
# HG changeset patch
# User George Dunlap
# Date 1295274253 0
# Node ID 97ab84aca65cdcbce2ddccc51629fb24adb056cf
# Parent d1631540bcc4d369d7e7ec1d87e54e1a8f5d5f78
PoD: Allow pod_set_cache_target hypercall to be preempted
For very large VMs, setting the cache target can take long enough that
dom0 complains of soft lockups. Allow the hypercall to be preempted.
Signed-off-by: George Dunlap
Acked-by: Tim Deegan
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -1704,8 +1704,8 @@ int hypercall_xlat_continuation(unsigned
unsigned long nval = 0;
va_list args;
- BUG_ON(*id > 5);
- BUG_ON(mask & (1U << *id));
+ BUG_ON(id && *id > 5);
+ BUG_ON(id && (mask & (1U << *id)));
va_start(args, mask);
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -4527,14 +4527,22 @@ long arch_memory_op(int op, XEN_GUEST_HA
rc = p2m_pod_set_mem_target(d, target.target_pages);
}
- target.tot_pages = d->tot_pages;
- target.pod_cache_pages = d->arch.p2m->pod.count;
- target.pod_entries = d->arch.p2m->pod.entry_count;
-
- if ( copy_to_guest(arg, &target, 1) )
+ if ( rc == -EAGAIN )
+ {
+ rc = hypercall_create_continuation(
+ __HYPERVISOR_memory_op, "lh", op, arg);
+ }
+ else if ( rc >= 0 )
{
- rc= -EFAULT;
- goto pod_target_out_unlock;
+ target.tot_pages = d->tot_pages;
+ target.pod_cache_pages = d->arch.p2m->pod.count;
+ target.pod_entries = d->arch.p2m->pod.entry_count;
+
+ if ( copy_to_guest(arg, &target, 1) )
+ {
+ rc= -EFAULT;
+ goto pod_target_out_unlock;
+ }
}
pod_target_out_unlock:
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -383,7 +383,7 @@ static struct page_info * p2m_pod_cache_
/* Set the size of the cache, allocating or freeing as necessary. */
static int
-p2m_pod_set_cache_target(struct domain *d, unsigned long pod_target)
+p2m_pod_set_cache_target(struct domain *d, unsigned long pod_target, int preemptible)
{
struct p2m_domain *p2md = d->arch.p2m;
int ret = 0;
@@ -416,6 +416,12 @@ p2m_pod_set_cache_target(struct domain *
}
p2m_pod_cache_add(d, page, order);
+
+ if ( hypercall_preempt_check() && preemptible )
+ {
+ ret = -EAGAIN;
+ goto out;
+ }
}
/* Decreasing the target */
@@ -460,6 +466,12 @@ p2m_pod_set_cache_target(struct domain *
put_page(page+i);
put_page(page+i);
+
+ if ( hypercall_preempt_check() && preemptible )
+ {
+ ret = -EAGAIN;
+ goto out;
+ }
}
}
@@ -537,7 +549,7 @@ p2m_pod_set_mem_target(struct domain *d,
ASSERT( pod_target >= p2md->pod.count );
- ret = p2m_pod_set_cache_target(d, pod_target);
+ ret = p2m_pod_set_cache_target(d, pod_target, 1/*preemptible*/);
out:
p2m_unlock(p2md);
@@ -701,7 +713,7 @@ out_entry_check:
/* If we've reduced our "liabilities" beyond our "assets", free some */
if ( p2md->pod.entry_count < p2md->pod.count )
{
- p2m_pod_set_cache_target(d, p2md->pod.entry_count);
+ p2m_pod_set_cache_target(d, p2md->pod.entry_count, 0/*can't preempt*/);
}
out_unlock:
--- a/xen/arch/x86/x86_64/compat/mm.c
+++ b/xen/arch/x86/x86_64/compat/mm.c
@@ -127,6 +127,9 @@ int compat_arch_memory_op(int op, XEN_GU
if ( rc < 0 )
break;
+ if ( rc == __HYPERVISOR_memory_op )
+ hypercall_xlat_continuation(NULL, 0x2, nat, arg);
+
XLAT_pod_target(&cmp, nat);
if ( copy_to_guest(arg, &cmp, 1) )
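The preemption pattern this patch introduces — do a bounded amount of work, then return -EAGAIN so the operation is re-issued as a hypercall continuation — can be sketched in Python (names and the fixed per-call budget are illustrative; Xen uses hypercall_preempt_check() and hypercall_create_continuation() instead):

```python
EAGAIN = -11  # stand-in for Xen's -EAGAIN

def set_cache_target(current, target, budget):
    """Move `current` toward `target` one page at a time, but stop
    after `budget` steps: return (EAGAIN, progress) so the caller can
    re-issue the call, instead of holding the CPU long enough for
    dom0 to report soft lockups."""
    while current != target:
        current += 1 if current < target else -1  # allocate or free one page
        budget -= 1
        if budget == 0 and current != target:     # preempt-check stand-in
            return EAGAIN, current
    return 0, current

def run_to_completion(current, target, budget):
    """Caller side: the continuation loop re-invoking the operation."""
    rc = EAGAIN
    while rc == EAGAIN:
        rc, current = set_cache_target(current, target, budget)
    return current
```

The same routine serves both growing and shrinking the cache, which is why the real patch threads a `preemptible` flag through p2m_pod_set_cache_target() and only allows -EAGAIN on the preemptible path.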
++++++ 22781-pod-hap-logdirty.patch ++++++
--- /var/tmp/diff_new_pack.ZKlwWp/_old 2011-05-10 11:08:11.000000000 +0200
+++ /var/tmp/diff_new_pack.ZKlwWp/_new 2011-05-10 11:08:11.000000000 +0200
@@ -18,7 +18,7 @@
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
-@@ -1064,14 +1064,22 @@ p2m_pod_demand_populate(struct domain *d
+@@ -1076,14 +1076,22 @@ p2m_pod_demand_populate(struct domain *d
if ( unlikely(d->is_dying) )
goto out_fail;
@@ -49,7 +49,7 @@
/* Keep track of the highest gfn demand-populated by a guest fault */
if ( q == p2m_guest && gfn > p2md->pod.max_guest )
-@@ -1098,7 +1106,10 @@ p2m_pod_demand_populate(struct domain *d
+@@ -1110,7 +1118,10 @@ p2m_pod_demand_populate(struct domain *d
set_p2m_entry(d, gfn_aligned, mfn, order, p2m_ram_rw);
for( i = 0 ; i < (1UL << order) ; i++ )
++++++ 22789-i386-no-x2apic.patch ++++++
--- /var/tmp/diff_new_pack.ZKlwWp/_old 2011-05-10 11:08:12.000000000 +0200
+++ /var/tmp/diff_new_pack.ZKlwWp/_new 2011-05-10 11:08:12.000000000 +0200
@@ -17,7 +17,20 @@
--- a/xen/arch/x86/apic.c
+++ b/xen/arch/x86/apic.c
-@@ -958,6 +958,10 @@ void x2apic_setup(void)
+@@ -67,10 +67,12 @@ static int enable_local_apic __initdata
+ */
+ int apic_verbosity;
+
++#ifndef __i386__
+ static int opt_x2apic = 1;
+ boolean_param("x2apic", opt_x2apic);
+
+ int x2apic_enabled __read_mostly = 0;
++#endif
+ int directed_eoi_enabled __read_mostly = 0;
+
+ /*
+@@ -958,6 +960,10 @@ void x2apic_setup(void)
if ( !cpu_has_x2apic )
return;
@@ -28,7 +41,7 @@
if ( !opt_x2apic )
{
if ( !x2apic_enabled )
-@@ -1019,6 +1023,7 @@ restore_out:
+@@ -1019,6 +1025,7 @@ restore_out:
unmask_8259A();
out:
@@ -76,3 +89,35 @@
+ clear_bit(X86_FEATURE_X2APIC, boot_cpu_data.x86_capability);
+#endif
}
+--- a/xen/drivers/passthrough/vtd/intremap.c
++++ b/xen/drivers/passthrough/vtd/intremap.c
+@@ -810,6 +810,7 @@ out:
+ spin_unlock_irqrestore(&iommu->register_lock, flags);
+ }
+
++#ifndef __i386__
+ /*
+ * This function is used to enable Interrutp remapping when
+ * enable x2apic
+@@ -866,6 +867,7 @@ int iommu_enable_IR(void)
+
+ return 0;
+ }
++#endif
+
+ /*
+ * This function is used to disable Interrutp remapping when
+--- a/xen/include/asm-x86/apic.h
++++ b/xen/include/asm-x86/apic.h
+@@ -22,7 +22,11 @@
+ #define IO_APIC_REDIR_DEST_PHYSICAL 0x00000
+
+ extern int apic_verbosity;
++#ifdef __i386__
++#define x2apic_enabled 0
++#else
+ extern int x2apic_enabled;
++#endif
+ extern int directed_eoi_enabled;
+
+ void check_x2apic_preenabled(void);
++++++ 22872-amd-iommu-pci-reattach.patch ++++++
# HG changeset patch
# User Keir Fraser
# Date 1297011241 0
# Node ID cba9a84d32fbf7d2d9f6ddd1fc1ab7dde8e290c1
# Parent 23f60ba52fffccd3916b643e0654209117765908
amd iommu: Fix a xen crash after pci-attach
pci-detach triggers IO page table deallocation if the last passthru
device has been removed from the pdev list, and this will result in a
BUG on amd systems at the next pci-attach. This patch fixes this issue.
Signed-off-by: Wei Wang
--- a/xen/drivers/passthrough/amd/pci_amd_iommu.c
+++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c
@@ -285,6 +285,7 @@ static int reassign_device( struct domai
struct pci_dev *pdev;
struct amd_iommu *iommu;
int bdf;
+ struct hvm_iommu *t = domain_hvm_iommu(target);
ASSERT(spin_is_locked(&pcidevs_lock));
pdev = pci_get_pdev_by_domain(source, bus, devfn);
@@ -306,6 +307,11 @@ static int reassign_device( struct domai
list_move(&pdev->domain_list, &target->arch.pdev_list);
pdev->domain = target;
+ /* IO page tables might be destroyed after pci-detach the last device
+ * In this case, we have to re-allocate root table for next pci-attach.*/
+ if ( t->root_table == NULL )
+ allocate_domain_resources(t);
+
amd_iommu_setup_domain_device(target, iommu, bdf);
AMD_IOMMU_DEBUG("reassign %x:%x.%x domain %d -> domain %d\n",
bus, PCI_SLOT(devfn), PCI_FUNC(devfn),
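The shape of the fix — lazily rebuilding the target domain's root table on reattach instead of hitting a BUG — in a minimal Python sketch (the class and the allocation stand-in are illustrative, not Xen's actual data structures):

```python
class Domain:
    def __init__(self, name):
        self.name = name
        self.devices = []
        self.root_table = None   # IOMMU root page table; None means freed

def reassign_device(source, target, dev):
    """Move a passthrough device from `source` to `target`."""
    source.devices.remove(dev)
    if not source.devices:
        source.root_table = None  # detaching the last device frees the tables
    # The fix: the target's tables may have been freed earlier, so
    # re-allocate the root table (allocate_domain_resources() analogue)
    # before setting up the device.
    if target.root_table is None:
        target.root_table = {}
    target.devices.append(dev)
```

Before the patch, the second reassignment back to a domain whose tables had been torn down would find `root_table == NULL` and crash.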
++++++ 22873-svm-sr-32bit-sysenter-msrs.patch ++++++
# HG changeset patch
# User Keir Fraser
# Date 1297011789 0
# Node ID 1861627620710a18f21b38c5daf417c3864b4d15
# Parent cba9a84d32fbf7d2d9f6ddd1fc1ab7dde8e290c1
hvm amd: Fix 32bit guest VM save/restore issues associated with SYSENTER MSRs
This patch turns on SYSENTER MSR interception for 32bit guest VMs on
AMD CPUs. With it, hvm_svm.guest_sysenter_xx fields always contain the
canonical version of SYSENTER MSRs and are used in guest save/restore.
The data fields in VMCB save area are updated as necessary.
Reported-by: James Harper
Signed-off-by: Wei Huang
--- a/xen/arch/x86/hvm/svm/svm.c
+++ b/xen/arch/x86/hvm/svm/svm.c
@@ -254,9 +254,10 @@ static int svm_vmcb_restore(struct vcpu
hvm_update_guest_cr(v, 2);
hvm_update_guest_cr(v, 4);
- v->arch.hvm_svm.guest_sysenter_cs = c->sysenter_cs;
- v->arch.hvm_svm.guest_sysenter_esp = c->sysenter_esp;
- v->arch.hvm_svm.guest_sysenter_eip = c->sysenter_eip;
+ /* Load sysenter MSRs into both VMCB save area and VCPU fields. */
+ vmcb->sysenter_cs = v->arch.hvm_svm.guest_sysenter_cs = c->sysenter_cs;
+ vmcb->sysenter_esp = v->arch.hvm_svm.guest_sysenter_esp = c->sysenter_esp;
+ vmcb->sysenter_eip = v->arch.hvm_svm.guest_sysenter_eip = c->sysenter_eip;
if ( paging_mode_hap(v->domain) )
{
@@ -452,14 +453,6 @@ static void svm_update_guest_efer(struct
vmcb->efer = (v->arch.hvm_vcpu.guest_efer | EFER_SVME) & ~EFER_LME;
if ( lma )
vmcb->efer |= EFER_LME;
-
- /*
- * In legacy mode (EFER.LMA=0) we natively support SYSENTER/SYSEXIT with
- * no need for MSR intercepts. When EFER.LMA=1 we must trap and emulate.
- */
- svm_intercept_msr(v, MSR_IA32_SYSENTER_CS, lma);
- svm_intercept_msr(v, MSR_IA32_SYSENTER_ESP, lma);
- svm_intercept_msr(v, MSR_IA32_SYSENTER_EIP, lma);
}
static void svm_sync_vmcb(struct vcpu *v)
@@ -1125,6 +1118,21 @@ static int svm_msr_write_intercept(struc
u32 ecx = regs->ecx;
struct vcpu *v = current;
struct vmcb_struct *vmcb = v->arch.hvm_svm.vmcb;
+ int sync = 0;
+
+ switch ( ecx )
+ {
+ case MSR_IA32_SYSENTER_CS:
+ case MSR_IA32_SYSENTER_ESP:
+ case MSR_IA32_SYSENTER_EIP:
+ sync = 1;
+ break;
+ default:
+ break;
+ }
+
+ if ( sync )
+ svm_sync_vmcb(v);
msr_content = (u32)regs->eax | ((u64)regs->edx << 32);
@@ -1136,13 +1144,13 @@ static int svm_msr_write_intercept(struc
goto gpf;
case MSR_IA32_SYSENTER_CS:
- v->arch.hvm_svm.guest_sysenter_cs = msr_content;
+ vmcb->sysenter_cs = v->arch.hvm_svm.guest_sysenter_cs = msr_content;
break;
case MSR_IA32_SYSENTER_ESP:
- v->arch.hvm_svm.guest_sysenter_esp = msr_content;
+ vmcb->sysenter_esp = v->arch.hvm_svm.guest_sysenter_esp = msr_content;
break;
case MSR_IA32_SYSENTER_EIP:
- v->arch.hvm_svm.guest_sysenter_eip = msr_content;
+ vmcb->sysenter_eip = v->arch.hvm_svm.guest_sysenter_eip = msr_content;
break;
case MSR_IA32_DEBUGCTLMSR:
@@ -1190,6 +1198,9 @@ static int svm_msr_write_intercept(struc
break;
}
+ if ( sync )
+ svm_vmload(vmcb);
+
return X86EMUL_OKAY;
gpf:
++++++ 22879-hvm-no-self-set-mem-type.patch ++++++
# HG changeset patch
# User Tim Deegan
# Date 1297071599 0
# Node ID 098c8a6483c9140f82f8d39ddb5e2b7d6e394151
# Parent 7ada6faef565bd8f676ddfaff9c568ca592ee5be
x86/hvm: don't let domains call HVMOP_set_mem_type on themselves.
Signed-off-by: Tim Deegan
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -3043,6 +3043,10 @@ long do_hvm_op(unsigned long op, XEN_GUE
if ( rc != 0 )
return rc;
+ rc = -EPERM;
+ if ( d == current->domain )
+ goto param_fail4;
+
rc = -EINVAL;
if ( !is_hvm_domain(d) )
goto param_fail4;
++++++ 22899-x86-tighten-msr-permissions.patch ++++++
# HG changeset patch
# User Keir Fraser
# Date 1294221435 0
# Node ID 39194f457534e07e5d5cc54376c4df28e0acb63c
# Parent 41a259d7a33dfbea6296cbeaae178823e09b91db
relax vCPU pinned checks
Both writing of certain MSRs and VCPUOP_get_physid also make sense
for dynamically (perhaps temporarily) pinned vcpus.
A couple of other MSR writes (MSR_K8_HWCR, MSR_AMD64_NB_CFG,
MSR_FAM10H_MMIO_CONF_BASE) would likely also make sense to be
restricted by an is_pinned() check, possibly along with some MSR
reads.
Signed-off-by: Jan Beulich
# HG changeset patch
# User Keir Fraser
# Date 1297347563 0
# Node ID 5b18a72d292a066d1c2b9fff7e35fc1230cdec85
# Parent 332c1f73a594f6c17d9c252c4efc16e3b59a64ba
x86: tighten conditions under which writing certain MSRs is permitted
Writing MSRs that control physical-CPU aspects is generally pointless
(and possibly dangerous) when the writer isn't sufficiently aware
that it's running virtualized.
Signed-off-by: Jan Beulich
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -1005,7 +1005,7 @@ arch_do_vcpu_op(
struct vcpu_get_physid cpu_id;
rc = -EINVAL;
- if ( !v->domain->is_pinned )
+ if ( !is_pinned_vcpu(v) )
break;
cpu_id.phys_id =
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -2252,7 +2252,7 @@ static int emulate_privileged_op(struct
if ( boot_cpu_data.x86_vendor != X86_VENDOR_AMD ||
boot_cpu_data.x86 < 0x10 || boot_cpu_data.x86 > 0x17 )
goto fail;
- if ( !IS_PRIV(v->domain) )
+ if ( !IS_PRIV(v->domain) || !is_pinned_vcpu(v) )
break;
if ( (rdmsr_safe(MSR_AMD64_NB_CFG, l, h) != 0) ||
(eax != l) ||
@@ -2265,7 +2265,7 @@ static int emulate_privileged_op(struct
if ( boot_cpu_data.x86_vendor != X86_VENDOR_AMD ||
boot_cpu_data.x86 < 0x10 || boot_cpu_data.x86 > 0x17 )
goto fail;
- if ( !IS_PRIV(v->domain) )
+ if ( !IS_PRIV(v->domain) || !is_pinned_vcpu(v) )
break;
if ( (rdmsr_safe(MSR_FAM10H_MMIO_CONF_BASE, l, h) != 0) )
goto fail;
@@ -2287,6 +2287,8 @@ static int emulate_privileged_op(struct
case MSR_IA32_UCODE_REV:
if ( boot_cpu_data.x86_vendor != X86_VENDOR_INTEL )
goto fail;
+ if ( !IS_PRIV(v->domain) || !is_pinned_vcpu(v) )
+ break;
if ( rdmsr_safe(regs->ecx, l, h) )
goto fail;
if ( l | h )
@@ -2294,7 +2296,7 @@ static int emulate_privileged_op(struct
break;
case MSR_IA32_MISC_ENABLE:
if ( rdmsr_safe(regs->ecx, l, h) )
- goto invalid;
+ goto fail;
l = guest_misc_enable(l);
if ( eax != l || edx != h )
goto invalid;
@@ -2320,7 +2322,7 @@ static int emulate_privileged_op(struct
case MSR_IA32_THERM_CONTROL:
if ( boot_cpu_data.x86_vendor != X86_VENDOR_INTEL )
goto fail;
- if ( (v->domain->domain_id != 0) || !v->domain->is_pinned )
+ if ( !IS_PRIV(v->domain) || !is_pinned_vcpu(v) )
break;
if ( wrmsr_safe(regs->ecx, eax, edx) != 0 )
goto fail;
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -593,6 +593,8 @@ uint64_t get_cpu_idle_time(unsigned int
#define is_hvm_domain(d) ((d)->is_hvm)
#define is_hvm_vcpu(v) (is_hvm_domain(v->domain))
+#define is_pinned_vcpu(v) ((v)->domain->is_pinned || \
+ cpus_weight((v)->cpu_affinity) == 1)
#define need_iommu(d) ((d)->need_iommu)
void set_vcpu_migration_delay(unsigned int delay);
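The relaxed predicate added above treats a vCPU as pinned either when the whole domain is pinned or when its affinity mask allows exactly one CPU. That logic can be sketched outside Xen as follows; the structures are simplified stand-ins for Xen's `struct vcpu`/`struct domain`, and a 64-bit word stands in for the cpumask:

```c
#include <assert.h>
#include <stdint.h>

/* Simplified stand-ins for Xen's domain/vcpu structures. */
struct domain { int is_pinned; };
struct vcpu  { struct domain *domain; uint64_t cpu_affinity; };

/* Population count of the affinity mask (cpus_weight() analogue). */
static int cpus_weight(uint64_t mask)
{
    int n = 0;
    while (mask) { mask &= mask - 1; n++; }
    return n;
}

/* A vCPU counts as pinned if its domain is pinned or if its
 * affinity mask permits exactly one CPU. */
static int is_pinned_vcpu(const struct vcpu *v)
{
    return v->domain->is_pinned || cpus_weight(v->cpu_affinity) == 1;
}
```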
++++++ 22915-x86-hpet-msi-s3.patch ++++++
# HG changeset patch
# User Wei Gang
# Date 1297680072 0
# Node ID af84691a6cf9423a445f471af02b36b76ddf5314
# Parent 218b5fa834aa91b83e83d18d4b88e53b9788c2e3
x86: Fix S3 resume for HPET MSI IRQ case
Jan Beulich found that for S3 resume on platforms without the ARAT
feature but with an MSI-capable HPET, request_irq() will be called in
hpet_setup_msi_irq() for an irq that is already set up (no
release_irq() is called during S3 suspend), so resume always falls
back to using legacy_hpet_event.
Fix it by conditionally calling request_irq() for 4.1. The plan, as
Jan suggested, is to split the S3 resume path from the boot path
after 4.1.
Signed-off-by: Wei Gang
Acked-by: Jan Beulich
--- a/xen/arch/x86/hpet.c
+++ b/xen/arch/x86/hpet.c
@@ -339,12 +339,20 @@ static int hpet_setup_msi_irq(unsigned i
int ret;
struct msi_msg msg;
struct hpet_event_channel *ch = &hpet_events[irq_to_channel(irq)];
+ irq_desc_t *desc = irq_to_desc(irq);
- irq_desc[irq].handler = &hpet_msi_type;
- ret = request_irq(irq, hpet_interrupt_handler,
- 0, "HPET", ch);
- if ( ret < 0 )
- return ret;
+ if ( desc->handler == &no_irq_type )
+ {
+ desc->handler = &hpet_msi_type;
+ ret = request_irq(irq, hpet_interrupt_handler,
+ 0, "HPET", ch);
+ if ( ret < 0 )
+ return ret;
+ }
+ else if ( desc->handler != &hpet_msi_type )
+ {
+ return -EINVAL;
+ }
msi_compose_msg(NULL, irq, &msg);
hpet_msi_write(irq, &msg);
++++++ 22947-amd-k8-mce-init-all-msrs.patch ++++++
# HG changeset patch
# User Keir Fraser
# Date 1298539999 0
# Node ID 598d1fc295b6e88c6ff226b461553eaea61e2043
# Parent 6d451b9cbeadb922ec7ebcb8ec916a089a341039
amd-k8-mce: remove a stray break statement
This was a leftover of converting from a switch to an if/else
somewhere between 3.4 and 4.0.
It also looks suspicious that MCEQUIRK_K7_BANK0 is not actually used
anywhere. Perhaps amd_k7_mcheck_init() and amd_k8_mcheck_init() were
intended to get (partially) folded?
Signed-off-by: Jan Beulich
--- a/xen/arch/x86/cpu/mcheck/amd_k8.c
+++ b/xen/arch/x86/cpu/mcheck/amd_k8.c
@@ -97,7 +97,6 @@ enum mcheck_type amd_k8_mcheck_init(stru
/* Enable error reporting of all errors */
wrmsrl(MSR_IA32_MC0_CTL + 4 * i, 0xffffffffffffffffULL);
wrmsrl(MSR_IA32_MC0_STATUS + 4 * i, 0x0ULL);
- break;
}
}
++++++ 22949-x86-nmi-pci-serr.patch ++++++
# HG changeset patch
# User Stefano Stabellini
# Date 1298633385 0
# Node ID 54fe1011f86be2ffeaba3b6e883392ea56bbb750
# Parent 2d35823a86e7fbab004125591e56cd14aeaffcb3
x86 NMI: continue in case of PCI SERR errors
Memory parity errors are only reported on the IBM PC-AT; newer
machines use bit 7 (0x80) of port 0x61 for PCI SERR, while memory
errors are usually reported via MCE.
Rename the memory parity error handler to pci serr handler and
print a warning and continue instead of crashing.
Signed-off-by: Stefano Stabellini
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -3041,23 +3041,12 @@ static void nmi_dom0_report(unsigned int
send_guest_trap(d, 0, TRAP_nmi);
}
-static void mem_parity_error(struct cpu_user_regs *regs)
+static void pci_serr_error(struct cpu_user_regs *regs)
{
- switch ( opt_nmi[0] )
- {
- case 'd': /* 'dom0' */
- nmi_dom0_report(_XEN_NMIREASON_parity_error);
- case 'i': /* 'ignore' */
- break;
- default: /* 'fatal' */
- console_force_unlock();
- printk("\n\nNMI - MEMORY ERROR\n");
- fatal_trap(TRAP_nmi, regs);
- }
+ console_force_unlock();
+ printk("\n\nNMI - PCI system error (SERR)\n");
- outb((inb(0x61) & 0x0f) | 0x04, 0x61); /* clear-and-disable parity check */
- mdelay(1);
- outb((inb(0x61) & 0x0b) | 0x00, 0x61); /* enable parity check */
+ outb((inb(0x61) & 0x0f) | 0x04, 0x61); /* clear-and-disable the PCI SERR error line. */
}
static void io_check_error(struct cpu_user_regs *regs)
@@ -3120,7 +3109,7 @@ asmlinkage void do_nmi(struct cpu_user_r
{
reason = inb(0x61);
if ( reason & 0x80 )
- mem_parity_error(regs);
+ pci_serr_error(regs);
else if ( reason & 0x40 )
io_check_error(regs);
else if ( !nmi_watchdog )
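The dispatch on the port 0x61 status bits that the hunk above modifies can be illustrated with a small stand-alone decoder (the enum and function names here are illustrative, not Xen's):

```c
#include <assert.h>

enum nmi_cause { NMI_PCI_SERR, NMI_IO_CHECK, NMI_UNKNOWN };

/* Port 0x61 (NMI status/control): bit 7 = PCI SERR, bit 6 = IOCHK.
 * Memory-parity NMIs only existed on the IBM PC-AT; on newer
 * machines bit 7 reports PCI SERR instead, and real memory errors
 * arrive via MCE. */
static enum nmi_cause decode_nmi_reason(unsigned char reason)
{
    if (reason & 0x80)
        return NMI_PCI_SERR;
    if (reason & 0x40)
        return NMI_IO_CHECK;
    return NMI_UNKNOWN;
}
```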
++++++ 22992-x86-fiop-m32i.patch ++++++
# HG changeset patch
# User Keir Fraser
# Date 1299600613 0
# Node ID e93392bd6b66c19c7209a3dbf20177cdc7ef6d0d
# Parent f071d8e9f744acad4af6ff7fe915878106c0e8d8
x86_emulate: Fix emulation of FIMUL m32i.
Need to emit assembler instruction fimull not fimul/fimuls.
Signed-off-by: Keir Fraser
# HG changeset patch
# User Keir Fraser
# Date 1299600895 0
# Node ID 22eb31eb688ad156f0004f669b389250b5e75bfb
# Parent e93392bd6b66c19c7209a3dbf20177cdc7ef6d0d
x86_emulate: FPU 0xda instructions have a 32-bit memory operand, not 64-bit.
Signed-off-by: Keir Fraser
--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_emulate.c
@@ -2665,35 +2665,35 @@ x86_emulate(
break;
default:
fail_if(modrm >= 0xc0);
- ea.bytes = 8;
+ ea.bytes = 4;
src = ea;
if ( (rc = ops->read(src.mem.seg, src.mem.off, &src.val,
src.bytes, ctxt)) != 0 )
goto done;
switch ( modrm_reg & 7 )
{
- case 0: /* fiadd m64i */
+ case 0: /* fiadd m32i */
emulate_fpu_insn_memsrc("fiaddl", src.val);
break;
- case 1: /* fimul m64i */
- emulate_fpu_insn_memsrc("fimul", src.val);
+ case 1: /* fimul m32i */
+ emulate_fpu_insn_memsrc("fimull", src.val);
break;
- case 2: /* ficom m64i */
+ case 2: /* ficom m32i */
emulate_fpu_insn_memsrc("ficoml", src.val);
break;
- case 3: /* ficomp m64i */
+ case 3: /* ficomp m32i */
emulate_fpu_insn_memsrc("ficompl", src.val);
break;
- case 4: /* fisub m64i */
+ case 4: /* fisub m32i */
emulate_fpu_insn_memsrc("fisubl", src.val);
break;
- case 5: /* fisubr m64i */
+ case 5: /* fisubr m32i */
emulate_fpu_insn_memsrc("fisubrl", src.val);
break;
- case 6: /* fidiv m64i */
+ case 6: /* fidiv m32i */
emulate_fpu_insn_memsrc("fidivl", src.val);
break;
- case 7: /* fidivr m64i */
+ case 7: /* fidivr m32i */
emulate_fpu_insn_memsrc("fidivrl", src.val);
break;
default:
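The fix above boils down to this: the memory forms of the 0xDA FPU group take a 32-bit integer operand, so `ea.bytes` must be 4 and every emitted AT&T mnemonic needs the `l` suffix (including `fimull`, which was previously emitted without it). A hypothetical lookup table over the ModRM reg field:

```c
#include <assert.h>
#include <string.h>

/* 0xDA group, memory forms: 32-bit integer operand, so each AT&T
 * mnemonic carries the 'l' (32-bit) suffix. Index is modrm_reg & 7. */
static const char *fpu_da_mnemonic(unsigned modrm_reg)
{
    static const char *const tab[8] = {
        "fiaddl", "fimull", "ficoml", "ficompl",
        "fisubl", "fisubrl", "fidivl", "fidivrl",
    };
    return tab[modrm_reg & 7];
}
```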
++++++ 22996-x86-alloc_xen_pagetable-no-BUG.patch ++++++
References: bnc#675363
# HG changeset patch
# User Jan Beulich
# Date 1299687299 0
# Node ID 1eeccafe904216589da600cd3e890021fbb3f951
# Parent b972a7f493252530c5ffdcf9b7e2c348f8a4ac32
x86: don't BUG() post-boot in alloc_xen_pagetable()
Instead, propagate the condition to the caller, all of which also get
adjusted to check for that situation.
Signed-off-by: Jan Beulich
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -4863,8 +4863,11 @@ int map_pages_to_xen(
while ( nr_mfns != 0 )
{
#ifdef __x86_64__
- l3_pgentry_t *pl3e = virt_to_xen_l3e(virt);
- l3_pgentry_t ol3e = *pl3e;
+ l3_pgentry_t ol3e, *pl3e = virt_to_xen_l3e(virt);
+
+ if ( !pl3e )
+ return -ENOMEM;
+ ol3e = *pl3e;
if ( cpu_has_page1gb &&
!(((virt >> PAGE_SHIFT) | mfn) &
@@ -4984,6 +4987,8 @@ int map_pages_to_xen(
#endif
pl2e = virt_to_xen_l2e(virt);
+ if ( !pl2e )
+ return -ENOMEM;
if ( ((((virt>>PAGE_SHIFT) | mfn) & ((1<<PAGETABLE_ORDER)-1)) == 0) &&
(nr_mfns >= (1<<PAGETABLE_ORDER)) )
# HG changeset patch
# User Jan Beulich
# Date 1299687336 0
# Node ID 5f28dcea13555f7ab948c9cb95de3e79e0fbfc4b
# Parent 1eeccafe904216589da600cd3e890021fbb3f951
x86: run-time callers of map_pages_to_xen() must check for errors
Again, (out-of-memory) errors must not cause hypervisor crashes, and
hence ought to be propagated.
This also adjusts the cache attribute changing loop in
get_page_from_l1e() to not go through an unnecessary iteration. While
this could be considered mere cleanup, it is actually a requirement
for the subsequent now necessary error recovery path.
Also make a few functions static, easing the check for potential
callers needing adjustment.
Signed-off-by: Jan Beulich
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -744,8 +744,9 @@ int is_iomem_page(unsigned long mfn)
return (page_get_owner(page) == dom_io);
}
-static void update_xen_mappings(unsigned long mfn, unsigned long cacheattr)
+static int update_xen_mappings(unsigned long mfn, unsigned long cacheattr)
{
+ int err = 0;
#ifdef __x86_64__
bool_t alias = mfn >= PFN_DOWN(xen_phys_start) &&
mfn < PFN_UP(xen_phys_start + (unsigned long)_end - XEN_VIRT_START);
@@ -753,12 +754,14 @@ static void update_xen_mappings(unsigned
XEN_VIRT_START + ((mfn - PFN_DOWN(xen_phys_start)) << PAGE_SHIFT);
if ( unlikely(alias) && cacheattr )
- map_pages_to_xen(xen_va, mfn, 1, 0);
- map_pages_to_xen((unsigned long)mfn_to_virt(mfn), mfn, 1,
+ err = map_pages_to_xen(xen_va, mfn, 1, 0);
+ if ( !err )
+ err = map_pages_to_xen((unsigned long)mfn_to_virt(mfn), mfn, 1,
PAGE_HYPERVISOR | cacheattr_to_pte_flags(cacheattr));
- if ( unlikely(alias) && !cacheattr )
- map_pages_to_xen(xen_va, mfn, 1, PAGE_HYPERVISOR);
+ if ( unlikely(alias) && !cacheattr && !err )
+ err = map_pages_to_xen(xen_va, mfn, 1, PAGE_HYPERVISOR);
#endif
+ return err;
}
int
@@ -770,6 +773,7 @@ get_page_from_l1e(
uint32_t l1f = l1e_get_flags(l1e);
struct vcpu *curr = current;
struct domain *real_pg_owner;
+ bool_t write;
if ( !(l1f & _PAGE_PRESENT) )
return 1;
@@ -820,9 +824,9 @@ get_page_from_l1e(
* contribute to writeable mapping refcounts. (This allows the
* qemu-dm helper process in dom0 to map the domain's memory without
* messing up the count of "real" writable mappings.) */
- if ( (l1f & _PAGE_RW) &&
- ((l1e_owner == pg_owner) || !paging_mode_external(pg_owner)) &&
- !get_page_type(page, PGT_writable_page) )
+ write = (l1f & _PAGE_RW) &&
+ ((l1e_owner == pg_owner) || !paging_mode_external(pg_owner));
+ if ( write && !get_page_type(page, PGT_writable_page) )
goto could_not_pin;
if ( pte_flags_to_cacheattr(l1f) !=
@@ -833,22 +837,36 @@ get_page_from_l1e(
if ( is_xen_heap_page(page) )
{
- if ( (l1f & _PAGE_RW) &&
- ((l1e_owner == pg_owner) || !paging_mode_external(pg_owner)) )
+ if ( write )
put_page_type(page);
put_page(page);
MEM_LOG("Attempt to change cache attributes of Xen heap page");
return 0;
}
- while ( ((y & PGC_cacheattr_mask) >> PGC_cacheattr_base) != cacheattr )
- {
+ do {
x = y;
nx = (x & ~PGC_cacheattr_mask) | (cacheattr << PGC_cacheattr_base);
- y = cmpxchg(&page->count_info, x, nx);
- }
+ } while ( (y = cmpxchg(&page->count_info, x, nx)) != x );
+
+ if ( unlikely(update_xen_mappings(mfn, cacheattr) != 0) )
+ {
+ cacheattr = y & PGC_cacheattr_mask;
+ do {
+ x = y;
+ nx = (x & ~PGC_cacheattr_mask) | cacheattr;
+ } while ( (y = cmpxchg(&page->count_info, x, nx)) != x );
+
+ if ( write )
+ put_page_type(page);
+ put_page(page);
- update_xen_mappings(mfn, cacheattr);
+ MEM_LOG("Error updating mappings for mfn %lx (pfn %lx,"
+ " from L1 entry %" PRIpte ") for %d",
+ mfn, get_gpfn_from_mfn(mfn),
+ l1e_get_intpte(l1e), l1e_owner->domain_id);
+ return 0;
+ }
}
return 1;
@@ -1980,6 +1998,21 @@ static int mod_l4_entry(l4_pgentry_t *pl
#endif
+static int cleanup_page_cacheattr(struct page_info *page)
+{
+ uint32_t cacheattr =
+ (page->count_info & PGC_cacheattr_mask) >> PGC_cacheattr_base;
+
+ if ( likely(cacheattr == 0) )
+ return 0;
+
+ page->count_info &= ~PGC_cacheattr_mask;
+
+ BUG_ON(is_xen_heap_page(page));
+
+ return update_xen_mappings(page_to_mfn(page), 0);
+}
+
void put_page(struct page_info *page)
{
unsigned long nx, x, y = page->count_info;
@@ -1993,8 +2026,10 @@ void put_page(struct page_info *page)
if ( unlikely((nx & PGC_count_mask) == 0) )
{
- cleanup_page_cacheattr(page);
- free_domheap_page(page);
+ if ( cleanup_page_cacheattr(page) == 0 )
+ free_domheap_page(page);
+ else
+ MEM_LOG("Leaking pfn %lx", page_to_mfn(page));
}
}
@@ -2446,21 +2481,6 @@ int get_page_type_preemptible(struct pag
return __get_page_type(page, type, 1);
}
-void cleanup_page_cacheattr(struct page_info *page)
-{
- uint32_t cacheattr =
- (page->count_info & PGC_cacheattr_mask) >> PGC_cacheattr_base;
-
- if ( likely(cacheattr == 0) )
- return;
-
- page->count_info &= ~PGC_cacheattr_mask;
-
- BUG_ON(is_xen_heap_page(page));
-
- update_xen_mappings(page_to_mfn(page), 0);
-}
-
int new_guest_cr3(unsigned long mfn)
{
--- a/xen/arch/x86/x86_64/mm.c
+++ b/xen/arch/x86/x86_64/mm.c
@@ -429,6 +429,7 @@ static int setup_compat_m2p_table(struct
l3_pgentry_t *l3_ro_mpt = NULL;
l2_pgentry_t *l2_ro_mpt = NULL;
struct page_info *l1_pg;
+ int err = 0;
smap = info->spfn & (~((1UL << (L2_PAGETABLE_SHIFT - 2)) -1));
@@ -479,24 +480,25 @@ static int setup_compat_m2p_table(struct
memflags = MEMF_node(phys_to_nid(i << PAGE_SHIFT));
l1_pg = mfn_to_page(alloc_hotadd_mfn(info));
- map_pages_to_xen(rwva,
- page_to_mfn(l1_pg),
- 1UL << PAGETABLE_ORDER,
- PAGE_HYPERVISOR);
+ err = map_pages_to_xen(rwva, page_to_mfn(l1_pg),
+ 1UL << PAGETABLE_ORDER,
+ PAGE_HYPERVISOR);
+ if ( err )
+ break;
memset((void *)rwva, 0x55, 1UL << L2_PAGETABLE_SHIFT);
/* NB. Cannot be GLOBAL as the ptes get copied into per-VM space. */
l2e_write(&l2_ro_mpt[l2_table_offset(va)], l2e_from_page(l1_pg, _PAGE_PSE|_PAGE_PRESENT));
}
#undef CNT
#undef MFN
- return 0;
+ return err;
}
/*
* Allocate and map the machine-to-phys table.
* The L3 for RO/RWRW MPT and the L2 for compatible MPT should be setup already
*/
-int setup_m2p_table(struct mem_hotadd_info *info)
+static int setup_m2p_table(struct mem_hotadd_info *info)
{
unsigned long i, va, smap, emap;
unsigned int n, memflags;
@@ -550,11 +552,13 @@ int setup_m2p_table(struct mem_hotadd_in
else
{
l1_pg = mfn_to_page(alloc_hotadd_mfn(info));
- map_pages_to_xen(
+ ret = map_pages_to_xen(
RDWR_MPT_VIRT_START + i * sizeof(unsigned long),
page_to_mfn(l1_pg),
1UL << PAGETABLE_ORDER,
PAGE_HYPERVISOR);
+ if ( ret )
+ goto error;
memset((void *)(RDWR_MPT_VIRT_START + i * sizeof(unsigned long)),
0x55, 1UL << L2_PAGETABLE_SHIFT);
@@ -891,13 +895,13 @@ void cleanup_frame_table(struct mem_hota
flush_tlb_all();
}
-/* Should we be paraniod failure in map_pages_to_xen? */
static int setup_frametable_chunk(void *start, void *end,
struct mem_hotadd_info *info)
{
unsigned long s = (unsigned long)start;
unsigned long e = (unsigned long)end;
unsigned long mfn;
+ int err;
ASSERT(!(s & ((1 << L2_PAGETABLE_SHIFT) - 1)));
ASSERT(!(e & ((1 << L2_PAGETABLE_SHIFT) - 1)));
@@ -905,14 +909,17 @@ static int setup_frametable_chunk(void *
for ( ; s < e; s += (1UL << L2_PAGETABLE_SHIFT))
{
mfn = alloc_hotadd_mfn(info);
- map_pages_to_xen(s, mfn, 1UL << PAGETABLE_ORDER, PAGE_HYPERVISOR);
+ err = map_pages_to_xen(s, mfn, 1UL << PAGETABLE_ORDER,
+ PAGE_HYPERVISOR);
+ if ( err )
+ return err;
}
memset(start, -1, s - (unsigned long)start);
return 0;
}
-int extend_frame_table(struct mem_hotadd_info *info)
+static int extend_frame_table(struct mem_hotadd_info *info)
{
unsigned long cidx, nidx, eidx, spfn, epfn;
@@ -933,12 +940,16 @@ int extend_frame_table(struct mem_hotadd
while ( cidx < eidx )
{
+ int err;
+
nidx = find_next_bit(pdx_group_valid, eidx, cidx);
if ( nidx >= eidx )
nidx = eidx;
- setup_frametable_chunk(pdx_to_page(cidx * PDX_GROUP_COUNT ),
+ err = setup_frametable_chunk(pdx_to_page(cidx * PDX_GROUP_COUNT ),
pdx_to_page(nidx * PDX_GROUP_COUNT),
info);
+ if ( err )
+ return err;
cidx = find_next_zero_bit(pdx_group_valid, eidx, nidx);
}
--- a/xen/include/asm-x86/mm.h
+++ b/xen/include/asm-x86/mm.h
@@ -301,8 +301,6 @@ int free_page_type(struct page_info *pag
int preemptible);
int _shadow_mode_refcounts(struct domain *d);
-void cleanup_page_cacheattr(struct page_info *page);
-
int is_iomem_page(unsigned long mfn);
struct domain *page_get_owner_and_reference(struct page_info *page);
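The reworked cache-attribute update above uses the canonical lock-free read-modify-write shape `do { x = y; ... } while ( (y = cmpxchg(&count_info, x, nx)) != x );`, which also makes the error-recovery rollback possible. A minimal C11 rendition of that pattern (the field width and shift are illustrative, not Xen's actual PGC_cacheattr layout):

```c
#include <assert.h>
#include <stdatomic.h>

#define CACHEATTR_BASE 9
#define CACHEATTR_MASK (7UL << CACHEATTR_BASE)

/* Atomically replace the cacheattr bitfield inside count_info,
 * retrying while other writers race with us; returns the old
 * (in-place, unshifted) field bits. */
static unsigned long set_cacheattr(_Atomic unsigned long *count_info,
                                   unsigned long cacheattr)
{
    unsigned long x, nx, y = atomic_load(count_info);

    do {
        x = y;  /* on CAS failure, y was refreshed to the current value */
        nx = (x & ~CACHEATTR_MASK) | (cacheattr << CACHEATTR_BASE);
    } while (!atomic_compare_exchange_weak(count_info, &y, nx));

    return x & CACHEATTR_MASK;
}
```

On failure `atomic_compare_exchange_weak` writes the freshly observed value back into `y`, so the loop recomputes `nx` from current state exactly like the `cmpxchg` loop in the patch.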
++++++ 23020-x86-cpuidle-ordering.patch ++++++
# HG changeset patch
# User Liu, Jinsong
# Date 1299782132 0
# Node ID 22cc047eb146e00667e62ed13f35005f145f20d5
# Parent c8947c24536a0cdc19c30ec3e435d82f85e38c4d
x86: Fix cpuidle bug
Before entering C3, disabling bus mastering / flushing the cache
should be the last step; after resuming from C3, re-enabling bus
mastering should be the first step.
Signed-off-by: Liu, Jinsong
Acked-by: Wei Gang
--- a/xen/arch/x86/acpi/cpu_idle.c
+++ b/xen/arch/x86/acpi/cpu_idle.c
@@ -339,6 +339,19 @@ static void acpi_processor_idle(void)
case ACPI_STATE_C3:
/*
+ * Before invoking C3, be aware that TSC/APIC timer may be
+ * stopped by H/W. Without carefully handling of TSC/APIC stop issues,
+ * deep C state can't work correctly.
+ */
+ /* preparing APIC stop */
+ lapic_timer_off();
+
+ /* Get start time (ticks) */
+ t1 = inl(pmtmr_ioport);
+ /* Trace cpu idle entry */
+ TRACE_4D(TRC_PM_IDLE_ENTRY, cx->idx, t1, exp, pred);
+
+ /*
* disable bus master
* bm_check implies we need ARB_DIS
* !bm_check implies we need cache flush
@@ -367,20 +380,18 @@ static void acpi_processor_idle(void)
ACPI_FLUSH_CPU_CACHE();
}
- /*
- * Before invoking C3, be aware that TSC/APIC timer may be
- * stopped by H/W. Without carefully handling of TSC/APIC stop issues,
- * deep C state can't work correctly.
- */
- /* preparing APIC stop */
- lapic_timer_off();
-
- /* Get start time (ticks) */
- t1 = inl(pmtmr_ioport);
- /* Trace cpu idle entry */
- TRACE_4D(TRC_PM_IDLE_ENTRY, cx->idx, t1, exp, pred);
/* Invoke C3 */
acpi_idle_do_entry(cx);
+
+ if ( power->flags.bm_check && power->flags.bm_control )
+ {
+ /* Enable bus master arbitration */
+ spin_lock(&c3_cpu_status.lock);
+ acpi_set_register(ACPI_BITREG_ARB_DISABLE, 0);
+ c3_cpu_status.count--;
+ spin_unlock(&c3_cpu_status.lock);
+ }
+
/* Get end time (ticks) */
t2 = inl(pmtmr_ioport);
@@ -391,15 +402,6 @@ static void acpi_processor_idle(void)
TRACE_6D(TRC_PM_IDLE_EXIT, cx->idx, t2,
irq_traced[0], irq_traced[1], irq_traced[2], irq_traced[3]);
- if ( power->flags.bm_check && power->flags.bm_control )
- {
- /* Enable bus master arbitration */
- spin_lock(&c3_cpu_status.lock);
- if ( c3_cpu_status.count-- == num_online_cpus() )
- acpi_set_register(ACPI_BITREG_ARB_DISABLE, 0);
- spin_unlock(&c3_cpu_status.lock);
- }
-
/* Re-enable interrupts */
local_irq_enable();
/* recovering APIC */
++++++ 23030-x86-hpet-init.patch ++++++
# HG changeset patch
# User Jan Beulich
# Date 1299935942 0
# Node ID 87aa1277eae02fbafab3da4276fb9f9c7a8bdf26
# Parent a8fee4ad3ad0650e7a5cc0fb253c6a0ada1ac583
x86/HPET: fix initialization order
At least the legacy path can enter its interrupt handler callout while
initialization is still in progress - that handler checks whether
->event_handler is non-NULL, and hence all other initialization must
happen before setting this field.
Do the same to the MSI initialization just in case (and to keep the
code in sync).
Signed-off-by: Jan Beulich
--- a/xen/arch/x86/hpet.c
+++ b/xen/arch/x86/hpet.c
@@ -575,8 +575,9 @@ void hpet_broadcast_init(void)
1000000000ul, 32);
hpet_events[i].shift = 32;
hpet_events[i].next_event = STIME_MAX;
- hpet_events[i].event_handler = handle_hpet_broadcast;
spin_lock_init(&hpet_events[i].lock);
+ wmb();
+ hpet_events[i].event_handler = handle_hpet_broadcast;
}
if ( num_hpets_used < num_possible_cpus() )
@@ -613,10 +614,11 @@ void hpet_broadcast_init(void)
legacy_hpet_event.mult = div_sc((unsigned long)hpet_rate, 1000000000ul, 32);
legacy_hpet_event.shift = 32;
legacy_hpet_event.next_event = STIME_MAX;
- legacy_hpet_event.event_handler = handle_hpet_broadcast;
legacy_hpet_event.idx = 0;
legacy_hpet_event.flags = 0;
spin_lock_init(&legacy_hpet_event.lock);
+ wmb();
+ legacy_hpet_event.event_handler = handle_hpet_broadcast;
for_each_possible_cpu(i)
per_cpu(cpu_bc_channel, i) = &legacy_hpet_event;
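The reordering above is the classic publish pattern: fully initialise the object, issue a write barrier, then set the field the interrupt handler tests. In C11 terms Xen's `wmb()` plays the role of a release store, paired with an acquire load on the reader side. A sketch with illustrative names:

```c
#include <assert.h>
#include <stdatomic.h>

typedef void (*handler_t)(void *);

struct event_channel {
    unsigned long mult, shift, next_event;
    _Atomic(handler_t) event_handler;  /* NULL until fully initialised */
};

static void handle_broadcast(void *ch) { (void)ch; }

/* Writer: initialise everything, then publish the handler last with
 * release semantics, so a reader that observes a non-NULL handler
 * also observes the initialised fields. */
static void channel_init(struct event_channel *ch)
{
    ch->mult = 123;
    ch->shift = 32;
    ch->next_event = ~0UL;
    atomic_store_explicit(&ch->event_handler, handle_broadcast,
                          memory_order_release);
}
```

The handler-side check (`if ( ch->event_handler ) ch->event_handler(ch);`) corresponds to the acquire load.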
++++++ 23034-x86-arch_set_info_guest-DoS.patch ++++++
References: bnc#679344
# HG changeset patch
# User Tim Deegan
# Date 1300121989 0
# Node ID c79aae866ad8397e129b5801f8f97f604743a7c2
# Parent 84bacd800bf88e37434d49547ad8224be46e2a52
x86_64: fix error checking in arch_set_info_guest()
Cannot specify user mode execution without specifying user-mode
pagetables.
Signed-off-by: Tim Deegan
Acked-by: Keir Fraser
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -780,6 +780,11 @@ int arch_set_info_guest(
v->arch.guest_table_user = pagetable_from_pfn(cr3_pfn);
}
+ else if ( !(flags & VGCF_in_kernel) )
+ {
+ destroy_gdt(v);
+ return -EINVAL;
+ }
}
else
{
++++++ 23039-csched-constrain-cpu.patch ++++++
# HG changeset patch
# User Jan Beulich
# Date 1300123162 0
# Node ID c40da47621d8cb06445e32aa87eba049b1aa5370
# Parent 39f5947b1576803b3617a8ab678d0273af25cb6d
_csched_cpu_pick(): don't return CPUs outside vCPU's affinity mask
This fixes a fairly blatant bug I introduced in c/s 20377:cff23354d026
- I wonder how this went unnoticed for so long.
Signed-off-by: Jan Beulich
--- a/xen/common/sched_credit.c
+++ b/xen/common/sched_credit.c
@@ -441,6 +441,7 @@ _csched_cpu_pick(struct vcpu *vc, bool_t
if ( ( (weight_cpu < weight_nxt) ^ sched_smt_power_savings )
&& (weight_cpu != weight_nxt) )
{
+ cpus_and(nxt_idlers, cpus, nxt_idlers);
cpu = cycle_cpu(CSCHED_PCPU(nxt)->idle_bias, nxt_idlers);
if ( commit )
CSCHED_PCPU(nxt)->idle_bias = cpu;
++++++ 23061-amd-iommu-resume.patch ++++++
# HG changeset patch
# User Jan Beulich
# Date 1300468552 0
# Node ID 12f7c7ac7f19e122fa83c16c8c6d9a6700ddc409
# Parent b59e98bc6ff12f0ea436fd1a2defecaaffbfffff
amd-iommu: remove a stray __init
This function is being called on the resume path.
Signed-off-by: Jan Beulich
--- a/xen/drivers/passthrough/amd/iommu_init.c
+++ b/xen/drivers/passthrough/amd/iommu_init.c
@@ -118,7 +118,7 @@ static void register_iommu_cmd_buffer_in
writel(entry, iommu->mmio_base+IOMMU_CMD_BUFFER_BASE_HIGH_OFFSET);
}
-static void __init register_iommu_event_log_in_mmio_space(struct amd_iommu *iommu)
+static void register_iommu_event_log_in_mmio_space(struct amd_iommu *iommu)
{
u64 addr_64, addr_lo, addr_hi;
u32 power_of2_entries;
++++++ 23103-x86-pirq-guest-eoi-check.patch ++++++
# HG changeset patch
# User Keir Fraser
# Date 1301132521 0
# Node ID 48dac730a93b27ff60a340564e9a7afd7f9385f4
# Parent 8f001d864fefac689b7662bc9979eaddf4fd6e9c
x86: __pirq_guest_eoi() must check it is called for a fully
guest-bound irq before accessing desc->action.
Signed-off-by: Keir Fraser
--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -1015,6 +1015,12 @@ static void __pirq_guest_eoi(struct doma
if ( desc == NULL )
return;
+ if ( !(desc->status & IRQ_GUEST) )
+ {
+ spin_unlock_irq(&desc->lock);
+ return;
+ }
+
action = (irq_guest_action_t *)desc->action;
irq = desc - irq_desc;
++++++ 23127-vtd-bios-settings.patch ++++++
# HG changeset patch
# User Allen Kay
# Date 1301755765 -3600
# Node ID 1046830079376a4b29fcad0cd037a834e808ed06
# Parent 89c23f58aa986092da0c9a7dfac1c41befbe1f3f
[VTD] check BIOS settings before enabling interrupt remapping or x2apic
Check the flags field in the ACPI DMAR structure before enabling
interrupt remapping or x2apic. This allows platform vendors to
disable the interrupt remapping or x2apic features if the on-board
BIOS does not support them.
Signed-off-by: Allen Kay
--- a/xen/arch/x86/apic.c
+++ b/xen/arch/x86/apic.c
@@ -532,7 +532,7 @@ static void resume_x2apic(void)
mask_8259A();
mask_IO_APIC_setup(ioapic_entries);
- iommu_enable_IR();
+ iommu_enable_x2apic_IR();
__enable_x2apic();
restore_IO_APIC_setup(ioapic_entries);
@@ -752,7 +752,7 @@ int lapic_suspend(void)
local_irq_save(flags);
disable_local_APIC();
- iommu_disable_IR();
+ iommu_disable_x2apic_IR();
local_irq_restore(flags);
return 0;
}
@@ -1000,7 +1000,7 @@ void x2apic_setup(void)
mask_8259A();
mask_IO_APIC_setup(ioapic_entries);
- if ( iommu_enable_IR() )
+ if ( iommu_enable_x2apic_IR() )
{
if ( x2apic_enabled )
panic("Interrupt remapping could not be enabled while "
--- a/xen/drivers/passthrough/vtd/dmar.c
+++ b/xen/drivers/passthrough/vtd/dmar.c
@@ -46,6 +46,7 @@ LIST_HEAD(acpi_rmrr_units);
LIST_HEAD(acpi_atsr_units);
LIST_HEAD(acpi_rhsa_units);
+static int __read_mostly dmar_flags;
static u64 igd_drhd_address;
u8 dmar_host_address_width;
@@ -682,6 +683,7 @@ static int __init acpi_parse_dmar(struct
int ret = 0;
dmar = (struct acpi_table_dmar *)table;
+ dmar_flags = dmar->flags;
if ( !iommu_enabled )
{
@@ -771,3 +773,22 @@ int __init acpi_dmar_init(void)
{
return parse_dmar_table(acpi_parse_dmar);
}
+
+int platform_supports_intremap(void)
+{
+ unsigned int flags = 0;
+
+ flags = DMAR_INTR_REMAP;
+ return ((dmar_flags & flags) == DMAR_INTR_REMAP);
+}
+
+int platform_supports_x2apic(void)
+{
+ unsigned int flags = 0;
+
+ if (!cpu_has_x2apic)
+ return 0;
+
+ flags = DMAR_INTR_REMAP | DMAR_X2APIC_OPT_OUT;
+ return ((dmar_flags & flags) == DMAR_INTR_REMAP);
+}
--- a/xen/drivers/passthrough/vtd/extern.h
+++ b/xen/drivers/passthrough/vtd/extern.h
@@ -88,5 +88,7 @@ void vtd_ops_preamble_quirk(struct iommu
void vtd_ops_postamble_quirk(struct iommu* iommu);
void me_wifi_quirk(struct domain *domain, u8 bus, u8 devfn, int map);
void pci_vtd_quirk(struct pci_dev *pdev);
+int platform_supports_intremap(void);
+int platform_supports_x2apic(void);
#endif // _VTD_EXTERN_H_
--- a/xen/drivers/passthrough/vtd/intremap.c
+++ b/xen/drivers/passthrough/vtd/intremap.c
@@ -725,6 +725,13 @@ int enable_intremap(struct iommu *iommu,
ASSERT(ecap_intr_remap(iommu->ecap) && iommu_intremap);
+ if ( !platform_supports_intremap() )
+ {
+ dprintk(XENLOG_ERR VTDPREFIX,
+ "Platform firmware does not support interrupt remapping\n");
+ return -EINVAL;
+ }
+
ir_ctrl = iommu_ir_ctrl(iommu);
sts = dmar_readl(iommu->reg, DMAR_GSTS_REG);
@@ -812,10 +819,10 @@ out:
#ifndef __i386__
/*
- * This function is used to enable Interrutp remapping when
+ * This function is used to enable Interrupt remapping when
* enable x2apic
*/
-int iommu_enable_IR(void)
+int iommu_enable_x2apic_IR(void)
{
struct acpi_drhd_unit *drhd;
struct iommu *iommu;
@@ -823,6 +830,9 @@ int iommu_enable_IR(void)
if ( !iommu_supports_eim() )
return -1;
+ if ( !platform_supports_x2apic() )
+ return -1;
+
for_each_drhd_unit ( drhd )
{
struct qi_ctrl *qi_ctrl = NULL;
@@ -873,7 +883,7 @@ int iommu_enable_IR(void)
* This function is used to disable Interrutp remapping when
* suspend local apic
*/
-void iommu_disable_IR(void)
+void iommu_disable_x2apic_IR(void)
{
struct acpi_drhd_unit *drhd;
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -1937,7 +1937,7 @@ static int init_vtd_hw(void)
if ( enable_intremap(iommu, 0) != 0 )
{
dprintk(XENLOG_WARNING VTDPREFIX,
- "Failed to enable Interrupt Remapping!\n");
+ "Interrupt Remapping not enabled\n");
break;
}
}
--- a/xen/drivers/passthrough/vtd/iommu.h
+++ b/xen/drivers/passthrough/vtd/iommu.h
@@ -22,6 +22,10 @@
#include
+/* DMAR Flags bits */
+#define DMAR_INTR_REMAP 0x1
+#define DMAR_X2APIC_OPT_OUT 0x2
+
/*
* Intel IOMMU register specification per version 1.0 public spec.
*/
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -60,8 +60,8 @@ struct iommu {
int iommu_setup(void);
int iommu_supports_eim(void);
-int iommu_enable_IR(void);
-void iommu_disable_IR(void);
+int iommu_enable_x2apic_IR(void);
+void iommu_disable_x2apic_IR(void);
int iommu_add_device(struct pci_dev *pdev);
int iommu_remove_device(struct pci_dev *pdev);
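The two predicates added in dmar.c encode a simple rule: interrupt remapping requires the DMAR INTR_REMAP flag, and x2APIC additionally requires that the firmware has not set the opt-out bit. A stand-alone sketch of that flags logic (parameters passed in explicitly here, rather than read from the parsed table as in the patch):

```c
#include <assert.h>

/* DMAR table flags bits, as in the patch. */
#define DMAR_INTR_REMAP     0x1
#define DMAR_X2APIC_OPT_OUT 0x2

static int platform_supports_intremap(unsigned dmar_flags)
{
    return (dmar_flags & DMAR_INTR_REMAP) == DMAR_INTR_REMAP;
}

/* x2APIC needs the CPU feature, remapping present, AND the
 * firmware opt-out bit clear. */
static int platform_supports_x2apic(unsigned dmar_flags, int cpu_has_x2apic)
{
    if (!cpu_has_x2apic)
        return 0;
    return (dmar_flags & (DMAR_INTR_REMAP | DMAR_X2APIC_OPT_OUT))
           == DMAR_INTR_REMAP;
}
```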
++++++ 23153-x86-amd-clear-DramModEn.patch ++++++
# HG changeset patch
# User Wei Huang
# Date 1302076891 -3600
# Node ID 8fb61c9ebe499b576687907d164da07802414925
# Parent 97763efc41f9b664cf6f7db653c9c3f51e50b358
x86, amd, MTRR: correct DramModEn bit of SYS_CFG MSR
Some buggy BIOSes might set the SYS_CFG DramModEn bit to 1, which can
cause unexpected behavior on AMD platforms. This patch clears the
DramModEn bit if it is 1.
Signed-off-by: Wei Huang
--- a/xen/arch/x86/cpu/amd.c
+++ b/xen/arch/x86/cpu/amd.c
@@ -325,6 +325,32 @@ static void check_disable_c1e(unsigned i
on_each_cpu(disable_c1e, NULL, 1);
}
+/*
+ * BIOS is expected to clear MtrrFixDramModEn bit. According to AMD BKDG :
+ * "The MtrrFixDramModEn bit should be set to 1 during BIOS initalization of
+ * the fixed MTRRs, then cleared to 0 for operation."
+ */
+static void check_syscfg_dram_mod_en(void)
+{
+ uint64_t syscfg;
+ static bool_t printed = 0;
+
+ if (!((boot_cpu_data.x86_vendor == X86_VENDOR_AMD) &&
+ (boot_cpu_data.x86 >= 0x0f)))
+ return;
+
+ rdmsrl(MSR_K8_SYSCFG, syscfg);
+ if (!(syscfg & K8_MTRRFIXRANGE_DRAM_MODIFY))
+ return;
+
+ if (!test_and_set_bool(printed))
+ printk(KERN_ERR "MTRR: SYSCFG[MtrrFixDramModEn] not "
+ "cleared by BIOS, clearing this bit\n");
+
+ syscfg &= ~K8_MTRRFIXRANGE_DRAM_MODIFY;
+ wrmsrl(MSR_K8_SYSCFG, syscfg);
+}
+
static void __devinit init_amd(struct cpuinfo_x86 *c)
{
u32 l, h;
@@ -587,6 +613,8 @@ static void __devinit init_amd(struct cp
set_cpuidmask(c);
+ check_syscfg_dram_mod_en();
+
start_svm(c);
}
++++++ 23154-x86-amd-iorr-no-rdwr.patch ++++++
# HG changeset patch
# User Wei Huang
# Date 1302076933 -3600
# Node ID 42fa70e0761bbb0596618ca5323664f31a2faa76
# Parent 8fb61c9ebe499b576687907d164da07802414925
x86, amd, MTRR: remove k8_enable_fixed_iorrs()
AMD64 defines two special bits (bits 3 and 4), RdMem and WrMem, in
the fixed MTRR types. Their values are supposed to be 0 after the
BIOS hands control to the OS, according to the AMD BKDG. Unless the
OS specifically turns them on, they are kept 0 all the time. As a
result, k8_enable_fixed_iorrs() is unnecessary and was removed from
the upstream kernel (see
https://patchwork.kernel.org/patch/11425/). This patch does the same
thing.
Signed-off-by: Wei Huang
--- a/xen/arch/x86/cpu/mtrr/generic.c
+++ b/xen/arch/x86/cpu/mtrr/generic.c
@@ -116,20 +116,6 @@ void mtrr_wrmsr(unsigned msr, unsigned a
}
/**
- * Enable and allow read/write of extended fixed-range MTRR bits on K8 CPUs
- * see AMD publication no. 24593, chapter 3.2.1 for more information
- */
-static inline void k8_enable_fixed_iorrs(void)
-{
- unsigned lo, hi;
-
- rdmsr(MSR_K8_SYSCFG, lo, hi);
- mtrr_wrmsr(MSR_K8_SYSCFG, lo
- | K8_MTRRFIXRANGE_DRAM_ENABLE
- | K8_MTRRFIXRANGE_DRAM_MODIFY, hi);
-}
-
-/**
* Checks and updates an fixed-range MTRR if it differs from the value it
* should have. If K8 extenstions are wanted, update the K8 SYSCFG MSR also.
* see AMD publication no. 24593, chapter 7.8.1, page 233 for more information
@@ -144,10 +130,6 @@ static void set_fixed_range(int msr, int
rdmsr(msr, lo, hi);
if (lo != msrwords[0] || hi != msrwords[1]) {
- if (boot_cpu_data.x86_vendor == X86_VENDOR_AMD &&
- boot_cpu_data.x86 == 15 &&
- ((msrwords[0] | msrwords[1]) & K8_MTRR_RDMEM_WRMEM_MASK))
- k8_enable_fixed_iorrs();
mtrr_wrmsr(msr, msrwords[0], msrwords[1]);
*changed = TRUE;
}
++++++ 23199-amd-iommu-unmapped-intr-fault.patch ++++++
# HG changeset patch
# User Wei Wang
# Date 1302610857 -3600
# Node ID dbd98ab2f87facba8117bb881fa2ea5dfdb92960
# Parent 697ac895c11c6d5d82524de56796cee98fded2a5
amd iommu: Unmapped interrupts should generate IO page faults.
This helps us debug interrupt issues.
Signed-off-by: Wei Wang
--- a/xen/drivers/passthrough/amd/iommu_map.c
+++ b/xen/drivers/passthrough/amd/iommu_map.c
@@ -309,8 +309,9 @@ void amd_iommu_set_intremap_table(u32 *d
set_field_in_reg_u32(0xB, entry,
IOMMU_DEV_TABLE_INT_TABLE_LENGTH_MASK,
IOMMU_DEV_TABLE_INT_TABLE_LENGTH_SHIFT, &entry);
- /* ignore unmapped interrupts */
- set_field_in_reg_u32(IOMMU_CONTROL_ENABLED, entry,
+
+ /* unmapped interrupts result in IO page faults */
+ set_field_in_reg_u32(IOMMU_CONTROL_DISABLED, entry,
IOMMU_DEV_TABLE_INT_TABLE_IGN_UNMAPPED_MASK,
IOMMU_DEV_TABLE_INT_TABLE_IGN_UNMAPPED_SHIFT, &entry);
set_field_in_reg_u32(int_valid ? IOMMU_CONTROL_ENABLED :
++++++ 23200-amd-iommu-intremap-sync.patch ++++++
References: bnc#680824
# HG changeset patch
# User Wei Wang
# Date 1302611179 -3600
# Node ID 995a0c01a076e9c4fb124c090bc146a10d76bc7b
# Parent dbd98ab2f87facba8117bb881fa2ea5dfdb92960
AMD IOMMU: Fix an interrupt remapping issue
Some devices could generate bogus interrupts if an IO-APIC RTE and an
IOMMU interrupt remapping entry are inconsistent between the two
adjacent 32-bit updates of a 64-bit IO-APIC RTE. For example, if the
second write updates the destination bits in the RTE of a SATA device
and unmasks it, the SATA device may in some cases assert its IO-APIC
pin and generate an interrupt immediately using the new destination,
while the IOMMU still translates it to the old destination, confusing
dom0. To fix this, sync the interrupt remapping entry with the IO-APIC
RTE on every 32-bit write, and forward the IO-APIC RTE update only
after the remapping entry has been updated.
Signed-off-by: Wei Wang
Acked-by: Jan Beulich
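The ordering the patch enforces — mask the RTE if it was unmasked, update the remapping entry, forward the RTE write, then unmask — can be sketched as a plain sequence. All names below are illustrative stand-ins recorded into a log, not the Xen API; the real code writes IO-APIC registers and the IOMMU interrupt remapping table:

```c
#include <string.h>

/* Illustrative event log standing in for hardware side effects. */
static const char *log_buf[8];
static int log_n;

static void step(const char *what) { log_buf[log_n++] = what; }

/* Sketch of the safe update order in amd_iommu_ioapic_update_ire():
 * for an RTE that was unmasked, there is no window in which the
 * IO-APIC and the IOMMU remapping entry disagree while the pin can
 * still deliver interrupts. */
static void update_rte_safely(int was_masked)
{
    if (!was_masked)
        step("mask_rte");            /* silence the pin first */
    step("update_intremap_entry");   /* IOMMU sees the new destination */
    step("write_ioapic_rte");        /* then forward the RTE write */
    if (!was_masked)
        step("unmask_rte");          /* re-enable delivery last */
}
```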
--- a/xen/drivers/passthrough/amd/iommu_intr.c
+++ b/xen/drivers/passthrough/amd/iommu_intr.c
@@ -116,8 +116,7 @@ void invalidate_interrupt_table(struct a
static void update_intremap_entry_from_ioapic(
int bdf,
struct amd_iommu *iommu,
- struct IO_APIC_route_entry *ioapic_rte,
- unsigned int rte_upper, unsigned int value)
+ struct IO_APIC_route_entry *ioapic_rte)
{
unsigned long flags;
u32* entry;
@@ -129,28 +128,26 @@ static void update_intremap_entry_from_i
req_id = get_intremap_requestor_id(bdf);
lock = get_intremap_lock(req_id);
- /* only remap interrupt vector when lower 32 bits in ioapic ire changed */
- if ( likely(!rte_upper) )
- {
- delivery_mode = rte->delivery_mode;
- vector = rte->vector;
- dest_mode = rte->dest_mode;
- dest = rte->dest.logical.logical_dest;
- spin_lock_irqsave(lock, flags);
- offset = get_intremap_offset(vector, delivery_mode);
- entry = (u32*)get_intremap_entry(req_id, offset);
+ delivery_mode = rte->delivery_mode;
+ vector = rte->vector;
+ dest_mode = rte->dest_mode;
+ dest = rte->dest.logical.logical_dest;
- update_intremap_entry(entry, vector, delivery_mode, dest_mode, dest);
- spin_unlock_irqrestore(lock, flags);
+ spin_lock_irqsave(lock, flags);
- if ( iommu->enabled )
- {
- spin_lock_irqsave(&iommu->lock, flags);
- invalidate_interrupt_table(iommu, req_id);
- flush_command_buffer(iommu);
- spin_unlock_irqrestore(&iommu->lock, flags);
- }
+ offset = get_intremap_offset(vector, delivery_mode);
+ entry = (u32*)get_intremap_entry(req_id, offset);
+ update_intremap_entry(entry, vector, delivery_mode, dest_mode, dest);
+
+ spin_unlock_irqrestore(lock, flags);
+
+ if ( iommu->enabled )
+ {
+ spin_lock_irqsave(&iommu->lock, flags);
+ invalidate_interrupt_table(iommu, req_id);
+ flush_command_buffer(iommu);
+ spin_unlock_irqrestore(&iommu->lock, flags);
}
}
@@ -198,7 +195,8 @@ int __init amd_iommu_setup_ioapic_remapp
spin_lock_irqsave(lock, flags);
offset = get_intremap_offset(vector, delivery_mode);
entry = (u32*)get_intremap_entry(req_id, offset);
- update_intremap_entry(entry, vector, delivery_mode, dest_mode, dest);
+ update_intremap_entry(entry, vector,
+ delivery_mode, dest_mode, dest);
spin_unlock_irqrestore(lock, flags);
if ( iommu->enabled )
@@ -216,16 +214,17 @@ int __init amd_iommu_setup_ioapic_remapp
void amd_iommu_ioapic_update_ire(
unsigned int apic, unsigned int reg, unsigned int value)
{
- struct IO_APIC_route_entry ioapic_rte = { 0 };
- unsigned int rte_upper = (reg & 1) ? 1 : 0;
+ struct IO_APIC_route_entry old_rte = { 0 };
+ struct IO_APIC_route_entry new_rte = { 0 };
+ unsigned int rte_lo = (reg & 1) ? reg - 1 : reg;
int saved_mask, bdf;
struct amd_iommu *iommu;
- *IO_APIC_BASE(apic) = reg;
- *(IO_APIC_BASE(apic)+4) = value;
-
if ( !iommu_intremap )
+ {
+ __io_apic_write(apic, reg, value);
return;
+ }
/* get device id of ioapic devices */
bdf = ioapic_bdf[IO_APIC_ID(apic)];
@@ -234,30 +233,49 @@ void amd_iommu_ioapic_update_ire(
{
AMD_IOMMU_DEBUG(
"Fail to find iommu for ioapic device id = 0x%x\n", bdf);
+ __io_apic_write(apic, reg, value);
return;
}
- if ( rte_upper )
- return;
- /* read both lower and upper 32-bits of rte entry */
- *IO_APIC_BASE(apic) = reg;
- *(((u32 *)&ioapic_rte) + 0) = *(IO_APIC_BASE(apic)+4);
- *IO_APIC_BASE(apic) = reg + 1;
- *(((u32 *)&ioapic_rte) + 1) = *(IO_APIC_BASE(apic)+4);
+ /* save io-apic rte lower 32 bits */
+ *((u32 *)&old_rte) = __io_apic_read(apic, rte_lo);
+ saved_mask = old_rte.mask;
+
+ if ( reg == rte_lo )
+ {
+ *((u32 *)&new_rte) = value;
+ /* read upper 32 bits from io-apic rte */
+ *(((u32 *)&new_rte) + 1) = __io_apic_read(apic, reg + 1);
+ }
+ else
+ {
+ *((u32 *)&new_rte) = *((u32 *)&old_rte);
+ *(((u32 *)&new_rte) + 1) = value;
+ }
/* mask the interrupt while we change the intremap table */
- saved_mask = ioapic_rte.mask;
- ioapic_rte.mask = 1;
- *IO_APIC_BASE(apic) = reg;
- *(IO_APIC_BASE(apic)+4) = *(((int *)&ioapic_rte)+0);
- ioapic_rte.mask = saved_mask;
+ if ( !saved_mask )
+ {
+ old_rte.mask = 1;
+ __io_apic_write(apic, rte_lo, *((u32 *)&old_rte));
+ }
- update_intremap_entry_from_ioapic(
- bdf, iommu, &ioapic_rte, rte_upper, value);
+ /* Update interrupt remapping entry */
+ update_intremap_entry_from_ioapic(bdf, iommu, &new_rte);
+
+ /* Forward write access to IO-APIC RTE */
+ __io_apic_write(apic, reg, value);
+
+ /* For lower bits access, return directly to avoid double writes */
+ if ( reg == rte_lo )
+ return;
/* unmask the interrupt after we have updated the intremap table */
- *IO_APIC_BASE(apic) = reg;
- *(IO_APIC_BASE(apic)+4) = *(((u32 *)&ioapic_rte)+0);
+ if ( !saved_mask )
+ {
+ old_rte.mask = saved_mask;
+ __io_apic_write(apic, rte_lo, *((u32 *)&old_rte));
+ }
}
static void update_intremap_entry_from_msi_msg(
++++++ 23228-x86-conditional-write_tsc.patch ++++++
References: bnc#623680
# HG changeset patch
# User Keir Fraser
# Date 1302853928 -3600
# Node ID 1329d99b4f161b7617a667f601077cc92559f248
# Parent b5165fb66b56d9438d77b475eaa9db67318d1ea1
x86: don't write_tsc() non-zero values on CPUs updating only the lower 32 bits
This means suppressing the uses in time_calibration_tsc_rendezvous(),
cstate_restore_tsc(), and synchronize_tsc_slave(), and fixes a boot
hang of Linux Dom0 when loading processor.ko on such systems that
support C-states above C1.
Signed-off-by: Jan Beulich
Signed-off-by: Keir Fraser
--- a/xen/arch/x86/acpi/cpu_idle.c
+++ b/xen/arch/x86/acpi/cpu_idle.c
@@ -943,3 +943,7 @@ void cpuidle_disable_deep_cstate(void)
hpet_disable_legacy_broadcast();
}
+bool_t cpuidle_using_deep_cstate(void)
+{
+ return xen_cpuidle && max_cstate > (local_apic_timer_c2_ok ? 2 : 1);
+}
--- a/xen/arch/x86/hpet.c
+++ b/xen/arch/x86/hpet.c
@@ -632,6 +632,9 @@ void hpet_disable_legacy_broadcast(void)
u32 cfg;
unsigned long flags;
+ if ( !legacy_hpet_event.shift )
+ return;
+
spin_lock_irqsave(&legacy_hpet_event.lock, flags);
legacy_hpet_event.flags |= HPET_EVT_DISABLE;
--- a/xen/arch/x86/time.c
+++ b/xen/arch/x86/time.c
@@ -21,6 +21,7 @@
#include
#include
#include
+#include <xen/cpuidle.h>
#include
#include
#include
@@ -175,7 +176,6 @@ static inline struct time_scale scale_re
* cpu_mask that denotes the CPUs that needs timer interrupt coming in as
* IPIs in place of local APIC timers
*/
-extern int xen_cpuidle;
static cpumask_t pit_broadcast_mask;
static void smp_send_timer_broadcast_ipi(void)
@@ -718,6 +718,8 @@ void cstate_restore_tsc(void)
if ( boot_cpu_has(X86_FEATURE_NONSTOP_TSC) )
return;
+ ASSERT(boot_cpu_has(X86_FEATURE_TSC_RELIABLE));
+
stime_delta = read_platform_stime() - t->stime_master_stamp;
if ( stime_delta < 0 )
stime_delta = 0;
@@ -1416,6 +1418,63 @@ void init_percpu_time(void)
}
}
+/*
+ * On certain older Intel CPUs writing the TSC MSR clears the upper 32 bits.
+ * Obviously we must not use write_tsc() on such CPUs.
+ *
+ * Additionally, AMD specifies that being able to write the TSC MSR is not an
+ * architectural feature (and, contrary to what their manual says, it also
+ * cannot be determined from CPUID bits).
+ */
+static void __init tsc_check_writability(void)
+{
+ const char *what = NULL;
+ uint64_t tsc;
+
+ /*
+ * If all CPUs are reported as synchronised and in sync, we never write
+ * the TSCs (except unavoidably, when a CPU is physically hot-plugged).
+ * Hence testing for writability is pointless and even harmful.
+ */
+ if ( boot_cpu_has(X86_FEATURE_TSC_RELIABLE) )
+ return;
+
+ rdtscll(tsc);
+ if ( wrmsr_safe(MSR_IA32_TSC, 0, 0) == 0 )
+ {
+ uint64_t tmp, tmp2;
+ rdtscll(tmp2);
+ write_tsc(tsc | (1ULL << 32));
+ rdtscll(tmp);
+ if ( ABS((s64)tmp - (s64)tmp2) < (1LL << 31) )
+ what = "only partially";
+ }
+ else
+ {
+ what = "not";
+ }
+
+ /* Nothing to do if the TSC is fully writable. */
+ if ( !what )
+ {
+ /*
+ * Paranoia - write back original TSC value. However, APs get synced
+ * with BSP as they are brought up, so this doesn't much matter.
+ */
+ write_tsc(tsc);
+ return;
+ }
+
+ printk(XENLOG_WARNING "TSC %s writable\n", what);
+
+ /* time_calibration_tsc_rendezvous() must not be used */
+ setup_clear_cpu_cap(X86_FEATURE_CONSTANT_TSC);
+
+ /* cstate_restore_tsc() must not be used (or do nothing) */
+ if ( !boot_cpu_has(X86_FEATURE_NONSTOP_TSC) )
+ cpuidle_disable_deep_cstate();
+}
+
/* Late init function (after all CPUs are booted). */
int __init init_xen_time(void)
{
@@ -1432,6 +1491,8 @@ int __init init_xen_time(void)
setup_clear_cpu_cap(X86_FEATURE_TSC_RELIABLE);
}
+ tsc_check_writability();
+
/* If we have constant-rate TSCs then scale factor can be shared. */
if ( boot_cpu_has(X86_FEATURE_CONSTANT_TSC) )
{
@@ -1486,7 +1547,7 @@ static int disable_pit_irq(void)
* XXX dom0 may rely on RTC interrupt delivery, so only enable
* hpet_broadcast if FSB mode available or if force_hpet_broadcast.
*/
- if ( xen_cpuidle && !boot_cpu_has(X86_FEATURE_ARAT) )
+ if ( cpuidle_using_deep_cstate() && !boot_cpu_has(X86_FEATURE_ARAT) )
{
hpet_broadcast_init();
if ( !hpet_broadcast_is_available() )
--- a/xen/include/xen/cpuidle.h
+++ b/xen/include/xen/cpuidle.h
@@ -83,7 +83,10 @@ struct cpuidle_governor
void (*reflect) (struct acpi_processor_power *dev);
};
+extern int xen_cpuidle;
extern struct cpuidle_governor *cpuidle_current_governor;
+
+bool_t cpuidle_using_deep_cstate(void);
void cpuidle_disable_deep_cstate(void);
#define CPUIDLE_DRIVER_STATE_START 1
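The writability probe in tsc_check_writability() distinguishes a fully writable TSC from one that only latches the low 32 bits: it writes back the old value with bit 32 set and checks how far the read-back value drifts from the pre-write reading. A self-contained sketch of just that classification, with the 2^31 threshold taken from the patch (the function name is illustrative):

```c
#include <stdint.h>
#include <stdlib.h>

/* After writing (tsc | 1ULL << 32) and reading the TSC back: if the
 * CPU only updated the lower 32 bits, the read-back value is still
 * close to the pre-write reading, i.e. the difference stays below
 * 2^31 -- the same test the patch applies via ABS(). */
static int tsc_only_partially_writable(uint64_t readback,
                                       uint64_t before_write)
{
    return llabs((long long)readback - (long long)before_write)
           < (1LL << 31);
}
```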
++++++ 32on64-extra-mem.patch ++++++
--- /var/tmp/diff_new_pack.ZKlwWp/_old 2011-05-10 11:08:12.000000000 +0200
+++ /var/tmp/diff_new_pack.ZKlwWp/_new 2011-05-10 11:08:12.000000000 +0200
@@ -2,7 +2,7 @@
===================================================================
--- xen-4.0.1-testing.orig/tools/python/xen/xend/XendDomainInfo.py
+++ xen-4.0.1-testing/tools/python/xen/xend/XendDomainInfo.py
-@@ -2919,7 +2919,7 @@ class XendDomainInfo:
+@@ -2917,7 +2917,7 @@ class XendDomainInfo:
self.guest_bitsize = self.image.getBitSize()
# Make sure there's enough RAM available for the domain
++++++ change_home_server.patch ++++++
--- /var/tmp/diff_new_pack.ZKlwWp/_old 2011-05-10 11:08:12.000000000 +0200
+++ /var/tmp/diff_new_pack.ZKlwWp/_new 2011-05-10 11:08:12.000000000 +0200
@@ -2,7 +2,7 @@
===================================================================
--- xen-4.0.1-testing.orig/tools/python/xen/xend/XendDomainInfo.py
+++ xen-4.0.1-testing/tools/python/xen/xend/XendDomainInfo.py
-@@ -3135,6 +3135,11 @@ class XendDomainInfo:
+@@ -3134,6 +3134,11 @@ class XendDomainInfo:
self._cleanup_phantom_devs(paths)
self._cleanupVm()
++++++ cpupools-core-fixup.patch ++++++
--- /var/tmp/diff_new_pack.ZKlwWp/_old 2011-05-10 11:08:12.000000000 +0200
+++ /var/tmp/diff_new_pack.ZKlwWp/_new 2011-05-10 11:08:12.000000000 +0200
@@ -11,7 +11,7 @@
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
-@@ -1585,6 +1585,7 @@ int continue_hypercall_on_cpu(int cpu, v
+@@ -1590,6 +1590,7 @@ int continue_hypercall_on_cpu(int cpu, v
v->arch.schedule_tail = continue_hypercall_on_cpu_helper;
v->arch.continue_info = info;
@@ -19,7 +19,7 @@
}
else
{
-@@ -1595,8 +1596,7 @@ int continue_hypercall_on_cpu(int cpu, v
+@@ -1600,8 +1601,7 @@ int continue_hypercall_on_cpu(int cpu, v
info->func = func;
info->data = data;
@@ -39,7 +39,7 @@
static struct csched_private *csched_priv0 = NULL;
static void csched_tick(void *_cpu);
-@@ -1517,11 +1516,13 @@ static void csched_tick_resume(struct sc
+@@ -1518,11 +1517,13 @@ static void csched_tick_resume(struct sc
}
}
++++++ cpupools-core.patch ++++++
++++ 678 lines (skipped)
++++ between old-versions/11.3/UPDATES/all/xen/cpupools-core.patch
++++ and 11.3/xen/cpupools-core.patch
++++++ cve-2011-1583-4.0.patch ++++++
diff -r dbbc61c48da4 tools/libxc/xc_dom_bzimageloader.c
--- a/tools/libxc/xc_dom_bzimageloader.c Wed Apr 13 09:48:17 2011 +0100
+++ b/tools/libxc/xc_dom_bzimageloader.c Thu Apr 21 11:57:06 2011 +0100
@@ -68,8 +68,29 @@ static int xc_try_bzip2_decode(
for ( ; ; )
{
ret = BZ2_bzDecompress(&stream);
- if ( (stream.avail_out == 0) || (ret != BZ_OK) )
+ if ( ret == BZ_STREAM_END )
{
+ xc_dom_printf("BZIP2: Saw data stream end\n");
+ retval = 0;
+ break;
+ }
+ if ( ret != BZ_OK )
+ {
+ xc_dom_printf("BZIP2: error %d", ret);
+ free(out_buf);
+ goto bzip2_cleanup;
+ }
+
+ if ( stream.avail_out == 0 )
+ {
+ /* Protect against output buffer overflow */
+ if ( outsize > INT_MAX / 2 )
+ {
+ xc_dom_printf("BZIP2: output buffer overflow\n");
+ free(out_buf);
+ goto bzip2_cleanup;
+ }
+
tmp_buf = realloc(out_buf, outsize * 2);
if ( tmp_buf == NULL )
{
@@ -83,16 +104,18 @@ static int xc_try_bzip2_decode(
stream.avail_out = (outsize * 2) - outsize;
outsize *= 2;
}
-
- if ( ret != BZ_OK )
+ else if ( stream.avail_in == 0 )
{
- if ( ret == BZ_STREAM_END )
- {
- xc_dom_printf("BZIP2: Saw data stream end\n");
- retval = 0;
- break;
- }
- xc_dom_printf("BZIP2: error\n");
+ /*
+ * If there is output buffer available then this indicates
+ * that BZ2_bzDecompress would like more input data to be
+ * provided. However our complete input buffer is in
+ * memory and provided upfront so if avail_in is zero this
+ * actually indicates a truncated input.
+ */
+ xc_dom_printf("BZIP2: not enough input\n");
+ free(out_buf);
+ goto bzip2_cleanup;
}
}
@@ -187,31 +210,14 @@ static int xc_try_lzma_decode(
for ( ; ; )
{
ret = lzma_code(&stream, action);
- if ( (stream.avail_out == 0) || (ret != LZMA_OK) )
+ if ( ret == LZMA_STREAM_END )
{
- tmp_buf = realloc(out_buf, outsize * 2);
- if ( tmp_buf == NULL )
- {
- xc_dom_printf("LZMA: Failed to realloc memory\n");
- free(out_buf);
- goto lzma_cleanup;
- }
- out_buf = tmp_buf;
-
- stream.next_out = out_buf + outsize;
- stream.avail_out = (outsize * 2) - outsize;
- outsize *= 2;
+ xc_dom_printf("LZMA: Saw data stream end\n");
+ retval = 0;
+ break;
}
-
if ( ret != LZMA_OK )
{
- if ( ret == LZMA_STREAM_END )
- {
- xc_dom_printf("LZMA: Saw data stream end\n");
- retval = 0;
- break;
- }
-
switch ( ret )
{
case LZMA_MEM_ERROR:
@@ -245,7 +251,32 @@ static int xc_try_lzma_decode(
}
xc_dom_printf("%s: LZMA decompression error %s\n",
__FUNCTION__, msg);
- break;
+ free(out_buf);
+ goto lzma_cleanup;
+ }
+
+ if ( stream.avail_out == 0 )
+ {
+ /* Protect against output buffer overflow */
+ if ( outsize > INT_MAX / 2 )
+ {
+ xc_dom_printf("LZMA: output buffer overflow\n");
+ free(out_buf);
+ goto lzma_cleanup;
+ }
+
+ tmp_buf = realloc(out_buf, outsize * 2);
+ if ( tmp_buf == NULL )
+ {
+ xc_dom_printf("LZMA: Failed to realloc memory");
+ free(out_buf);
+ goto lzma_cleanup;
+ }
+ out_buf = tmp_buf;
+
+ stream.next_out = out_buf + outsize;
+ stream.avail_out = (outsize * 2) - outsize;
+ outsize *= 2;
}
}
@@ -314,18 +345,18 @@ struct setup_header {
extern struct xc_dom_loader elf_loader;
-static unsigned int payload_offset(struct setup_header *hdr)
+static int check_magic(struct xc_dom_image *dom, const void *magic, size_t len)
{
- unsigned int off;
+ if (len > dom->kernel_size)
+ return 0;
- off = (hdr->setup_sects + 1) * 512;
- off += hdr->payload_offset;
- return off;
+ return (memcmp(dom->kernel_blob, magic, len) == 0);
}
static int xc_dom_probe_bzimage_kernel(struct xc_dom_image *dom)
{
struct setup_header *hdr;
+ uint64_t payload_offset, payload_length;
int ret;
if ( dom->kernel_blob == NULL )
@@ -358,10 +389,30 @@ static int xc_dom_probe_bzimage_kernel(s
return -EINVAL;
}
- dom->kernel_blob = dom->kernel_blob + payload_offset(hdr);
- dom->kernel_size = hdr->payload_length;
- if ( memcmp(dom->kernel_blob, "\037\213", 2) == 0 )
+ /* upcast to 64 bits to avoid overflow */
+ /* setup_sects is u8 and so cannot overflow */
+ payload_offset = (hdr->setup_sects + 1) * 512;
+ payload_offset += hdr->payload_offset;
+ payload_length = hdr->payload_length;
+
+ if ( payload_offset >= dom->kernel_size )
+ {
+ xc_dom_panic(XC_INVALID_KERNEL, "%s: payload offset overflow",
+ __FUNCTION__);
+ return -EINVAL;
+ }
+ if ( (payload_offset + payload_length) > dom->kernel_size )
+ {
+ xc_dom_panic(XC_INVALID_KERNEL, "%s: payload length overflow",
+ __FUNCTION__);
+ return -EINVAL;
+ }
+
+ dom->kernel_blob = dom->kernel_blob + payload_offset;
+ dom->kernel_size = payload_length;
+
+ if ( check_magic(dom, "\037\213", 2) )
{
ret = xc_dom_try_gunzip(dom, &dom->kernel_blob, &dom->kernel_size);
if ( ret == -1 )
@@ -372,7 +423,7 @@ static int xc_dom_probe_bzimage_kernel(s
return -EINVAL;
}
}
- else if ( memcmp(dom->kernel_blob, "\102\132\150", 3) == 0 )
+ else if ( check_magic(dom, "\102\132\150", 3) )
{
ret = xc_try_bzip2_decode(dom, &dom->kernel_blob, &dom->kernel_size);
if ( ret < 0 )
@@ -383,7 +434,7 @@ static int xc_dom_probe_bzimage_kernel(s
return -EINVAL;
}
}
- else if ( memcmp(dom->kernel_blob, "\135\000", 2) == 0 )
+ else if ( check_magic(dom, "\135\000", 2) )
{
ret = xc_try_lzma_decode(dom, &dom->kernel_blob, &dom->kernel_size);
if ( ret < 0 )
++++++ del_usb_xend_entry.patch ++++++
--- /var/tmp/diff_new_pack.ZKlwWp/_old 2011-05-10 11:08:12.000000000 +0200
+++ /var/tmp/diff_new_pack.ZKlwWp/_new 2011-05-10 11:08:12.000000000 +0200
@@ -2,13 +2,13 @@
===================================================================
--- xen-4.0.1-testing.orig/tools/python/xen/xend/XendDomainInfo.py
+++ xen-4.0.1-testing/tools/python/xen/xend/XendDomainInfo.py
-@@ -1312,8 +1312,15 @@ class XendDomainInfo:
+@@ -1310,8 +1310,15 @@ class XendDomainInfo:
frontpath = self.getDeviceController(deviceClass).frontendPath(dev)
backpath = xstransact.Read(frontpath, "backend")
thread.start_new_thread(self.getDeviceController(deviceClass).finishDeviceCleanup, (backpath, path))
-
- rc = self.getDeviceController(deviceClass).destroyDevice(devid, force)
-+ if deviceClass =='vusb':
++ if deviceClass =='vusb':
+ dev = self.getDeviceController(deviceClass).convertToDeviceNumber(devid)
+ state = self.getDeviceController(deviceClass).readBackend(dev, 'state')
+ if state == '1':
++++++ hv_tools.patch ++++++
--- /var/tmp/diff_new_pack.ZKlwWp/_old 2011-05-10 11:08:12.000000000 +0200
+++ /var/tmp/diff_new_pack.ZKlwWp/_new 2011-05-10 11:08:12.000000000 +0200
@@ -48,7 +48,7 @@
===================================================================
--- xen-4.0.1-testing.orig/tools/python/xen/xend/image.py
+++ xen-4.0.1-testing/tools/python/xen/xend/image.py
-@@ -839,6 +839,7 @@ class HVMImageHandler(ImageHandler):
+@@ -842,6 +842,7 @@ class HVMImageHandler(ImageHandler):
self.apic = int(vmConfig['platform'].get('apic', 0))
self.acpi = int(vmConfig['platform'].get('acpi', 0))
@@ -56,7 +56,7 @@
self.guest_os_type = vmConfig['platform'].get('guest_os_type')
self.memory_sharing = int(vmConfig['memory_sharing'])
try:
-@@ -966,6 +967,7 @@ class HVMImageHandler(ImageHandler):
+@@ -973,6 +974,7 @@ class HVMImageHandler(ImageHandler):
log.debug("target = %d", mem_mb)
log.debug("vcpus = %d", self.vm.getVCpuCount())
log.debug("vcpu_avail = %li", self.vm.getVCpuAvail())
@@ -64,7 +64,7 @@
log.debug("acpi = %d", self.acpi)
log.debug("apic = %d", self.apic)
-@@ -975,6 +977,7 @@ class HVMImageHandler(ImageHandler):
+@@ -982,6 +984,7 @@ class HVMImageHandler(ImageHandler):
target = mem_mb,
vcpus = self.vm.getVCpuCount(),
vcpu_avail = self.vm.getVCpuAvail(),
++++++ ioemu-disable-emulated-ide-if-pv.patch ++++++
--- /var/tmp/diff_new_pack.ZKlwWp/_old 2011-05-10 11:08:12.000000000 +0200
+++ /var/tmp/diff_new_pack.ZKlwWp/_new 2011-05-10 11:08:12.000000000 +0200
@@ -1,7 +1,5 @@
-Index: xen-4.0.1-testing/tools/ioemu-qemu-xen/qemu-xen.h
-===================================================================
---- xen-4.0.1-testing.orig/tools/ioemu-qemu-xen/qemu-xen.h
-+++ xen-4.0.1-testing/tools/ioemu-qemu-xen/qemu-xen.h
+--- a/tools/ioemu-qemu-xen/qemu-xen.h
++++ b/tools/ioemu-qemu-xen/qemu-xen.h
@@ -1,6 +1,8 @@
#ifndef QEMU_XEN_H
#define QEMU_XEN_H
@@ -20,10 +18,8 @@
int xenstore_parse_disable_pf_config(void);
int xenstore_fd(void);
void xenstore_process_event(void *opaque);
-Index: xen-4.0.1-testing/tools/ioemu-qemu-xen/vl.c
-===================================================================
---- xen-4.0.1-testing.orig/tools/ioemu-qemu-xen/vl.c
-+++ xen-4.0.1-testing/tools/ioemu-qemu-xen/vl.c
+--- a/tools/ioemu-qemu-xen/vl.c
++++ b/tools/ioemu-qemu-xen/vl.c
@@ -5827,10 +5827,10 @@ int main(int argc, char **argv, char **e
if ((msg = xenbus_read(XBT_NIL, "domid", &domid_s)))
fprintf(stderr,"Can not read our own domid: %s\n", msg);
@@ -37,10 +33,8 @@
#endif /* CONFIG_STUBDOM */
/* we always create the cdrom drive, even if no disk is there */
-Index: xen-4.0.1-testing/tools/ioemu-qemu-xen/xenstore.c
-===================================================================
---- xen-4.0.1-testing.orig/tools/ioemu-qemu-xen/xenstore.c
-+++ xen-4.0.1-testing/tools/ioemu-qemu-xen/xenstore.c
+--- a/tools/ioemu-qemu-xen/xenstore.c
++++ b/tools/ioemu-qemu-xen/xenstore.c
@@ -397,7 +397,7 @@ static const char *xenstore_get_guest_uu
#define PT_PCI_POWER_MANAGEMENT_DEFAULT 0
int direct_pci_msitranslate;
++++++ multi-xvdp.patch ++++++
--- /var/tmp/diff_new_pack.ZKlwWp/_old 2011-05-10 11:08:12.000000000 +0200
+++ /var/tmp/diff_new_pack.ZKlwWp/_new 2011-05-10 11:08:12.000000000 +0200
@@ -34,7 +34,7 @@
xc = xen.lowlevel.xc.xc()
xoptions = XendOptions.instance()
-@@ -3304,20 +3304,27 @@ class XendDomainInfo:
+@@ -3303,20 +3303,27 @@ class XendDomainInfo:
# This is a file, not a device. pygrub can cope with a
# file if it's raw, but if it's QCOW or other such formats
# used through blktap, then we need to mount it first.
@@ -76,7 +76,7 @@
try:
blcfg = bootloader(blexec, fn, self, False,
-@@ -3325,11 +3332,11 @@ class XendDomainInfo:
+@@ -3324,11 +3331,11 @@ class XendDomainInfo:
finally:
if mounted:
log.info("Unmounting %s from %s." %
++++++ snapshot-without-pv-fix.patch ++++++
--- /var/tmp/diff_new_pack.ZKlwWp/_old 2011-05-10 11:08:12.000000000 +0200
+++ /var/tmp/diff_new_pack.ZKlwWp/_new 2011-05-10 11:08:12.000000000 +0200
@@ -8,16 +8,6 @@
drives_table[], otherwise the disk in qemu will just stay opened,not
showing up in drives_table[].
-
-Signed-off-by: Li Dongyang
----
- tools/blktap/drivers/blktapctrl.c | 81 +++++++++++++++++++++++++++++++++-
- tools/blktap/lib/blkif.c | 23 ++++++++++
- tools/blktap/lib/blktaplib.h | 5 ++
- tools/blktap/lib/xenbus.c | 69 +++++++++++++++++++++++++++++
- tools/ioemu-qemu-xen/hw/xen_blktap.c | 49 +++++++++++++++-----
- 5 files changed, 213 insertions(+), 14 deletions(-)
-
Index: xen-4.0.1-testing/tools/blktap/drivers/blktapctrl.c
===================================================================
--- xen-4.0.1-testing.orig/tools/blktap/drivers/blktapctrl.c
++++++ snapshot-xend.patch ++++++
--- /var/tmp/diff_new_pack.ZKlwWp/_old 2011-05-10 11:08:12.000000000 +0200
+++ /var/tmp/diff_new_pack.ZKlwWp/_new 2011-05-10 11:08:12.000000000 +0200
@@ -690,15 +690,18 @@
===================================================================
--- xen-4.0.1-testing.orig/tools/python/xen/xend/XendDomainInfo.py
+++ xen-4.0.1-testing/tools/python/xen/xend/XendDomainInfo.py
-@@ -507,7 +507,6 @@ class XendDomainInfo:
+@@ -505,10 +505,7 @@ class XendDomainInfo:
+ log.warn("Cannot restore CPU affinity")
+
self._setSchedParams()
- self._storeVmDetails()
+- self._storeVmDetails()
self._createChannels()
- self._createDevices()
- self._storeDomDetails()
+- self._storeDomDetails()
self._endRestore()
except:
-@@ -2383,7 +2382,7 @@ class XendDomainInfo:
+ log.exception('VM resume failed')
+@@ -2383,7 +2380,7 @@ class XendDomainInfo:
return self.getDeviceController(deviceClass).reconfigureDevice(
devid, devconfig)
@@ -707,7 +710,7 @@
"""Create the devices for a vm.
@raise: VmError for invalid devices
-@@ -2432,7 +2431,7 @@ class XendDomainInfo:
+@@ -2432,7 +2429,7 @@ class XendDomainInfo:
if self.image:
@@ -716,12 +719,13 @@
#if have pass-through devs, need the virtual pci slots info from qemu
self.pci_device_configure_boot()
-@@ -3048,7 +3047,7 @@ class XendDomainInfo:
+@@ -3048,7 +3045,8 @@ class XendDomainInfo:
self._introduceDomain()
self.image = image.create(self, self.info)
if self.image:
- self.image.createDeviceModel(True)
+ self._createDevices(True)
++ self._storeVmDetails()
self._storeDomDetails()
self._registerWatches()
self.refreshShutdown()
++++++ tools-xc_kexec.diff ++++++
--- /var/tmp/diff_new_pack.ZKlwWp/_old 2011-05-10 11:08:12.000000000 +0200
+++ /var/tmp/diff_new_pack.ZKlwWp/_new 2011-05-10 11:08:12.000000000 +0200
@@ -27,15 +27,18 @@
===================================================================
--- xen-4.0.1-testing.orig/tools/xcutils/Makefile
+++ xen-4.0.1-testing/tools/xcutils/Makefile
-@@ -14,7 +14,7 @@ include $(XEN_ROOT)/tools/Rules.mk
+@@ -14,9 +14,9 @@ include $(XEN_ROOT)/tools/Rules.mk
CFLAGS += -Werror
CFLAGS += $(CFLAGS_libxenctrl) $(CFLAGS_libxenguest) $(CFLAGS_libxenstore)
-PROGRAMS = xc_restore xc_save readnotes lsevtchn
+PROGRAMS = xc_restore xc_save readnotes lsevtchn xc_kexec
- LDLIBS = $(LDFLAGS_libxenctrl) $(LDFLAGS_libxenguest) $(LDFLAGS_libxenstore)
+-LDLIBS = $(LDFLAGS_libxenctrl) $(LDFLAGS_libxenguest) $(LDFLAGS_libxenstore)
++LDLIBS = $(LDFLAGS_libxenctrl) $(LDFLAGS_libxenguest) $(LDFLAGS_libxenstore) -lbz2
+ .PHONY: all
+ all: build
@@ -27,6 +27,11 @@ build: $(PROGRAMS)
$(PROGRAMS): %: %.o
$(CC) $(CFLAGS) $(LDFLAGS) $^ $(LDLIBS) -o $@
++++++ x86-show-page-walk-early.patch ++++++
--- /var/tmp/diff_new_pack.ZKlwWp/_old 2011-05-10 11:08:12.000000000 +0200
+++ /var/tmp/diff_new_pack.ZKlwWp/_new 2011-05-10 11:08:12.000000000 +0200
@@ -21,7 +21,7 @@
printk("%p ", _p(*stk++));
--- a/xen/arch/x86/x86_32/mm.c
+++ b/xen/arch/x86/x86_32/mm.c
-@@ -122,6 +122,8 @@ void __init paging_init(void)
+@@ -123,6 +123,8 @@ void __init paging_init(void)
#undef CNT
#undef MFN
@@ -64,7 +64,7 @@
unmap_domain_page(l1t);
--- a/xen/arch/x86/x86_64/mm.c
+++ b/xen/arch/x86/x86_64/mm.c
-@@ -725,6 +725,8 @@ void __init paging_init(void)
+@@ -739,6 +739,8 @@ void __init paging_init(void)
#undef CNT
#undef MFN
@@ -117,7 +117,7 @@
}
--- a/xen/include/asm-x86/mm.h
+++ b/xen/include/asm-x86/mm.h
-@@ -443,6 +443,8 @@ TYPE_SAFE(unsigned long,mfn);
+@@ -441,6 +441,8 @@ TYPE_SAFE(unsigned long,mfn);
#define SHARED_M2P_ENTRY (~0UL - 1UL)
#define SHARED_M2P(_e) ((_e) == SHARED_M2P_ENTRY)
++++++ xen-config.diff ++++++
--- /var/tmp/diff_new_pack.ZKlwWp/_old 2011-05-10 11:08:13.000000000 +0200
+++ /var/tmp/diff_new_pack.ZKlwWp/_new 2011-05-10 11:08:13.000000000 +0200
@@ -18,17 +18,12 @@
===================================================================
--- xen-4.0.1-testing.orig/tools/libxc/Makefile
+++ xen-4.0.1-testing/tools/libxc/Makefile
-@@ -169,10 +169,10 @@ zlib-options = $(shell \
- fi) | grep $(1))
- endif
+@@ -174,7 +174,7 @@ xc_dom_bzimageloader.opic: CFLAGS += $(c
--xc_dom_bzimageloader.o: CFLAGS += $(call zlib-options,D)
--xc_dom_bzimageloader.opic: CFLAGS += $(call zlib-options,D)
-+#xc_dom_bzimageloader.o: CFLAGS += $(call zlib-options,D)
-+#xc_dom_bzimageloader.opic: CFLAGS += $(call zlib-options,D)
-
--libxenguest.so.$(MAJOR).$(MINOR): LDFLAGS += $(call zlib-options,l)
-+#libxenguest.so.$(MAJOR).$(MINOR): LDFLAGS += $(call zlib-options,l)
+ libxenguest.so.$(MAJOR).$(MINOR): LDFLAGS += $(call zlib-options,l)
libxenguest.so.$(MAJOR).$(MINOR): $(GUEST_PIC_OBJS) libxenctrl.so
- $(CC) $(CFLAGS) $(LDFLAGS) -Wl,$(SONAME_LDFLAG) -Wl,libxenguest.so.$(MAJOR) $(SHLIB_CFLAGS) -o $@ $(GUEST_PIC_OBJS) -lz -lxenctrl $(PTHREAD_LIBS)
+- $(CC) $(CFLAGS) $(LDFLAGS) -Wl,$(SONAME_LDFLAG) -Wl,libxenguest.so.$(MAJOR) $(SHLIB_CFLAGS) -o $@ $(GUEST_PIC_OBJS) -lz -lxenctrl $(PTHREAD_LIBS)
++ $(CC) $(CFLAGS) $(LDFLAGS) -Wl,$(SONAME_LDFLAG) -Wl,libxenguest.so.$(MAJOR) $(SHLIB_CFLAGS) -o $@ $(GUEST_PIC_OBJS) -lz -lxenctrl $(PTHREAD_LIBS) $(call zlib-options,l)
+
+ -include $(DEPS)
++++++ xenconsole-no-multiple-connections.patch ++++++
Index: xen-4.0.1-testing/tools/console/client/main.c
===================================================================
--- xen-4.0.1-testing.orig/tools/console/client/main.c
+++ xen-4.0.1-testing/tools/console/client/main.c
@@ -95,6 +95,7 @@ static int get_pty_fd(struct xs_handle *
* Assumes there is already a watch set in the store for this path. */
{
struct timeval tv;
+ struct flock lock;
fd_set watch_fdset;
int xs_fd = xs_fileno(xs), pty_fd = -1;
int start, now;
@@ -121,6 +122,12 @@ static int get_pty_fd(struct xs_handle *
if (pty_fd == -1)
err(errno, "Could not open tty `%s'",
pty_path);
+ memset(&lock, 0, sizeof(lock));
+ lock.l_type = F_WRLCK;
+ lock.l_whence = SEEK_SET;
+ if (fcntl(pty_fd, F_SETLK, &lock) != 0)
+ err(errno, "Could not lock tty '%s'",
+ pty_path);
free(pty_path);
}
}
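The patch takes a POSIX advisory record lock (fcntl with F_SETLK) on the pty so that a second xenconsole attaching to the same tty fails instead of silently sharing it. A minimal demonstration of that cross-process exclusion, using a temporary file and a forked child in place of the pty (the path template and helper name are illustrative):

```c
#include <errno.h>
#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <sys/wait.h>
#include <unistd.h>

/* Returns 1 if a child process is denied a write lock the parent
 * already holds -- the behaviour get_pty_fd() relies on. */
static int lock_is_exclusive(void)
{
    char path[] = "/tmp/lockdemo-XXXXXX";
    int fd = mkstemp(path);
    struct flock lock;
    pid_t pid;
    int status;

    if (fd < 0)
        return -1;
    memset(&lock, 0, sizeof(lock));
    lock.l_type = F_WRLCK;           /* whole-file write lock ... */
    lock.l_whence = SEEK_SET;        /* ... as in the patch */
    if (fcntl(fd, F_SETLK, &lock) != 0)
        return -1;

    if ((pid = fork()) == 0) {       /* child: try to lock it again */
        int cfd = open(path, O_RDWR);
        int rc = fcntl(cfd, F_SETLK, &lock);
        /* across processes F_SETLK must fail with EACCES or EAGAIN */
        _exit(rc != 0 && (errno == EACCES || errno == EAGAIN) ? 1 : 0);
    }
    waitpid(pid, &status, 0);
    close(fd);
    unlink(path);
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}
```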
++++++ xend-domain-lock.patch ++++++
--- /var/tmp/diff_new_pack.ZKlwWp/_old 2011-05-10 11:08:13.000000000 +0200
+++ /var/tmp/diff_new_pack.ZKlwWp/_new 2011-05-10 11:08:13.000000000 +0200
@@ -94,7 +94,7 @@
XendTask.log_progress(0, 30, self._constructDomain)
XendTask.log_progress(31, 60, self._initDomain)
-@@ -2989,6 +2991,11 @@ class XendDomainInfo:
+@@ -2987,6 +2989,11 @@ class XendDomainInfo:
self._stateSet(DOM_STATE_HALTED)
self.domid = None # Do not push into _stateSet()!
@@ -106,7 +106,7 @@
finally:
self.refresh_shutdown_lock.release()
-@@ -4498,6 +4505,74 @@ class XendDomainInfo:
+@@ -4497,6 +4504,74 @@ class XendDomainInfo:
def has_device(self, dev_class, dev_uuid):
return (dev_uuid in self.info['%s_refs' % dev_class.lower()])
++++++ xend-validate-nic-model.patch ++++++
Do not invoke qemu with unsupported NIC model
Unlike xmdomain.cfg config format, libvirt does not distinguish between
'model' and 'type' in its vif XML, which is correct IMO since netfront
(pv NIC) is just another NIC model. libvirt maps a model of type netfront
to 'type=netfront', and all other models to 'type=ioemu,model=user-val'.
For example
 libvirt vif XML xmdomain.cfg
 <model type='netfront'/> vif=[ 'type=netfront' ]
 <model type='user-val'/> vif=[ 'type=ioemu,model=user-val' ]
In the latter case, qemu-dm is invoked with
-net nic,vlan=1,macaddr=00:16:3e:3b:c6:0d,model=user-val,bridge=br0
which causes qemu-dm to exit with failure if user-val is not a
supported NIC model. Since monitoring of qemu-dm's exit is asynchronous
with respect to domain creation, the domain appears to have been created
successfully from the client's (xm, libvirt, etc.) perspective when in
fact it was not.
This patch simply checks that the specified NIC model is supported by
qemu-dm, and raises a VmError if not. qemu reports its supported models
when invoked with '-net nic,model=?', and also lists them when invoked
with an unsupported one.
Signed-off-by: Jim Fehlig
Index: xen-4.0.1-testing/tools/python/xen/xend/image.py
===================================================================
--- xen-4.0.1-testing.orig/tools/python/xen/xend/image.py
+++ xen-4.0.1-testing/tools/python/xen/xend/image.py
@@ -50,6 +50,9 @@ MAX_GUEST_CMDLINE = 1024
sentinel_path_prefix = '/var/run/xend/dm-'
sentinel_fifos_inuse = { }
+supported_nic_models = ['ne2k_pci', 'i82551', 'i82557b', 'i82559er',
+ 'rtl8139', 'e1000', 'pcnet', 'virtio']
+
def cleanup_stale_sentinel_fifos():
for path in glob.glob(sentinel_path_prefix + '*.fifo'):
if path in sentinel_fifos_inuse: continue
@@ -925,6 +928,10 @@ class HVMImageHandler(ImageHandler):
raise VmError("MAC address not specified or generated.")
bridge = devinfo.get('bridge', None)
model = devinfo.get('model', 'rtl8139')
+ if model not in supported_nic_models:
+ raise VmError("Emulation of NIC model '%s' is not supported" %
+ model)
+
ret.append("-net")
net = "nic,vlan=%d,macaddr=%s,model=%s" % (nics, mac, model)
if bridge:
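The check the hunk adds can be read in isolation as follows; the list is the one hard-coded by the patch, and the helper name validate_nic_model is illustrative (the patch inlines the test in HVMImageHandler rather than factoring it out). Note that 'netfront' is deliberately absent: a netfront vif bypasses qemu emulation entirely, so it never reaches this code path.

```python
# Models the bundled qemu-dm can emulate, as enumerated in the patch.
supported_nic_models = ['ne2k_pci', 'i82551', 'i82557b', 'i82559er',
                        'rtl8139', 'e1000', 'pcnet', 'virtio']

class VmError(Exception):
    """Stand-in for xend's VmError."""

def validate_nic_model(model):
    # Fail synchronously at domain-creation time, instead of letting
    # qemu-dm exit asynchronously after the client thinks the domain
    # was created.
    if model not in supported_nic_models:
        raise VmError("Emulation of NIC model '%s' is not supported" %
                      model)

validate_nic_model('rtl8139')        # the default model: accepted
try:
    validate_nic_model('netfront')   # pv NIC, not a qemu emulation
except VmError as e:
    print(e)
```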
++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++
Remember to have fun...