Integrate with libaudit.so for auditing of important operations.
libvirtd gains a couple of config entries for auditing. By
default it will enable auditing if auditing is enabled on the
host. It can be configured to force an exit if auditing is
disabled on the host. It can also send audit messages via the
libvirt internal logging API.
Places requiring audit reporting can use the VIR_AUDIT
macro to report data. This is a no-op unless auditing is
enabled.
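A hypothetical usage sketch (the macro's real signature is defined in
src/util/virtaudit.h by this patch; the record type and arguments
below are illustrative assumptions, not the actual API):

  /* Illustrative only: report a successful domain start; this is
   * a no-op unless auditing is enabled. */
  VIR_AUDIT(VIR_AUDIT_RECORD_MACHINE_CONTROL, true,
            "virt=qemu op=start vm=%s", vm->def->name);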
* autobuild.sh, mingw32-libvirt.spec.in: Disable audit
on mingw
* configure.ac: Add check for libaudit
* daemon/libvirtd.aug, daemon/libvirtd.conf,
daemon/test_libvirtd.aug, daemon/libvirtd.c: Add config
options to enable auditing
* include/libvirt/virterror.h, src/util/virterror.c: Add
VIR_FROM_AUDIT source
* libvirt.spec.in: Enable audit
* src/util/virtaudit.h, src/util/virtaudit.c: Simple internal
API for auditing messages
This patch series focuses on xendConfigVersion 2 (xm_internal) and 3
(xend_internal), but leaves out changes for xenapi drivers.
See this link for more details about vcpu_avail usage with xm:
http://lists.xensource.com/archives/html/xen-devel/2009-11/msg01061.html
This relies on the fact that def->maxvcpus can be at most 32 with xen.
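A minimal sketch of the resulting encoding (assuming vcpu_avail is a
bitmask of enabled vcpus; since maxvcpus <= 32 it fits in an unsigned
int):

  unsigned int vcpu_avail = 0;
  if (def->vcpus < def->maxvcpus)
      vcpu_avail = (1U << def->vcpus) - 1;  /* bits 0..vcpus-1 set */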
* src/xen/xend_internal.c (xenDaemonParseSxpr)
(sexpr_to_xend_domain_info, xenDaemonFormatSxpr): Use vcpu_avail
when current vcpus is less than maximum.
* src/xen/xm_internal.c (xenXMDomainConfigParse)
(xenXMDomainConfigFormat): Likewise.
* tests/xml2sexprdata/xml2sexpr-pv-vcpus.sexpr: New file.
* tests/sexpr2xmldata/sexpr2xml-pv-vcpus.sexpr: Likewise.
* tests/sexpr2xmldata/sexpr2xml-pv-vcpus.xml: Likewise.
* tests/xmconfigdata/test-paravirt-vcpu.cfg: Likewise.
* tests/xmconfigdata/test-paravirt-vcpu.xml: Likewise.
* tests/xml2sexprtest.c (mymain): New test.
* tests/sexpr2xmltest.c (mymain): Likewise.
* tests/xmconfigtest.c (mymain): Likewise.
* src/qemu/qemu_conf.c (qemuParseCommandLineSmp): Distinguish
between vcpus and maxvcpus, for new enough qemu.
* tests/qemuargv2xmltest.c (mymain): Add new test.
* tests/qemuxml2argvtest.c (mymain): Likewise.
* tests/qemuxml2xmltest.c (mymain): Likewise.
* tests/qemuxml2argvdata/qemuxml2argv-smp.args: New file.
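For reference, the distinction maps onto QEMU's -smp syntax roughly
like this (illustrative values):

  -smp 2,maxcpus=4    # 2 vcpus online at boot, expandable to 4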
Although this patch adds a distinction between maximum vcpus and
current vcpus in the XML, the values should be identical for all
drivers at this point. Only in subsequent per-driver patches will
a distinction be made.
In general, virDomainGetInfo should prefer the current vcpus.
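In the XML, the distinction looks like this (illustrative values):

  <vcpu current='2'>4</vcpu>

where the element content is the maximum vcpu count and the current
attribute gives the number of vcpus online.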
* src/conf/domain_conf.h (_virDomainDef): Adjust vcpus to unsigned
short, to match virDomainGetInfo limit. Add maxvcpus member.
* src/conf/domain_conf.c (virDomainDefParseXML)
(virDomainDefFormat): parse and print out vcpu details.
* src/xen/xend_internal.c (xenDaemonParseSxpr)
(xenDaemonFormatSxpr): Manage both vcpu numbers, and require them
to be equal for now.
* src/xen/xm_internal.c (xenXMDomainConfigParse)
(xenXMDomainConfigFormat): Likewise.
* src/phyp/phyp_driver.c (phypDomainDumpXML): Likewise.
* src/openvz/openvz_conf.c (openvzLoadDomains): Likewise.
* src/openvz/openvz_driver.c (openvzDomainDefineXML)
(openvzDomainCreateXML, openvzDomainSetVcpusInternal): Likewise.
* src/vbox/vbox_tmpl.c (vboxDomainDumpXML, vboxDomainDefineXML):
Likewise.
* src/xenapi/xenapi_driver.c (xenapiDomainDumpXML): Likewise.
* src/xenapi/xenapi_utils.c (createVMRecordFromXml): Likewise.
* src/esx/esx_vmx.c (esxVMX_ParseConfig, esxVMX_FormatConfig):
Likewise.
* src/qemu/qemu_conf.c (qemuBuildSmpArgStr)
(qemuParseCommandLineSmp, qemuParseCommandLine): Likewise.
* src/qemu/qemu_driver.c (qemudDomainHotplugVcpus): Likewise.
* src/opennebula/one_conf.c (xmlOneTemplate): Likewise.
Note - this wrapping is completely mechanical; the old API will
function identically, since the new API validates that the exact
same flags are provided by the old API. On a per-driver basis,
it may make sense to have the old API pass a different set of flags,
but that should be done in the per-driver patch that implements
the full range of flag support in the new API.
* src/esx/esx_driver.c (esxDomainSetVcpus, esxDomainGetMaxVcpus):
Move guts...
(esxDomainSetVcpusFlags, esxDomainGetVcpusFlags): ...to new
functions.
(esxDriver): Trivially support the new API.
* src/openvz/openvz_driver.c (openvzDomainSetVcpus)
(openvzDomainSetVcpusFlags, openvzDomainGetMaxVcpus)
(openvzDomainGetVcpusFlags, openvzDriver): Likewise.
* src/phyp/phyp_driver.c (phypDomainSetCPU)
(phypDomainSetVcpusFlags, phypGetLparCPUMAX)
(phypDomainGetVcpusFlags, phypDriver): Likewise.
* src/qemu/qemu_driver.c (qemudDomainSetVcpus)
(qemudDomainSetVcpusFlags, qemudDomainGetMaxVcpus)
(qemudDomainGetVcpusFlags, qemuDriver): Likewise.
* src/test/test_driver.c (testSetVcpus, testDomainSetVcpusFlags)
(testDomainGetMaxVcpus, testDomainGetVcpusFlags, testDriver):
Likewise.
* src/vbox/vbox_tmpl.c (vboxDomainSetVcpus)
(vboxDomainSetVcpusFlags, vboxDomainGetMaxVcpus)
(vboxDomainGetVcpusFlags, virDriver): Likewise.
* src/xen/xen_driver.c (xenUnifiedDomainSetVcpus)
(xenUnifiedDomainSetVcpusFlags, xenUnifiedDomainGetMaxVcpus)
(xenUnifiedDomainGetVcpusFlags, xenUnifiedDriver): Likewise.
* src/xenapi/xenapi_driver.c (xenapiDomainSetVcpus)
(xenapiDomainSetVcpusFlags, xenapiDomainGetMaxVcpus)
(xenapiDomainGetVcpusFlags, xenapiDriver): Likewise.
(xenapiError): New helper macro.
This factors common checks (such as a nonzero vcpu count) up front,
but drivers will still need to do additional flag checks.
* src/libvirt.c (virDomainSetVcpusFlags, virDomainGetVcpusFlags):
New functions.
(virDomainSetVcpus, virDomainGetMaxVcpus): Refer to new API.
API agreed on in
https://www.redhat.com/archives/libvir-list/2010-September/msg00456.html,
but modified for enum names to be consistent with virDomainDeviceModifyFlags.
* include/libvirt/libvirt.h.in (virDomainVcpuFlags)
(virDomainSetVcpusFlags, virDomainGetVcpusFlags): New
declarations.
* src/libvirt_public.syms: Export new symbols.
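For reference, the new public declarations take this shape:

  typedef enum {
      VIR_DOMAIN_VCPU_LIVE    = (1 << 0), /* affect the active domain */
      VIR_DOMAIN_VCPU_CONFIG  = (1 << 1), /* affect the persistent config */
      VIR_DOMAIN_VCPU_MAXIMUM = (1 << 2), /* operate on the maximum count */
  } virDomainVcpuFlags;

  int virDomainSetVcpusFlags(virDomainPtr domain,
                             unsigned int nvcpus,
                             unsigned int flags);
  int virDomainGetVcpusFlags(virDomainPtr domain,
                             unsigned int flags);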
In the table built for traffic coming from the VM and going to the host,
make the following changes:
- don't ACCEPT the packets but 'RETURN' them, letting subsequent
host-specific firewall rules evaluate whether the traffic is allowed
to enter
- use '-m state' matching in the rules, as everywhere else
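An illustrative rule of the new form (the chain name and match here
are hypothetical; libvirt generates per-interface chains):

  iptables -A HI-vnet0 -p tcp -m state --state NEW,ESTABLISHED -j RETURN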
ESX(i) uses UTF-8, but a Windows-based GSX server writes
Windows-1252 encoded VMX files.
Add a test case to ensure that libxml2 provides Windows-1252
to UTF-8 conversion.
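A minimal sketch of the conversion the test relies on (error handling
trimmed; assumes libxml2 was built with a converter, e.g. iconv, for
this encoding):

  #include <libxml/encoding.h>

  /* Convert a Windows-1252 buffer to UTF-8 using libxml2's encoding
   * handler; returns the number of UTF-8 bytes produced, or -1. */
  static int
  convertToUTF8(unsigned char *dst, int dstlen,
                const unsigned char *src, int srclen)
  {
      xmlCharEncodingHandlerPtr handler =
          xmlFindCharEncodingHandler("Windows-1252");
      int inlen = srclen, outlen = dstlen;

      if (handler == NULL || handler->input == NULL)
          return -1;
      if (handler->input(dst, &outlen, src, &inlen) < 0)
          return -1;
      return outlen;
  }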
Since bugs due to double-closed file descriptors are difficult to track down in a multi-threaded system, I am introducing the VIR_CLOSE(fd) macro to help avoid mistakes here.
There are lots of places where close() is being used. In this patch I am only cleaning up usage of close() in src/conf where the problems were.
I also dare to declare close() as being deprecated in the libvirt code base (HACKING).
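The idea, in a minimal sketch (the actual definition in this patch may
differ, e.g. in how it preserves errno):

  #include <unistd.h>

  /* Close the fd and poison the variable, so that a second
   * VIR_CLOSE() on it becomes a harmless no-op. */
  static inline int virClose(int *fdptr)
  {
      int rc = 0;

      if (*fdptr >= 0) {
          rc = close(*fdptr);
          *fdptr = -1;
      }
      return rc;
  }
  # define VIR_CLOSE(FD) virClose(&(FD))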
Over root-squashing NFS, when virFileOperation() is called as uid==0,
it may fail with EACCES, but also with EPERM, due to
virFileOperationNoFork()'s failed attempt to chown a writable file.
qemudDomainSaveFlag() should expect this case, too.
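In sketch form, the check qemudDomainSaveFlag() needs (assuming
virFileOperation()'s negative-errno return convention):

  #include <errno.h>

  /* Both EACCES and EPERM can mean "denied by root-squash", so treat
   * them alike before falling back to the fork+setuid retry path. */
  static int
  deniedByRootSquash(int rc)
  {
      return rc == -EACCES || rc == -EPERM;
  }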
qemudOpenAsUID is intended to open a file with the credentials of a
specified uid. The current implementation fails if the file is
accessible to one of uid's groups but not owned by uid.
This patch replaces the supplementary group list that the child process
inherited from libvirtd with the default group list of uid.
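A sketch of the child-side fix (simplified; error reporting omitted):

  #include <grp.h>
  #include <pwd.h>
  #include <sys/types.h>
  #include <unistd.h>

  /* Replace the supplementary groups inherited from libvirtd with
   * uid's default group list before dropping privileges. */
  static int
  setGroupsForUid(uid_t uid)
  {
      struct passwd *pw = getpwuid(uid);

      if (!pw)
          return -1;
      if (initgroups(pw->pw_name, pw->pw_gid) < 0)
          return -1;
      if (setgid(pw->pw_gid) < 0 || setuid(uid) < 0)
          return -1;
      return 0;
  }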
* docs/formatdomain.html.in: Add memtune element details, including min_guarantee
* src/libvirt.c: Update the virDomainGetMemoryParameters API
description, making it clearer that the user first needs to call the
API to get the number of supported parameters and then call it again
to get the values.
* tools/virsh.pod: Add usage of the new memtune command to the virsh manpage
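The two-call pattern described above, in sketch form (error checks
omitted; the parameter struct was named virMemoryParameter when this
API was introduced):

  #include <stdlib.h>
  #include <libvirt/libvirt.h>

  static virMemoryParameterPtr
  getMemoryParams(virDomainPtr dom, int *nparams)
  {
      virMemoryParameterPtr params;

      *nparams = 0;
      /* 1st call: params == NULL, *nparams == 0 -> just get count */
      virDomainGetMemoryParameters(dom, NULL, nparams, 0);
      params = calloc(*nparams, sizeof(*params));
      /* 2nd call: fetch the actual fields, types and values */
      virDomainGetMemoryParameters(dom, params, nparams, 0);
      return params;
  }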
This introduces a new attribute on the filesystem element
to support a customizable access mode for the mount type.
Valid accessmode values are: passthrough, mapped and squash.
Usage:
<filesystem type='mount' accessmode='passthrough'>
<source dir='/export/to/guest'/>
<target dir='mount_tag'/>
</filesystem>
passthrough is the default mode if not specified; that is
also the current behaviour.
The following filter transition, from a filter allowing incoming TCP connections
<rule action='accept' direction='in' priority='401'>
<tcp/>
</rule>
<rule action='accept' direction='out' priority='500'>
<tcp/>
</rule>
to one that does not allow them
<rule action='drop' direction='in' priority='401'>
<tcp/>
</rule>
<rule action='accept' direction='out' priority='500'>
<tcp/>
</rule>
previously did not cut off existing (ssh) connections, but only prevented newly initiated ones. The attached patch cuts off existing connections as well, thus enforcing what the filter describes.
I had only tested with a configuration where the physical interface is connected to the bridge where the filters are applied. This patch now also solves a filtering problem where the physical interface is not connected to the bridge, but the bridge is given an IP address and the host routes between bridge and physical interface. Here the filters drop non-allowed traffic on the outgoing side on the host.
Explicitly raising a clear error when a user tries to migrate a
guest with assigned host devices is much better than waiting for a
mysterious error with no clue as to the reason.
When only some of the host CPUs given to cpuBaseline contain a
<vendor> element, the baseline CPU should not contain it. Otherwise
the result would not be compatible with the host CPUs that lack a
vendor. CPU vendors are still taken into account when computing the
baseline CPU; the vendor is just removed from the result.
Recent CPU models were specified using an invalid vendor element,
<vendor>NAME</vendor>, which was silently ignored due to a bug in
the code that parses it.
'make -C src rpcgen' is supposed to be idempotent. But commit
f928f43b7b mistakenly edited a generated file by hand rather
than fixing the upstream file.
* src/remote/remote_protocol.x (remote_memory_param_value): Use
correct spelling of enum values.
* src/remote/remote_protocol.c: Regenerate.
This enables support for nested SVM using the regular CPU
model/features block. If the CPU model or features include
'svm', then the '-enable-nesting' flag will be added to the
QEMU command line. The latest out-of-tree patches for nested
'vmx' no longer require the '-enable-nesting' flag; they
instead just look at the CPU features. Several of the models
already include svm support, but QEMU was silently masking
out the svm bit. So this will enable SVM on such models.
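For example, nesting would be requested through the guest CPU
configuration along these lines (model choice illustrative):

  <cpu match='exact'>
    <model>qemu64</model>
    <feature policy='require' name='svm'/>
  </cpu>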
* src/qemu/qemu_conf.h: flag for -enable-nesting
* src/qemu/qemu_conf.c: Use -enable-nesting if VMX or SVM are in
the CPUID
* src/cpu/cpu.h, src/cpu/cpu.c: API to check for a named feature
* src/cpu/cpu_x86.c: x86 impl of feature check
* src/libvirt_private.syms: Add cpuHasFeature
* tests/qemuhelptest.c: Add nesting flag where required
* src/xen/sexpr.c: Ensure () are escaped in sexpr2string
* tests/sexpr2xmldata/sexpr2xml-boot-grub.sexpr,
tests/sexpr2xmldata/sexpr2xml-boot-grub.xml,
tests/xml2sexprdata/xml2sexpr-boot-grub.sexpr,
tests/xml2sexprdata/xml2sexpr-boot-grub.xml: Data files to
check escaping
* tests/sexpr2xmltest.c, tests/xml2sexprtest.c: Add boot-grub
escaping test case
This is from a bug report and conversation on IRC where Soren reported that while a filter update is occurring on one or more VMs (due to a rule having been edited for example), a deadlock can occur when a VM referencing a filter is started.
The problem is caused by two locking sequences that do not lock
in the same order:

  qemu_driver, qemu_domain, filter    # for the VM start operation
  filter, qemu_driver, qemu_domain    # for the filter update operation

The problem is the 2nd lock sequence: there the qemu_driver lock is
grabbed in qemu_driver:qemudVMFilterRebuild().
The following solution is based on the idea of re-arranging the 2nd
sequence of locks as follows:

  qemu_driver, filter, qemu_driver, qemu_domain

and making the qemu driver recursively lockable so that a second lock
can occur. This then leads to the following net locking sequence

  qemu_driver, filter, qemu_domain

where the 2nd qemu_driver lock has been (logically) eliminated.
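For illustration, making a driver lock recursive with pthreads looks
roughly like this (a sketch; libvirt wraps mutexes in its own virMutex
API, so the names here are illustrative):

  #include <pthread.h>

  static pthread_mutex_t driverLock;

  /* Initialize the lock as recursive, so the same thread may take
   * it a second time without deadlocking. */
  static int
  driverLockInit(void)
  {
      pthread_mutexattr_t attr;
      int rc;

      pthread_mutexattr_init(&attr);
      pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE);
      rc = pthread_mutex_init(&driverLock, &attr);
      pthread_mutexattr_destroy(&attr);
      return rc;
  }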
The 2nd part of the idea is that the lock sequences (filter,
qemu_domain) and (qemu_domain, filter) become interchangeable if all
code paths where filter AND qemu_domain are locked take a preceding
qemu_driver lock that blocks their concurrent execution.
So, the following code paths exist towards
qemu_driver:qemudVMFilterRebuild(), where we now want to put a
qemu_driver lock in front of the filter lock.
-> nwfilterUndefine()   [ locks the filter ]
   -> virNWFilterTestUnassignDef()
      -> virNWFilterTriggerVMFilterRebuild()
         -> qemudVMFilterRebuild()

-> nwfilterDefine()
   -> virNWFilterPoolAssignDef()   [ locks the filter ]
      -> virNWFilterTriggerVMFilterRebuild()
         -> qemudVMFilterRebuild()

-> nwfilterDriverReload()
   -> virNWFilterPoolLoadAllConfigs()
      -> virNWFilterPoolObjLoad()
         -> virNWFilterPoolAssignDef()   [ locks the filter ]
            -> virNWFilterTriggerVMFilterRebuild()
               -> qemudVMFilterRebuild()

-> nwfilterDriverStartup()
   -> virNWFilterPoolLoadAllConfigs()
      -> virNWFilterPoolObjLoad()
         -> virNWFilterPoolAssignDef()   [ locks the filter ]
            -> virNWFilterTriggerVMFilterRebuild()
               -> qemudVMFilterRebuild()
Qemu is not the only driver using the nwfilter driver; the UML driver also calls into it. Therefore qemudVMFilterRebuild() can be exchanged with umlVMFilterRebuild(), along with the qemu_driver lock that can then be a uml_driver lock. Further, since UML and QEMU domains can be running on the same machine, triggering a rebuild of the filter can touch both types of drivers and their domains.
In the patch below I am extending each nwfilter callback driver with functions for locking and unlocking the (VM) driver (UML, QEMU), and introducing new functions for locking all registered callback drivers and unlocking them. Then I am distributing the lock-all-cbdrivers/unlock-all-cbdrivers calls into the above call paths. The last shown call path, starting with nwfilterDriverStartup(), is problematic since the nwfilter driver is initialized before the QEMU and UML drivers are, so a lock in that path would attempt to lock a NULL pointer. However, virNWFilterTriggerVMFilterRebuild() is never invoked in that path, so we never lock either the qemu_driver or the uml_driver there. Therefore, only the first 3 paths now receive calls to lock and unlock all callback drivers. Now that the locks are distributed where it matters, I can remove the qemu_driver and uml_driver locks from qemudVMFilterRebuild() and umlVMFilterRebuild(), and the recursive locks are no longer required.
For now I want to put this out as an RFC patch. I have tested it by 'stretching' the critical section after the define/undefine functions each lock the filter, so that I can (easily) execute another VM operation (suspend, start) concurrently. That code is in this patch and can be deactivated if you want. It seems to work OK, and operations are blocked while the update is being done.
I still also want to verify the other assumption above that locking filter and qemu_domain always has a preceding qemu_driver lock.
* include/libvirt/libvirt.h.in: some of the function type
descriptions were broken, so they could not be automatically
documented
* src/util/event.c docs/apibuild.py: event.c exports one public API,
so it needs to be scanned too; also avoid a few warnings
Make use of the existing <filesystem> element to support plan9fs
filesystem passthrough in the QEMU driver:
<filesystem type='mount'>
<source dir='/export/to/guest'/>
<target dir='/import/from/host'/>
</filesystem>
NB, the target is not actually a directory, it is merely an arbitrary
string tag that is exported to the guest as a hint for where to mount
it.
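On the guest side, the tag would then typically be mounted with
something like (illustrative):

  mount -t 9p -o trans=virtio /import/from/host /mnt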
Add proper documentation to the new VIR_DOMAIN_MEMORY_* macros in
libvirt.h.in to placate apibuild.py.
Mark args as unused for libvirt_virDomain{Get,Set}MemoryParameters
in the Python bindings and add both to the libvirtMethods array.
Update remote_protocol-structs to placate make syntax-check.
Undo unintended modifications in vboxDomainGetInfo.
Update the function table of the VirtualBox and XenAPI drivers.
Add parsing code for memory tunables in the domain XML file; also
change the internal definition structures used for domain memory
information.
Adds a new specific test.
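The parsed element takes this shape (illustrative values, in
kilobytes):

  <memtune>
    <hard_limit>1048576</hard_limit>
    <soft_limit>131072</soft_limit>
    <swap_hard_limit>2097152</swap_hard_limit>
    <min_guarantee>65536</min_guarantee>
  </memtune>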
Public API to set/get the memory tunables supported by the hypervisors.
dv:
* some cleanups in libvirt.c
* adding extra checks in the new libvirt.c entry points
v4:
* Move exporting the public API to this patch
* Add an unsigned int flags argument to the public API for future
extensions
v3:
* Add domainGetMemoryParameters and set it to NULL in all the driver
interface structures
v2:
* Initialize domainSetMemoryParameters to NULL in all the driver
interface structures.
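The resulting entry points take this shape (a sketch based on the
description above; the struct name is the one used by this series):

  int virDomainSetMemoryParameters(virDomainPtr domain,
                                   virMemoryParameterPtr params,
                                   int nparams,
                                   unsigned int flags);
  int virDomainGetMemoryParameters(virDomainPtr domain,
                                   virMemoryParameterPtr params,
                                   int *nparams,
                                   unsigned int flags);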
Some features provided by the recently added CPU models were listed
twice for each model. This was a result of generating the XML
automatically from qemu's CPU configuration file without noticing
the redundancy.