Libvirt uses a domain name to fill in owner_name in sanlock_options in
virLockManagerSanlockAcquire. Unfortunately, owner_name is limited to
SANLK_NAME_LEN characters (including trailing '\0'), which means domains
with longer names fail to start when sanlock is enabled. However, we can
truncate the name when setting owner_name as explained by sanlock's
author:
Setting sanlk_options or the owner_name is unnecessary, and has very
little to no benefit. If you do provide something in owner_name, it can
be anything, sanlock doesn't care or use it.
If you run the command "sanlock status", the output will display a list
of clients connected to the sanlock daemon. This client list is
displayed as "pid owner_name" if the client has provided an owner_name
via sanlk_options. This debugging output is the only usage of
owner_name, so its only benefit is to potentially provide a more human
friendly output for debugging purposes.
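A minimal sketch of the truncation, assuming an owner_name field of
SANLK_NAME_LEN bytes as described above (the struct below only mirrors the
relevant field, it is not sanlock's real sanlk_options, and the value of
SANLK_NAME_LEN is assumed):

#include <stdio.h>

#define SANLK_NAME_LEN 48               /* assumed; defined by sanlock's headers */

struct sanlk_options_example {
    char owner_name[SANLK_NAME_LEN];    /* includes the trailing '\0' */
};

/* Copy the domain name, silently truncating it when it is too long.
 * snprintf() always NUL-terminates, so overlong names no longer make
 * the acquire call fail. */
static void
set_owner_name(struct sanlk_options_example *opt, const char *vm_name)
{
    snprintf(opt->owner_name, sizeof(opt->owner_name), "%s", vm_name);
}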
Only tested on v7 but the v8 equivalent seems pretty obvious.
XEN_CAP_REGEX already accepts more than it should (e.g. x86_64p or x86_32be)
but I have stuck with the existing pattern.
With this I can create a guest from:
<domain type='xen'>
<name>libvirt-test</name>
<uuid>6343998e-9eda-11e3-98f6-77252a7d02f3</uuid>
<memory>393216</memory>
<currentMemory>393216</currentMemory>
<vcpu>1</vcpu>
<os>
<type arch='armv7l' machine='xenpv'>linux</type>
<kernel>/boot/vmlinuz-arm-native</kernel>
<cmdline>console=hvc0 earlyprintk debug root=/dev/xvda1</cmdline>
</os>
<clock offset='utc'/>
<on_poweroff>destroy</on_poweroff>
<on_reboot>restart</on_reboot>
<on_crash>destroy</on_crash>
<devices>
<disk type='block' device='disk'>
<source dev='/dev/marilith-n0/debian-disk'/>
<target dev='xvda1'/>
</disk>
<interface type='bridge'>
<mac address='8e:a7:8e:3c:f4:f6'/>
<source bridge='xenbr0'/>
</interface>
</devices>
</domain>
using "virsh create", and I can destroy it too.
Currently virsh console fails with:
Connected to domain libvirt-test
Escape character is ^]
error: internal error: cannot find character device <null>
I haven't investigated yet.
Signed-off-by: Ian Campbell <ian.campbell@citrix.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
According to commit b4e0299d, if networkAllocateActualDevice() is
successful, it will *always* allocate an iface->data.network.actual,
so we can use this during networkReleaseActualDevice() to know whether
there is really anything to undo. We were properly using this
information to only decrement the network connections counter if it
had previously been incremented, but we were unconditionally
unplugging bandwidth and calling the "unplugged" network hook for
*all* interfaces (during qemuProcessStop()) whether they had been
previously plugged or not. This caused problems if a domain failed to
start at some time prior to all interfaces being allocated. (I
encountered this when an interface had a bandwidth floor set but no
inbound QoS).
This patch makes both the call to networkUnplugBandwidth() and the
call to networkRunHook() conditional on there having been a previous
call to "plug" for the same interface.
networkAllocateActualDevice() is called for *all* interfaces, not just
those with type='network'. For interfaces that are not type='network',
it jumps down to its validate: label immediately, without allocating
anything. After
validation is done, two counters are potentially updated (one for the
network, and one for any particular physical device that is chosen),
and then networkRunHook() is called.
This patch refactors that code slightly so that networkRunHook()
doesn't get called if netdef is NULL (i.e. type != network), and it
places the conditional increment of dev->connections inside the "if
(netdef)" as well. dev can never be non-NULL if netdef is NULL
(because "dev" is the pointer to a device in a network's pool of
devices), so this has no functional effect; it just makes the code
clearer.
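A sketch of the resulting grouping; the names are illustrative and the
real function handles many more cases:

struct network { int connections; };               /* illustrative types */
struct pooldev { int connections; };

static void run_hook(struct network *netdef, const char *op) { (void)netdef; (void)op; } /* stub */

static void
account_and_run_hook(struct network *netdef, struct pooldev *dev)
{
    if (netdef) {
        netdef->connections++;
        if (dev)                /* dev is always NULL when netdef is NULL */
            dev->connections++;
        run_hook(netdef, "plugged");   /* only meaningful for type='network' */
    }
}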
While running virscsitest, valgrind pointed out the following
memory leak:
==320== 5 bytes in 1 blocks are definitely lost in loss record 4 of 37
==320== at 0x4A069EE: malloc (vg_replace_malloc.c:270)
==320== by 0x3E6CE81171: strdup (strdup.c:43)
==320== by 0x4CB28DF: virStrdup (virstring.c:554)
==320== by 0x4CAC987: virSCSIDeviceSetUsedBy (virscsi.c:289)
==320== by 0x402321: test2 (virscsitest.c:100)
==320== by 0x403231: virtTestRun (testutils.c:199)
==320== by 0x402121: mymain (virscsitest.c:180)
==320== by 0x4039AD: virtTestMain (testutils.c:782)
==320== by 0x3E6CE1ED1C: (below main) (libc-start.c:226)
==320==
Introduced by commit fd243fc.
Signed-off-by: Ján Tomko <jtomko@redhat.com>
Consider a dozen LXC domains, each of them having this type of interface:
<interface type='network'>
<mac address='52:54:00:a7:05:4b'/>
<source network='default'/>
</interface>
When starting these domains in parallel, all workers may meet in
virNetDevVethCreate(), where a race starts: they race over allocating
veth pairs, because allocation requires two steps:
1) find the first nonexistent '/sys/class/net/vnet%d/'
2) run the 'ip link add ...' command
Now consider two threads. Both of them find N as the first unused veth
index, but only one of them succeeds in allocating it. The other one
fails. For such cases we run the allocation in a loop with 10 rounds.
However, this is very flaky synchronization; it is better suited to
libvirt competing with another process than to libvirt threads fighting
each other. Therefore, we should internally use a mutex to serialize
callers and keep doing the allocation in a loop (just in case we are
competing with a different process). By the way, we already have
something similar since commit 1cf97c87.
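A condensed sketch of that serialization, using pthreads directly for
illustration (libvirt has its own mutex wrappers, and the helper names
below are made up):

#include <pthread.h>

static int find_free_veth_index(void) { return 0; }                  /* stub for the sketch */
static int create_veth_pair(int index) { (void)index; return 0; }    /* stub for the sketch */

static pthread_mutex_t veth_lock = PTHREAD_MUTEX_INITIALIZER;

static int
veth_create_serialized(void)
{
    int ret = -1;
    int i;

    /* Serialize libvirt's own threads so they never race each other ... */
    pthread_mutex_lock(&veth_lock);

    /* ... but keep the retry loop in case another process on the host
     * grabs the index between steps 1) and 2). */
    for (i = 0; i < 10; i++) {
        int idx = find_free_veth_index();
        if (idx < 0)
            break;
        if (create_veth_pair(idx) == 0) {
            ret = 0;
            break;
        }
    }

    pthread_mutex_unlock(&veth_lock);
    return ret;
}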
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Running ./autobuild.sh detected a mingw failure:
CCLD libvirt.la
Cannot export virCgroupGetPercpuStats: symbol not defined
Cannot export virCgroupSetOwner: symbol not defined
* src/util/vircgroup.c (virCgroupGetPercpuStats)
(virCgroupSetOwner): Implement stubs.
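A hedged sketch of what such a win32 stub looks like; the real stubs live
in vircgroup.c and report the failure through libvirt's error APIs, and
the exact signatures are omitted here:

#include <errno.h>

/* "Not supported on this platform" fallback so the symbol exists and
 * the export list can be satisfied on mingw. */
int
cgroup_set_owner_stub(void)
{
    errno = ENOSYS;
    return -1;
}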
Signed-off-by: Eric Blake <eblake@redhat.com>
The shutdown handler may restart a domain when handling a reboot
event or when <on_*> is set to 'restart'. Restarting consists of
calling libxlVmCleanup followed by libxlVmStart. libxlVmStart will
emit a VIR_DOMAIN_EVENT_STARTED event, but the SHUTDOWN event is
not emitted until exiting the shutdown handler, after the STARTED
event.
This patch changes the logic a bit to queue the event at the start
of the shutdown action, ensuring it is queued before any subsequent
events that may be generated while executing the shutdown action.
Signed-off-by: Jim Fehlig <jfehlig@suse.com>
The network hook script gets called whenever an interface is plugged
into or unplugged from a network, but even though the full XML of both
the network and the domain is included, there is no reasonable way to
determine what exact resources the plugged interface is using:
1) Prior to a recent patch which modified the status XML of interfaces
to include the information about actual hardware resources used, it
would be possible to scan through the domain XML output sent to the
hook, and from there find the correct interface, but that interface
definition would not include any runtime info (e.g. bandwidth or vlan
taken from a portgroup, or which physdev was used in case of a macvtap
network).
2) After the patch modifying the status XML of interfaces, the network
name would no longer be included in the domain XML, so it would be
completely impossible to determine which interface was the one being
plugged.
To solve that problem, this patch includes a single <interface>
element at the beginning of the XML sent to the network hook for
"plugged" and "unplugged" (just inside <hookData>) that is the status
XML of the interface being plugged. This XML will include all info
gathered from the chosen network and portgroup.
NB: due to hardcoded spaces in all of the device *Format() functions,
the <interface> element inside the <hookData> will be indented by 6
spaces rather than 2. I had intended to fix this, but it turns out
that to make virDomainNetDefFormat() indentation relative, I would
have to do the same to virDomainDeviceInfoFormat(), and that function
is called from 19 places - making that a prerequisite of this patch
would cause too many merge difficulties if we needed to backport
network hooks, so I chose to ignore the problem here and fix the
problem for *all* devices in a followup later.
Until now, the "live" XML status of an <interface type='network'>
device would always show the network information, rather than the
exact hardware device that was used. It would also show the name of
any portgroup the interface belonged to, rather than providing the
configuration that was derived from that portgroup. As an example,
given the following network definition:
[A]
<network>
<name>testnet</name>
<forward type='bridge' dev='p4p1_0'>
<interface dev='p4p1_0'/>
<interface dev='p4p1_1'/>
<interface dev='p4p1_2'/>
<interface dev='p4p1_3'/>
</forward>
<portgroup name='admin'>
<bandwidth>
<inbound average='1000' peak='5000' burst='1024'/>
<outbound average='128' peak='256' burst='256'/>
</bandwidth>
</portgroup>
</network>
and the following domain <interface>:
[B]
<interface type='network'>
<source network='testnet' portgroup='admin'/>
</interface>
the output of "virsh dumpxml $domain" while the domain was running
would yield something like this:
[C]
<interface type='network'>
<source network='testnet' portgroup='admin'/>
<target dev='macvtap0'/>
<alias name='net0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
In order to learn the exact bandwidth information of the interface, a
management application would need to retrieve the XML for testnet,
then search for the portgroup named "admin". Even worse, there was no
simple and standard way to learn which host physdev the macvtap0
device is attached to.
Internally, libvirt has always kept this information in the
virDomainDef that is held in memory, as well as storing it in the
(libvirt-internal-only) domain status XML (in
/var/run/libvirt/qemu/$domain.xml). In order to not confuse the runtime
"actual state" with the config of the device, it's internally stored
like this:
[D]
<interface type='network'>
<source network='testnet' portgroup='admin'/>
<actual type='direct'>
<source dev='p4p1_0' mode='bridge'/>
<bandwidth>
<inbound average='1000' peak='5000' burst='1024'/>
<outbound average='128' peak='256' burst='256'/>
</bandwidth>
</actual>
<target dev='macvtap0'/>
<alias name='net0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
This was never exposed outside of libvirt though, because I thought it
would be too awkward for a management application to need to look in
two places for the same information, but I also wasn't sure that it
would be okay to overwrite the config info (in this case "<source
network='testnet' portgroup='admin'/>") with the actual runtime info
(everything inside <actual> above).
Now we have a need for this information to be made available to
management applications (in particular, so that a network "plugged"
hook will have full information about the device that is being plugged
in), so it's time to take the leap and decide that it is acceptable
for the config info to be replaced with actual runtime state (but
*only* when reporting domain live status, *not* when saving state in
/var/run/libvirt/qemu/$domain.xml - that remains the same so that
there is no loss of information). That is what this patch does - once
applied, the output of "virsh dumpxml $domain" when the domain is
running will contain something like this:
[E]
<interface type='direct'>
<source dev='p4p1_0' mode='bridge'/>
<bandwidth>
<inbound average='1000' peak='5000' burst='1024'/>
<outbound average='128' peak='256' burst='256'/>
</bandwidth>
<target dev='macvtap0'/>
<alias name='net0'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
</interface>
In effect, everything that is internally stored within <actual> is
moved up a level to where a management application will expect
it. This means that the management application only needs to look in a
single place to learn the type of interface in use, the name of the
physdev (if relevant), and the <bandwidth>, <vlan>, and <virtualport>
settings in use.
The potential downside is that a management app looking at this output
will not see that the physdev 'p4p1_0' was actually allocated from the
network named 'testnet', or that the bandwidth numbers were taken from
the portgroup 'admin'. However, if they are interested in that info,
they can always get the "inactive" XML for the domain.
An example of where this could cause problems is in virt-manager's
network device display, which shows the status of the device, but
allows you to edit that status info and save it as the new
config. Previously virt-manager would always display the information
in example [C] above, and allow editing that. With this patch, it will
instead display what is in [E] and allow editing it directly, which
could lead to some confusion. I would suggest that virt-manager have
an "edit" button which would change the display from the "live" xml to
the "inactive" xml, so that editing would be done on that; such a
change would both handle the new situation, and also be compatible
with older releases.
This function is currently called from only one place, but a
subsequent patch will call it from a second place.
The new function exactly replicates the original behavior of the part
of virDomainActualNetDefFormat() that it replaces, but takes a
virDomainNetDefPtr instead of virDomainActualNetDefPtr, and uses the
virDomainNetGetActual*() functions whenever possible, rather than
reaching into def->data.network.actual - this is to be sure that we
are reporting exactly what is being used internally, just in case
there are any discrepancies (there shouldn't be).
This moves the call to virNetDevBandwidthFormat() in
virDomainNetDefFormat() to be called right after the call to
virNetDevVPortProfileFormat(), so that a single chunk of that function
can be placed inside an if that conditionally calls
virDomainActualNetDefContentsFormat() instead (next patch). The
re-ordering necessitates modifying a couple of test data files.
Other *Format() functions (e.g. virNetDevBandwidthFormat()) return
with no action when called with a NULL *Def pointer. This makes
virNetDevVlanFormat() consistent with that behavior.
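The change amounts to an early return, along the lines of this simplified
sketch (types reduced to placeholders):

struct vlan_def { int dummy; };     /* placeholder types */
struct buffer { int dummy; };

static int
vlan_format(const struct vlan_def *def, struct buffer *buf)
{
    (void)buf;
    if (!def)
        return 0;   /* nothing to format; not an error */
    /* ... emit the <vlan> element into buf ... */
    return 0;
}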
In practice, if a virDomainNetDef has a virDomainActualNetDef
allocated, the ActualNetDef will *always* contain the bandwidth and
vlan data from the NetDef (unless there was also a portgroup involved
- see networkAllocateActualDevice()).
However, virDomainNetGetActual(Bandwidth|Vlan)() were coded to make it
appear as if it might be possible to have a valid bandwidth/vlan in
the NetDef, but a NULL in the ActualNetDef. Believing this un-truth
could lead to writing unnecessarily defensive code when dealing with
the virDomainGetActual*() functions, so this patch makes it more
obvious:
If there is an ActualNetDef, it will always have a copy of the
various appropriate bits from its parent NetDef, and the
virDomainGetActual* function will *always* return the data from the
ActualNetDef, not from the NetDef.
The reason for this effective-NOP patch is that a subsequent patch to
change virDomainNetDefFormat will rely on the above rule.
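In other words, the accessor becomes a plain either/or, roughly like this
sketch (types and field names illustrative):

struct bandwidth { int dummy; };                    /* illustrative types */
struct actual_net_def { struct bandwidth *bandwidth; };
struct net_def {
    struct bandwidth *bandwidth;
    struct actual_net_def *actual;
};

/* If an ActualNetDef exists it already carries a copy of the relevant
 * bits from its parent NetDef, so always answer from it; otherwise fall
 * back to the NetDef itself. */
static struct bandwidth *
net_get_actual_bandwidth(struct net_def *def)
{
    if (def->actual)
        return def->actual->bandwidth;
    return def->bandwidth;
}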
These timeout values make librados/librbd return -ETIMEDOUT when an
operation blocks due to a failing/unreachable Ceph cluster.
By having the operations time out, libvirt will not block.
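This kind of timeout is set through librados' configuration interface; a
sketch under the assumption that the standard Ceph client option names are
used (the exact options and values chosen by the patch may differ):

#include <rados/librados.h>

/* Set client-side timeouts on a rados handle so that calls against a
 * dead cluster fail with -ETIMEDOUT instead of blocking forever.  The
 * option names are standard Ceph options but are an assumption here. */
static int
set_rbd_timeouts(rados_t cluster)
{
    if (rados_conf_set(cluster, "client_mount_timeout", "30") < 0 ||
        rados_conf_set(cluster, "rados_mon_op_timeout", "30") < 0 ||
        rados_conf_set(cluster, "rados_osd_op_timeout", "30") < 0)
        return -1;
    return 0;
}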
The libxl driver was ignoring the <on_*> domain event configuration,
causing e.g. a domain to be rebooted even when on_reboot is set to
destroy.
This patch honors the <on_*> configuration in the shutdown event
handler.
Signed-off-by: Jim Fehlig <jfehlig@suse.com>
This function is needed for user namespaces, where we need to chown()
the cgroup to the initial uid/gid so that systemd is allowed to
use the cgroup.
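A simplified sketch of handing a cgroup directory over to the initial
uid/gid (the path handling and signature are illustrative; the real
implementation iterates over the mounted controllers):

#include <dirent.h>
#include <limits.h>
#include <stdio.h>
#include <string.h>
#include <sys/types.h>
#include <unistd.h>

/* chown the cgroup directory and the control files within it so that
 * systemd running inside the container may manage the cgroup.
 * Controller iteration and detailed error reporting omitted. */
static int
cgroup_set_owner(const char *cgroup_path, uid_t uid, gid_t gid)
{
    DIR *dir;
    struct dirent *ent;
    char path[PATH_MAX];
    int ret = -1;

    if (chown(cgroup_path, uid, gid) < 0)
        return -1;

    if (!(dir = opendir(cgroup_path)))
        return -1;

    while ((ent = readdir(dir))) {
        if (!strcmp(ent->d_name, ".") || !strcmp(ent->d_name, ".."))
            continue;
        snprintf(path, sizeof(path), "%s/%s", cgroup_path, ent->d_name);
        if (chown(path, uid, gid) < 0)
            goto cleanup;
    }

    ret = 0;
 cleanup:
    closedir(dir);
    return ret;
}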
Signed-off-by: Richard Weinberger <richard@nod.at>
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Add a virStringSearch method to virstring.{c,h} which performs
a regex match against a string and returns the matching substrings.
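A sketch of the idea using POSIX regex directly (only to illustrate
extracting matching substrings; the actual virStringSearch signature is
the one added to virstring.h):

#include <regex.h>
#include <stdio.h>

/* Print the first substring of @str matched by the first capture group
 * of @pattern.  virStringSearch generalizes this to return an allocated
 * list of matches. */
static int
print_first_match(const char *str, const char *pattern)
{
    regex_t re;
    regmatch_t pmatch[2];
    int ret = -1;

    if (regcomp(&re, pattern, REG_EXTENDED) != 0)
        return -1;

    if (regexec(&re, str, 2, pmatch, 0) == 0 && pmatch[1].rm_so >= 0) {
        printf("%.*s\n",
               (int)(pmatch[1].rm_eo - pmatch[1].rm_so),
               str + pmatch[1].rm_so);
        ret = 0;
    }

    regfree(&re);
    return ret;
}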
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
Systemd has not forgotten about the case where a client service needs
to wait for a daemon service to initialize and start accepting new
clients. Setting a dependency in the client is not enough, as systemd
doesn't know when the daemon has initialized itself and started
accepting new clients. However, it offers a mechanism to solve this:
the daemon calls a special systemd function by which it announces "I'm
ready to accept new clients". This is exactly what we need with
libvirt-guests (client) and libvirtd (daemon). So now, with this
change, libvirt-guests.service is not invoked any sooner than
libvirtd.service calls the systemd notify function.
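The readiness announcement itself is the standard sd_notify(3) mechanism;
a minimal sketch of the daemon side (libvirtd wraps this in its own helper
rather than calling libsystemd directly, so treat the direct call as
illustrative; it also requires Type=notify in the unit file):

#include <systemd/sd-daemon.h>

/* Once the daemon has finished initializing and can accept new clients,
 * tell systemd, so that units ordered after it (such as
 * libvirt-guests.service) are only started from this point on. */
static void
announce_ready(void)
{
    sd_notify(0, "READY=1");
}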
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
https://bugzilla.redhat.com/show_bug.cgi?id=1031696
When creating a new domain, we let systemd know about it by calling
the CreateMachine() function via D-Bus. Systemd then creates a scope
and places the domain into it. However, later when the host is
shutting down, systemd computes the shutdown order to see what
processes can be shut down in parallel. And since we were not setting
any dependencies at all, the slices (and thus domains) were most
likely killed before libvirt-guests.service. So user domains that had
to be saved, shut off, or whatever, were in fact killed. This problem
can be solved by letting systemd know that the scopes we're creating
must not be killed before libvirt-guests.service.
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Commit 6515889 broke the build on FreeBSD:
In function `qemuDomainGetCPUStats':
/../../src/qemu/qemu_driver.c:16102:
undefined reference to `virCgroupGetDomainTotalCpuStats'
https://bugzilla.redhat.com/show_bug.cgi?id=1038363
If a domain has a different maximum for persistent and live maxmem
or max vcpus, then it is possible to hit cases where libvirt
refuses to adjust the current values or gets halfway through
the adjustment before failing. It is better to determine up front
whether the change is possible for all requested flags.
Based on an idea by Geoff Franks.
* src/qemu/qemu_driver.c (qemuDomainSetMemoryFlags): Compute
correct maximum if both live and config are being set.
(qemuDomainSetVcpusFlags): Likewise.
Signed-off-by: Eric Blake <eblake@redhat.com>
The virDomainGetRootFilesystem method can be generalized to allow
any filesystem path to be obtained.
While doing this, start a new test case for the purpose of testing
various helper methods in the domain_conf.{c,h} files, such as this
one.
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
The return values of the virCgroupXXX APIs must be checked for being
less than 0, not equal to 0.
A VIR_ERR_OPERATION_INVALID error must also be raised
when the VM is not running, to prevent a crash on a NULL
priv->cgroup field.
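The pattern being fixed, sketched with illustrative names:

struct cgroup;                                           /* illustrative types */
struct domain { int running; struct cgroup *cgroup; };

static void report_error(const char *msg) { (void)msg; }                 /* stub */
static int cgroup_get_stats(struct cgroup *cg) { (void)cg; return 0; }   /* stub */

static int
get_cpu_stats(struct domain *vm)
{
    /* Bail out before touching the cgroup when the VM is not running,
     * since the cgroup pointer is NULL in that case
     * (VIR_ERR_OPERATION_INVALID in the real code). */
    if (!vm->running) {
        report_error("domain is not running");
        return -1;
    }

    /* Failure is signalled by a negative return value. */
    if (cgroup_get_stats(vm->cgroup) < 0)
        return -1;

    return 0;
}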
Signed-off-by: Daniel P. Berrange <berrange@redhat.com>
And provide a domain summary stat in that case, for the lxc backend.
The use case is a container inheriting all devices from the host,
e.g. when doing application containerization.
Destroying a suspended domain needs special action.
We cannot simply terminate all processes because they are frozen.
To deal with that, we send them SIGKILL and thaw them.
Upon wakeup each process sees the pending signal and dies immediately.
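A rough sketch of the sequence (the pid list and thaw helper are
illustrative; the real code works through the cgroup freezer):

#include <signal.h>
#include <stddef.h>
#include <sys/types.h>

static int thaw_cgroup(const char *cgroup_path) { (void)cgroup_path; return 0; } /* stub */

/* Frozen processes cannot act on signals, so first queue SIGKILL for
 * each of them, then thaw the cgroup; on wakeup every process sees the
 * pending signal and dies immediately. */
static int
destroy_frozen_domain(const pid_t *pids, size_t npids, const char *cgroup_path)
{
    size_t i;

    for (i = 0; i < npids; i++)
        kill(pids[i], SIGKILL);

    return thaw_cgroup(cgroup_path);
}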
Signed-off-by: Richard Weinberger <richard@nod.at>
IN6ADDR_ANY_INIT does not seem to be working as expected on MinGW:
error: missing braces around initializer [-Werror=missing-braces]
.sin6_addr = IN6ADDR_ANY_INIT,
Use the in6addr_any variable instead.
Reported by Daniel P. Berrange.
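The fix amounts to assigning the in6addr_any variable after
zero-initializing the struct, instead of using the macro in a designated
initializer; roughly:

#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>

/* Instead of
 *     struct sockaddr_in6 addr = { .sin6_addr = IN6ADDR_ANY_INIT, ... };
 * which MinGW's gcc rejects with "missing braces around initializer",
 * zero the struct and assign the in6addr_any variable. */
static void
init_any_addr(struct sockaddr_in6 *addr)
{
    memset(addr, 0, sizeof(*addr));
    addr->sin6_family = AF_INET6;
    addr->sin6_addr = in6addr_any;
}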
Currently, networkRunHook() is called in networkAllocateActualDevice()
and friends. These functions, however, don't necessarily work on
networks. For example, if a domain's interface is defined in this
fashion:
<interface type='bridge'>
<mac address='52:54:00:0b:3b:16'/>
<source bridge='virbr1'/>
<model type='rtl8139'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x09' function='0x0'/>
</interface>
networkAllocateActualDevice() jumps directly to its 'validate' label,
as the interface is not of type 'network'. Hence, @network is left as
NULL and networkRunHook(network, ...) is called. One of the things
that the hook function does is dereference @network. Sigh.
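One way to avoid the dereference is a guard at the top of the helper; a
sketch with illustrative names (the actual fix may place the check
differently):

struct network { const char *name; };      /* illustrative type */

static void
run_network_hook_checked(struct network *network, const char *op)
{
    (void)op;
    /* Not a type='network' interface: there is no network to report on,
     * so do nothing instead of dereferencing a NULL pointer. */
    if (!network)
        return;

    /* ... build the hook XML from network->name and invoke the hook ... */
}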
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Dumping a domain's core can take considerable time. Use the
recently added job functions and unlock the virDomainObj while
dumping core.
Signed-off-by: Jim Fehlig <jfehlig@suse.com>
Saving domain memory and cpu state can take considerable time.
Use the recently added job functions and unlock the virDomainObj
while saving the domain.
Signed-off-by: Jim Fehlig <jfehlig@suse.com>
When explicitly destroying a domain (libxlDomainDestroyFlags), or
handling an out-of-band domain shutdown event, clean up the domain
in the context of a job. Introduce libxlVmCleanupJob to wrap
libxlVmCleanup in a job block.