At least the simple trace backend works by spawning a helper thread
and registering an atexit() handler that coordinates completion with
that thread. Since atexit() registrations survive fork() while helper
threads do not, a qemu-nbd configured to use the simple trace backend
deadlocks after daemonizing, waiting for a thread that no longer
exists.
Better is to follow the example of vl.c: don't call any setup
functions that might spawn helper threads until we are in the final
process that will be doing the work worth tracing.
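As a rough illustration of that ordering (a minimal sketch, not
qemu-nbd's actual code; start_trace_helpers() is a hypothetical
stand-in for the real trace setup), the fork has to happen before any
helper threads are spawned:

  #include <stdio.h>
  #include <stdlib.h>
  #include <unistd.h>

  static void start_trace_helpers(void)
  {
      /* would spawn the writer thread and register the atexit() hook */
  }

  int main(void)
  {
      pid_t pid = fork();              /* daemonize first ... */
      if (pid < 0) {
          perror("fork");
          return EXIT_FAILURE;
      }
      if (pid > 0) {
          _exit(EXIT_SUCCESS);         /* parent exits */
      }
      start_trace_helpers();           /* ... then spawn helper threads in
                                          the process doing the traced work */
      /* ... serve requests worth tracing ... */
      return EXIT_SUCCESS;
  }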
Tested by configuring with --enable-trace-backends=simple, then running
qemu-nbd --fork --trace=nbd_\*,file=qemu-nbd.trace -f raw -r README.rst
followed by `nbdinfo nbd://localhost`, and observing that the trace
file is now created without hanging.
Reported-by: Thomas Huth <thuth@redhat.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
Message-ID: <20250227220625.870246-2-eblake@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Instead of the comment
"Keep this type consistent with the nbd-server-start arguments", we
can simply merge the two definitions.
Note that each field of the new base type already has a "since" tag
that is equal in both original copies, so the "since" information is
preserved.
Signed-off-by: Vladimir Sementsov-Ogievskiy <vsementsov@yandex-team.ru>
Message-ID: <20250219191914.440451-1-vsementsov@yandex-team.ru>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
Test 162 recently started failing for me for no obvious reason (I
did not spot any suspicious commits in this area), but the
162.out.bad log file contained a suspicious message at the end:
qemu-nbd: Cannot lock pid file: Resource temporarily unavailable
And indeed, the test starts the NBD server twice without stopping the
first instance before running the second one, so the second one can
fail to lock the PID file. Thus let's make sure to stop the first
server before the test continues with the second one. With this
change, the test works fine for me again.
Signed-off-by: Thomas Huth <thuth@redhat.com>
Message-ID: <20250225070650.387638-1-thuth@redhat.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
This test depends on TarFile.addfile() to add a tar member header without
writing the member data, which we write ourselves using qemu-nbd. Python
3.13 changed the function in a backward-incompatible way[1] to require a
file object for a tarinfo with non-zero size, breaking the test:
-[{"name": "vm.ovf", "offset": 512, "size": 6}, {"name": "disk", "offset": 1536, "size": 393216}]
+Traceback (most recent call last):
+ File "/home/stefanha/qemu/tests/qemu-iotests/302", line 118, in <module>
+ tar.addfile(disk)
+ ~~~~~~~~~~~^^^^^^
+ File "/usr/lib64/python3.13/tarfile.py", line 2262, in addfile
+ raise ValueError("fileobj not provided for non zero-size regular file")
+ValueError: fileobj not provided for non zero-size regular file
The new behavior makes sense for most users, but it breaks our unusual
usage. Fix the test to add the member header directly using public but
undocumented attributes. This is more fragile, but the test works again.
This also fixes a bug in the previous code: when calling addfile()
without a file object, tar.offset points to the start of the member data
instead of the end.
[1] https://github.com/python/cpython/pull/117988
Signed-off-by: Nir Soffer <nirsof@gmail.com>
Message-ID: <20250228195708.48035-1-nirsof@gmail.com>
Reviewed-by: Eric Blake <eblake@redhat.com>
Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
Signed-off-by: Eric Blake <eblake@redhat.com>
When running checkpatch.pl on a commit adding a file without
an SPDX tag, we get:
Undefined subroutine &main::WARNING called at ./scripts/checkpatch.pl line 1694.
The WARNING level is reported by the WARN() method. Fix the typo.
Fixes: fa4d79c64d ("scripts: mandate that new files have SPDX-License-Identifier")
Signed-off-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Reviewed-by: Markus Armbruster <armbru@redhat.com>
Message-ID: <20250303172508.93234-1-philmd@linaro.org>
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
* Convert more avocado tests to the functional framework
* Fix a problem with the check-patch/check-dco CI jobs
* Replace the ppc64 e500 functional test with a better one
* Test retrieval of machine class properties
* tag 'pull-request-2025-02-26' of https://gitlab.com/thuth/qemu:
tests/functional: Replace the ppc64 e500 advent calendar test
gitlab: use --refetch in check-patch/check-dco jobs
tests/functional: Bump some arm test timeouts
tests/functional: Convert the x86_64 replay avocado tests
tests/functional: Convert the aarch64 replay avocado tests
tests/functional: Convert the s390x replay avocado tests
tests/functional: Convert the alpha replay avocado tests
tests/functional: Convert the arm replay avocado tests
tests/functional: Convert the m68k replay avocado tests
tests/functional: Convert the microblaze replay avocado tests
tests/functional: Convert the ppc64 replay avocado tests
tests/functional: Convert the or1k replay avocado tests
tests/functional: Convert the 32-bit ppc replay avocado tests
tests/functional: Convert the sparc replay avocado test
tests/functional: Convert the xtensa replay test to the functional framework
tests/functional: Provide a proper name for the VMs in the replay tests
tests/qtest/qom-test: Test retrieval of machine class properties
tests/functional: Have microblaze tests inherit common parent class
Signed-off-by: Stefan Hajnoczi <stefanha@redhat.com>
While SPDX-License-Identifier is a well-known SPDX tag, there are a
great many more besides it[1]. These are mostly focused on making
machine-readable metadata available to the 'reuse' tool and similar.
They cover concepts like author names, copyright owners, and much
more. It is even possible to define source file line groups and apply
different SPDX tags to regions of code within a file.
At this time we're only interested in adopting SPDX for recording the
file global licensing info, so detect & reject any other SPDX metadata.
If we want to explicitly collect extra data in SPDX format, we can
evaluate each data item on its merits when someone wants to propose it
at a later date.
[1] https://spdx.github.io/spdx-spec/v2.2.2/file-tags/
    https://spdx.github.io/spdx-spec/v2.2.2/file-information/
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
We expect all new code to be contributed with the "GPL-2.0-or-later"
license tag. Divergence is permitted if the new file is derived from
pre-existing code under a different license, whether from elsewhere
in QEMU codebase, or outside.
Issue a warning if the declared license is not "GPL-2.0-or-later",
and an error if the license is not one of a handful of expected
licenses, to prevent unintended proliferation. The warning
asks users to explain their unusual choice of license in the commit
message.
Reviewed-by: Peter Maydell <peter.maydell@linaro.org>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Going forward we want all newly created source files to have an
SPDX-License-Identifier tag present.
Initially mandate this for C, Python, Perl, and Shell source files,
as well as for JSON (QAPI) and Makefiles, while encouraging users
to consider it for other file types.
Reviewed-by: Brian Cain <bcain@quicinc.com>
Reviewed-by: Philippe Mathieu-Daudé <philmd@linaro.org>
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
nvme_rw_complete_cb() is the only remaining user of nvme_aio_err(), so
open code the status code setting instead.
Reviewed-by: Jesper Wendel Devantier <foss@defmacro.it>
Signed-off-by: Klaus Jensen <k.jensen@samsung.com>
nvme_aio_err() does not handle Verify, Compare, Copy and other misc
commands and defaults to setting the error status code to Internal
Device Error. For some of these commands we know better, so set the
status code explicitly.
For the commands using the nvme_misc_cb() callback (Copy, Flush, ...),
if no status code has explicitly been set by the lower handlers,
default to Internal Device Error as before.
Reviewed-by: Jesper Wendel Devantier <foss@defmacro.it>
Signed-off-by: Klaus Jensen <k.jensen@samsung.com>
The Command Abort Requested status code should only be set if the
command was explicitly cancelled by an Abort command. If the
cancellation was instead caused by a Submission Queue deletion, set
the status code to Command Aborted due to SQ Deletion.
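A tiny self-contained sketch of that selection logic (the constant
names and values follow the NVMe spec's generic status codes, but the
helper is only illustrative, not the hw/nvme code):

  #include <stdbool.h>
  #include <stdio.h>

  enum {
      NVME_CMD_ABORT_REQ    = 0x7,   /* Command Abort Requested */
      NVME_CMD_ABORT_SQ_DEL = 0x8,   /* Command Aborted due to SQ Deletion */
  };

  static unsigned cancel_status(bool sq_being_deleted)
  {
      /* only an explicit Abort command yields Command Abort Requested */
      return sq_being_deleted ? NVME_CMD_ABORT_SQ_DEL : NVME_CMD_ABORT_REQ;
  }

  int main(void)
  {
      printf("abort: 0x%x, sq delete: 0x%x\n",
             cancel_status(false), cancel_status(true));
      return 0;
  }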
Reviewed-by: Jesper Wendel Devantier <foss@defmacro.it>
Signed-off-by: Klaus Jensen <k.jensen@samsung.com>
The controller incorrectly allows a zoned namespace to be attached even
if CS.CSS is configured to only support the NVM command set for I/O
queues.
Rework the handling of namespace command sets in general by attaching
supported namespaces when the controller is started, instead of
statically at realize time as is done now.
Reviewed-by: Jesper Wendel Devantier <foss@defmacro.it>
Signed-off-by: Klaus Jensen <k.jensen@samsung.com>
If the agent is set to daemonize but for whatever reason fails to
init the channel, the error message is lost. Worse, the agent
daemonizes needlessly and returns success. For instance:
# qemu-ga -m virtio-serial \
-p /dev/nonexistent_device \
-f /run/qemu-ga.pid \
-t /run \
-d
# echo $?
0
This makes it needlessly hard for init scripts to detect a failure
in qemu-ga startup, though they shouldn't be passing '-d' in the
first place.
Let's open the channel first and only after that become a daemon.
Related bug: https://bugs.gentoo.org/810628
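A minimal sketch of the new ordering (not the actual qemu-ga code;
open_channel() is a hypothetical stand-in for the real channel init):
report the failure while still in the foreground, and only daemonize
once the channel is usable:

  #include <fcntl.h>
  #include <stdio.h>
  #include <stdlib.h>
  #include <unistd.h>

  static int open_channel(const char *path)
  {
      return open(path, O_RDWR);       /* fails for a missing device */
  }

  int main(void)
  {
      const char *path = "/dev/nonexistent_device";
      int fd = open_channel(path);

      if (fd < 0) {
          fprintf(stderr, "error opening channel %s\n", path);
          return EXIT_FAILURE;          /* init scripts now see the failure */
      }
      if (daemon(0, 0) < 0) {           /* become a daemon only afterwards */
          perror("daemon");
          return EXIT_FAILURE;
      }
      /* ... run the agent main loop using fd ... */
      close(fd);
      return EXIT_SUCCESS;
  }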
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Reviewed-by: Konstantin Kostiuk <kkostiuk@redhat.com>
Message-ID: <7a42b0cbda5c7e01cf76bc1b29a1210cd018fa78.1736261360.git.mprivozn@redhat.com>
Signed-off-by: Konstantin Kostiuk <kkostiuk@redhat.com>
The current logic around the return value (the 'ret' variable) in
main() is error prone. The variable is initialized to EXIT_SUCCESS and
then set to EXIT_FAILURE on error paths. This makes it very easy to
forget to set the variable when adding a new error path, as
demonstrated by the handling of initialize_agent() failure, which
simply fails to set it.
There is just one case where success should be indicated: when
dumping the config (the '-D' command line argument).
To resolve this, initialize the variable to the failure value and set
it explicitly to the success value in that one specific case.
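A simplified sketch of the resulting error-handling skeleton
(illustrative only, not qemu-ga's actual main(); the option handling
and initialize() are placeholders, and the normal agent run is
omitted):

  #include <stdio.h>
  #include <stdlib.h>
  #include <string.h>

  static int initialize(void)      /* stand-in for a step that can fail */
  {
      return -1;
  }

  int main(int argc, char **argv)
  {
      int ret = EXIT_FAILURE;                  /* pessimistic default */

      if (argc > 1 && strcmp(argv[1], "-D") == 0) {
          puts("# config dump would go here");
          ret = EXIT_SUCCESS;                  /* the one explicit success path */
          goto out;
      }

      if (initialize() < 0) {
          goto out;      /* new error paths need no extra bookkeeping:
                            failure is already the default */
      }

      /* the normal agent run is omitted from this sketch */

  out:
      return ret;
  }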
Signed-off-by: Michal Privoznik <mprivozn@redhat.com>
Reviewed-by: Ján Tomko <jtomko@redhat.com>
Reviewed-by: Konstantin Kostiuk <kkostiuk@redhat.com>
Message-ID: <8a28265f50177a8dc4c10fcf4146e85a7fd748ee.1736261360.git.mprivozn@redhat.com>
Signed-off-by: Konstantin Kostiuk <kkostiuk@redhat.com>
Zhaoxin CPUs (including vendors "Shanghai" and "Centaurhauls") handle the
CMPLegacy bit similarly to Intel CPUs. Therefore, this commit masks the
CMPLegacy bit in CPUID[0x80000001].ECX for Zhaoxin CPUs, just as it is done
for Intel CPUs.
AMD uses the CMPLegacy bit (CPUID[0x80000001].ECX.bit1) along with other CPUID
information to enumerate platform topology (e.g., the number of logical
processors per package). However, for Intel and other CPUs that follow Intel's
behavior, CPUID[0x80000001].ECX.bit1 is reserved.
- Impact on Intel and similar CPUs:
This change has no effect on Intel and similar CPUs, as the goal is to
accurately emulate CPU CPUID information.
- Impact on Linux Guests running on Intel (and similar) vCPUs:
During boot, Linux checks if the CPU supports Hyper-Threading. For Linux
kernels before v6.9, if it detects X86_FEATURE_CMP_LEGACY, it assumes
Hyper-Threading is not supported. For Intel and similar vCPUs, if the
CMPLegacy bit is not masked in CPUID[0x80000001].ECX, Linux will incorrectly
assume that Hyper-Threading is not supported, even if the vCPU does support it.
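As a rough sketch of that masking (illustrative only; the bit constant
is spelled out here, and this is not the code in target/i386/cpu.c):

  #include <stdbool.h>
  #include <stdint.h>
  #include <stdio.h>

  #define CPUID_EXT3_CMP_LEG  (1u << 1)   /* CPUID[0x80000001].ECX bit 1 */

  /* Intel, Zhaoxin and vendors following Intel's behavior treat the bit
     as reserved, so hide it from the guest; AMD keeps it for topology. */
  static uint32_t leaf_80000001_ecx(uint32_t ecx, bool intel_like_vendor)
  {
      if (intel_like_vendor) {
          ecx &= ~CPUID_EXT3_CMP_LEG;
      }
      return ecx;
  }

  int main(void)
  {
      printf("0x%x\n", leaf_80000001_ecx(0x3, true));   /* prints 0x1 */
      return 0;
  }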
Signed-off-by: EwanHai <ewanhai-oc@zhaoxin.com>
Reviewed-by: Zhao Liu <zhao1.liu@intel.com>
Link: https://lore.kernel.org/r/20250113074413.297793-5-ewanhai-oc@zhaoxin.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Zhaoxin currently uses two vendors: "Shanghai" and "Centaurhauls".
It is important to note that the latter now belongs to Zhaoxin. Therefore,
this patch replaces CPUID_VENDOR_VIA with CPUID_VENDOR_ZHAOXIN1.
The previous CPUID_VENDOR_VIA macro was only defined but never used in
QEMU, making this change straightforward.
Additionally, the IS_ZHAOXIN_CPU macro has been added to simplify the
checks for Zhaoxin CPUs.
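A rough idea of such a vendor check (only an illustration: the real
IS_ZHAOXIN_CPU macro compares the packed CPUID vendor registers rather
than C strings, and the constant names below are made up):

  #include <stdbool.h>
  #include <stdio.h>
  #include <string.h>

  /* 12-byte CPUID vendor strings used by Zhaoxin parts */
  #define VENDOR_SHANGHAI "  Shanghai  "
  #define VENDOR_CENTAUR  "CentaurHauls"

  static bool is_zhaoxin_vendor(const char *vendor)
  {
      return strncmp(vendor, VENDOR_SHANGHAI, 12) == 0 ||
             strncmp(vendor, VENDOR_CENTAUR, 12) == 0;
  }

  int main(void)
  {
      printf("%d %d\n", is_zhaoxin_vendor("CentaurHauls"),
                        is_zhaoxin_vendor("GenuineIntel"));   /* 1 0 */
      return 0;
  }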
Signed-off-by: EwanHai <ewanhai-oc@zhaoxin.com>
Reviewed-by: Zhao Liu <zhao1.liu@intel.com>
Link: https://lore.kernel.org/r/20250113074413.297793-2-ewanhai-oc@zhaoxin.com
Signed-off-by: Paolo Bonzini <pbonzini@redhat.com>
Replace the advent calendar test with a buildroot image built with
qemu_ppc64_e5500_defconfig. Unlike the advent calendar image, this
newer buildroot image supports networking, too. Thus boot a ppce500
machine from kernel and disk, test network and poweroff.
Add '-no-shutdown' to the command line to avoid exiting from QEMU
as it seems to bother the functional framework.
Signed-off-by: Cédric Le Goater <clg@redhat.com>
Message-ID: <20250226065013.196052-1-clg@redhat.com>
Reviewed-by: Thomas Huth <thuth@redhat.com>
[thuth: Add some wording about network support to the commit message]
Signed-off-by: Thomas Huth <thuth@redhat.com>
When gitlab initializes the repo checkout for a CI job, it does a
shallow clone with only partial history. Periodically the omitted
objects cause trouble with the check-patch/check-dco jobs, which
manifests as strange errors about being unable to fetch certain
objects that are known to exist.
Passing the --refetch flag to 'git fetch' causes it to not assume the
local checkout has all common objects and thus re-fetch everything that
is needed. This appears to solve the check-patch/check-dco job failures.
Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
Acked-by: Michael S. Tsirkin <mst@redhat.com>
Message-ID: <20250225110525.2209854-1-berrange@redhat.com>
Signed-off-by: Thomas Huth <thuth@redhat.com>
On my local machine, for a debug build, sbsaref_alpine takes
nearly 900s:
$ (cd build/x86 && ./pyvenv/bin/meson test --setup thorough --suite func-thorough func-aarch64-aarch64_sbsaref_alpine
)
1/1 qemu:func-thorough+func-aarch64-thorough+thorough / func-aarch64-aarch64_sbsaref_alpine
OK 896.90s
arm_aspeed_rainier can also run close to its current timeout:
6/44 qemu:func-thorough+func-arm-thorough+thorough / func-arm-arm_aspeed_rainier
OK 215.75s
and arm_aspeed_ast2500 and arm_aspeed_ast2600 can go over:
13/44 qemu:func-thorough+func-arm-thorough+thorough / func-arm-arm_aspeed_ast2600
OK 792.94s
27/44 qemu:func-thorough+func-arm-thorough+thorough / func-arm-arm_aspeed_ast2500
TIMEOUT 480.01s
The sx1 test fails not on the overall meson timeout but on the
60-second timeout in some of the subtests.
Bump all these timeouts up a bit.
Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
Reviewed-by: Thomas Huth <thuth@redhat.com>
Message-ID: <20250221140640.786341-1-peter.maydell@linaro.org>
Signed-off-by: Thomas Huth <thuth@redhat.com>
Put the tests into a separate file now (in the functional framework,
each file is run with one specific qemu-system-* binary).
Signed-off-by: Thomas Huth <thuth@redhat.com>
Message-ID: <20250218152744.228335-13-thuth@redhat.com>
Put the tests into a separate file now (in the functional framework,
each file is run with one specific qemu-system-* binary).
Signed-off-by: Thomas Huth <thuth@redhat.com>
Message-ID: <20250218152744.228335-12-thuth@redhat.com>
Put the tests into a separate file now (since in the functional
framework, each file is run with one specific qemu-system-* binary).
Signed-off-by: Thomas Huth <thuth@redhat.com>
Message-ID: <20250218152744.228335-11-thuth@redhat.com>
Put the tests into a separate file now (since in the functional
framework, each file is run with one specific qemu-system-* binary).
Signed-off-by: Thomas Huth <thuth@redhat.com>
Message-ID: <20250218152744.228335-10-thuth@redhat.com>
Put the tests into a separate file now (since in the functional
framework, each file is run with one specific qemu-system-* binary).
Signed-off-by: Thomas Huth <thuth@redhat.com>
Message-ID: <20250218152744.228335-9-thuth@redhat.com>
Put the tests into a separate file now (since in the functional
framework, each file is run with one specific qemu-system-* binary).
Signed-off-by: Thomas Huth <thuth@redhat.com>
Message-ID: <20250218152744.228335-8-thuth@redhat.com>
Put the tests into a separate file now (since in the functional
framework, each file is run with one specific qemu-system-* binary).
Signed-off-by: Thomas Huth <thuth@redhat.com>
Message-ID: <20250218152744.228335-7-thuth@redhat.com>
Put the tests into a separate file now (since in the functional
framework, each file is run with one specific qemu-system-* binary).
Signed-off-by: Thomas Huth <thuth@redhat.com>
Message-ID: <20250218152744.228335-6-thuth@redhat.com>
Put the tests into a separate file now (since in the functional
framework, each file is run with one specific qemu-system-* binary).
Signed-off-by: Thomas Huth <thuth@redhat.com>
Message-ID: <20250218152744.228335-5-thuth@redhat.com>
While we're at it, change the machine from SS-20 to SS-10 to
increase the test coverage a little bit (SS-20 is already
tested in the test_sparc_sun4m.py file).
Signed-off-by: Thomas Huth <thuth@redhat.com>
Message-ID: <20250218152744.228335-4-thuth@redhat.com>
Put the tests into a separate file now (since in the functional
framework, each file is run with one specific qemu-system-* binary).
Signed-off-by: Thomas Huth <thuth@redhat.com>
Message-ID: <20250218152744.228335-3-thuth@redhat.com>