21 Comments

Double-thanks :D

karma

This passes all the F33 rootful podman system tests (in upstream CI)

Confirmed, all issues fixed with these two updates, only flakes remain.

Testing with upstream podman's integration tests showed the issue is fixed.

BZ#1967536 selinux denies "search" of "dma_heap"
BZ#1973808 SELinux is preventing /usr/bin/file from 'search' accesses on the directory /dev/dma_heap/system.

nothing to do with this package

Okay, you would know best. I'll flip karma positive then, since I didn't notice anything else but the dma_heap problem.

The dma_heap problem should be fixed by updating to the latest selinux-policy package.

So I guess we're still waiting for the selinux-policy update then? The testing I ran was on a fully updated VM (as of a few hours ago).

Using podman CI's hack/get_ci_vm.sh, manually updating the VM with all the latest packages (including the kernel), installing this update, then running the 'int podman fedora-34 root host' tests, I see an improvement: the plethora of "Failed to decode the keys" errors is gone. Testing did produce some other failures, but I believe them to be flakes.

@dwalsh what is this supposed to address?

Using podman CI's hack/get_ci_vm.sh, manually updating the VM with all the latest packages (including the kernel), installing this update, then running the 'int podman fedora-33 root host' tests, I'm still seeing this all over the place:

[BeforeEach] Podman exec
  /var/tmp/go/src/github.com/containers/podman/test/e2e/exec_test.go:21
[It] podman exec --privileged with user
  /var/tmp/go/src/github.com/containers/podman/test/e2e/exec_test.go:311
Running: /var/tmp/go/src/github.com/containers/podman/bin/podman --storage-opt vfs.imagestore=/tmp/podman/imagecachedir --root /tmp/podman_test992079153/crio --runroot /tmp/podman_test992079153/crio-run --runtime crun --conmon /usr/bin/conmon --cni-config-dir /etc/cni/net.d --cgroup-manager systemd --tmpdir /tmp/podman_test992079153 --events-backend file --storage-driver vfs run --privileged --user=bin --rm quay.io/libpod/alpine:latest sh -c grep ^CapBnd /proc/self/status | cut -f 2
Error: open /dev/dma_heap: permission denied

FWIW: the vast majority (maybe all) of the test failures involve podman's --privileged flag.
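That observation could be checked mechanically against a saved test log. A minimal sketch, assuming the ginkgo summary prints failing specs as lines starting with "[Fail]" (the function name, log format, and filename are my assumptions, not anything from the actual CI setup):

```shell
# count_privileged_failures FILE
# Prints how many of the "[Fail]" summary lines in FILE mention --privileged.
count_privileged_failures() {
  local log="$1" failed privileged
  failed=$(grep -c '^\[Fail\]' "$log" || true)
  privileged=$(grep '^\[Fail\]' "$log" | grep -c -- '--privileged' || true)
  echo "$privileged of $failed failing specs mention --privileged"
}
```

Usage would be something like `count_privileged_failures ginkgo.log` after redirecting the test run's output to a file.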

Okay, I tried building a custom nginx container and running it (rootless) while curling from it, and erasing/re-installing packages (container-selinux especially). It behaves, and I no longer see that scriptlet failure, so the failure is almost certainly only hitting users w/o any container storage. This is something that should be fixed but isn't worth holding up the release. I'll file a separate BZ for it.

I also tried but failed to reproduce the issue described in BZ#1962008

BZ#1962008 [podman][systemd] /usr/lib/systemd/system/cni-dhcp.service wrong executable

...so on a freshly installed F33 VM (never run any containers before) the SELinux label update on upgrade fails:

[root@localhost ~]# dnf upgrade ...big list of download URLs...
...cut...
  Running scriptlet: container-selinux-2:2.162.2-2.fc33.noarch                    4/8
  Upgrading        : container-selinux-2:2.162.2-2.fc33.noarch                    4/8
  Running scriptlet: container-selinux-2:2.162.2-2.fc33.noarch                    4/8
Deprecated, use selabel_lookup

  Cleanup          : container-selinux-2:2.160.2-1.fc33.noarch                    5/8
  Running scriptlet: container-selinux-2:2.160.2-1.fc33.noarch                    5/8
Fixing Rootless SELinux labels in homedir
warning: %triggerpostun(container-selinux-2:2.162.2-2.fc33.noarch) scriptlet failed, exit status 255

Error in <unknown> scriptlet in rpm package container-selinux

I'm guessing it's failing because $HOME/.local/share/containers doesn't exist. Maybe a simple fix?
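If that guess is right, the fix could be as small as a guard at the top of the scriptlet. A hedged sketch (the function name and the commented-out restorecon step are my assumptions, not the actual spec-file contents):

```shell
fix_rootless_labels() {
  local dir="$1/.local/share/containers"
  # Users who have never run rootless containers have no storage dir;
  # treat that as success instead of failing with exit status 255.
  [ -d "$dir" ] || return 0
  echo "Fixing Rootless SELinux labels in homedir"
  # restorecon -R "$dir"   # the real relabel step would go here
}
```

The idea is simply that a missing directory is a normal state, not an error, so the scriptlet should return 0 rather than propagate a failure to rpm.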

grumble...grumble...grumble...@lsm5 I'm still only seeing podman-3.2.0-4. I tried dnf clean all but no love. Downloading the files manually and will try that way...

karma

Manually ran the latest upstream rootful integration tests: 97.1408148% of them passed, and it's too much work to go through and try to understand each failure. Many of the failures involve 'podman play kube' and '--secrets', both of which are under active development. I'm assuming the others are similarly due to differences between upstream and the released package. The exact same set of failures also reproduced under F34.

karma

Manually ran the latest upstream rootful integration tests: 97.1408148% of them passed, and it's too much work to go through and try to understand each failure. Many of the failures involve 'podman play kube' and '--secrets', both of which are under active development. I'm assuming the others are similarly due to differences between upstream and the released package. The exact same set of failures also reproduced under F33.

I was able to reproduce the same dnsname plugin failures on master w/o these package updates, so these updates can't be the cause.

Hit some podman integration test failures with these, mostly in networking-related tests. Re-running against them and investigating.

Ran the podman integration tests after installing these. Working fine.

cevich commented & provided feedback on crun-0.18-5.fc34 3 years ago
karma

I ran the podman integration tests on F34 with this updated crun package installed. I never managed to reproduce the minor bug before or after. However, that may just be an artifact of default mount options and/or my reproducer (I didn't try very hard).

cevich commented & provided feedback on crun-0.18-5.fc33 3 years ago
karma

Ran podman's integration tests without and with this package. With the package, the "podman run with volumes and suid/dev/exec options" test passes.