Issue Description
When using podman generate kube, health checks configured on containers are not exported to the generated YAML.
Steps to reproduce the issue
tumbleweed:~ # podman pod create healthcheck
tumbleweed:~ # podman create --pod healthcheck --health-cmd='true' registry.opensuse.org/opensuse/tumbleweed sleep infinity
tumbleweed:~ # podman generate kube healthcheck
# Save the output of this file and use kubectl create -f to import
# it into Kubernetes.
#
# Created with podman-4.9.3
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: "2024-03-20T10:13:55Z"
  labels:
    app: healthcheck
  name: healthcheck
spec:
  containers:
  - command:
    - sleep
    - infinity
    image: registry.opensuse.org/opensuse/tumbleweed:latest
    name: hopefulhofstadter
Describe the results you received
If containers in a podman pod have healthchecks configured, those are not exported to the kube play YAML configuration file.
Describe the results you expected
Healthchecks are exported into the kube play YAML and can be imported again.
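For context, Kubernetes expresses an exec-style health check as a livenessProbe on the container spec, so the expected output for the container above would look roughly like the sketch below. The probe structure follows the Kubernetes Pod spec; the timing values (periodSeconds, failureThreshold) are placeholder assumptions, not necessarily what podman would emit:

```yaml
spec:
  containers:
  - command:
    - sleep
    - infinity
    image: registry.opensuse.org/opensuse/tumbleweed:latest
    name: hopefulhofstadter
    livenessProbe:
      exec:
        command:
        - "true"          # from --health-cmd='true'
      periodSeconds: 30    # assumed default, for illustration only
      failureThreshold: 3  # assumed default, for illustration only
```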
podman info output
host:
  arch: amd64
  buildahVersion: 1.33.5
  cgroupControllers:
  - memory
  - pids
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: conmon-2.1.10-1.2.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.10, commit: unknown'
  cpuUtilization:
    idlePercent: 72.44
    systemPercent: 5.54
    userPercent: 22.02
  cpus: 8
  databaseBackend: sqlite
  distribution:
    distribution: opensuse-tumbleweed
    version: "20240311"
  eventLogger: journald
  freeLocks: 2048
  hostname: racetrack-7290
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
  kernel: 6.7.7-1-default
  linkmode: dynamic
  logDriver: journald
  memFree: 1190305792
  memTotal: 33530646528
  networkBackend: netavark
  networkBackendInfo:
    backend: netavark
    dns:
      package: aardvark-dns-1.10.0-1.2.x86_64
      path: /usr/libexec/podman/aardvark-dns
      version: aardvark-dns 1.10.0
    package: netavark-1.10.3-1.1.x86_64
    path: /usr/libexec/podman/netavark
    version: netavark 1.10.3
  ociRuntime:
    name: crun
    package: crun-1.14.4-1.1.x86_64
    path: /usr/bin/crun
    version: |-
      crun version 1.14.4
      commit: a220ca661ce078f2c37b38c92e66cf66c012d9c1
      rundir: /run/user/1000/crun
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +CRIU +LIBKRUN +WASM:wasmedge +YAJL
  os: linux
  pasta:
    executable: /home/phoenix/bin/pasta
    package: Unknown
    version: ""
  remoteSocket:
    exists: false
    path: /run/user/1000/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /etc/containers/seccomp.json
    selinuxEnabled: true
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.2.3-1.1.x86_64
    version: |-
      slirp4netns version 1.2.3
      commit: unknown
      libslirp: 4.7.0
      SLIRP_CONFIG_VERSION_MAX: 5
      libseccomp: 2.5.5
  swapFree: 0
  swapTotal: 0
  uptime: 3h 15m 43.00s (Approximately 0.12 days)
  variant: ""
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries:
  search:
  - registry.opensuse.org
  - registry.suse.com
  - docker.io
store:
  configFile: /home/phoenix/.config/containers/storage.conf
  containerStore:
    number: 0
    paused: 0
    running: 0
    stopped: 0
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /home/phoenix/.local/share/containers/storage
  graphRootAllocated: 999662026752
  graphRootUsed: 852297252864
  graphStatus:
    Backing Filesystem: btrfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Supports shifting: "false"
    Supports volatile: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 3
  runRoot: /run/user/1000/containers
  transientStore: false
  volumePath: /home/phoenix/.local/share/containers/storage/volumes
version:
  APIVersion: 4.9.3
  Built: 1708610040
  BuiltTime: Thu Feb 22 14:54:00 2024
  GitCommit: ""
  GoVersion: go1.21.7
  Os: linux
  OsArch: linux/amd64
  Version: 4.9.3
Podman in a container
No
Privileged Or Rootless
Privileged
Upstream Latest Release
No
Additional environment details
No response
Additional information
No response