docker-compose logs -f is a useful command to view container logs and follow them so that new entries appear immediately. With the latest stable v2.1.0, it seems it is no longer possible to quit this foreground process with Ctrl-C:
$ docker-compose logs -f
...
webserver-1 | [Sat Nov 13 12:10:33.814463 2021] [core:notice] [pid 1] AH00094: Command line: 'apache2 -D FOREGROUND'
^C^C^C
It just prints ^C without terminating the process. Even pressing it multiple times, which usually kills a process immediately, does nothing. The only thing that works is Ctrl-\, which prints a huge stack trace:
^C^C^C^\SIGQUIT: quit
PC=0x7b950 m=0 sigcode=128
goroutine 0 [idle]:
runtime.futex(0x1491b10, 0x80, 0x0, 0x0, 0x0, 0x0, 0x56a08, 0x18473b0, 0x1a9d8, 0x4c394, ...)
runtime/sys_linux_arm.s:443 +0x1c
runtime.futexsleep(0x1491b10, 0x0, 0xffffffff, 0xffffffff)
runtime/os_linux.go:44 +0x184
runtime.notesleep(0x1491b10)
runtime/lock_futex.go:159 +0xac
runtime.mPark()
runtime/proc.go:1340 +0x20
runtime.stopm()
runtime/proc.go:2301 +0x78
runtime.findrunnable(0x1846000, 0x0)
runtime/proc.go:2960 +0x84c
runtime.schedule()
runtime/proc.go:3169 +0x2bc
runtime.park_m(0x18821c0)
runtime/proc.go:3318 +0x80
runtime.mcall(0x78b1c)
runtime/asm_arm.s:285 +0x5c
goroutine 1 [semacquire, 5 minutes]:
sync.runtime_Semacquire(0x187aea4)
runtime/sema.go:56 +0x34
sync.(*WaitGroup).Wait(0x187aea4)
sync/waitgroup.go:130 +0x84
golang.org/x/sync/errgroup.(*Group).Wait(0x187aea0, 0x1b27380, 0x1bbe440)
golang.org/x/[email protected]/errgroup/errgroup.go:40 +0x24
github.com/docker/compose/v2/pkg/compose.(*composeService).Logs(0x1ae6370, 0xe3c114, 0x180af60, 0x18a0a20, 0x11, 0xe37fc8, 0x187ac60, 0x189d798, 0x0, 0x1, ...)
github.com/docker/compose/v2/pkg/compose/logs.go:56 +0x214
github.com/docker/compose/v2/pkg/api.(*ServiceProxy).Logs(0x19081c0, 0xe3c114, 0x180af60, 0x18a0a20, 0x11, 0xe37fc8, 0x187ac60, 0x189d798, 0x0, 0x1, ...)
github.com/docker/compose/v2/pkg/api/proxy.go:200 +0x6c
github.com/docker/compose/v2/cmd/compose.runLogs(0xe3c114, 0x180af60, 0xe45a9c, 0x19081c0, 0x19bb540, 0x0, 0x1, 0xca45ce, 0x3, 0x0, ...)
github.com/docker/compose/v2/cmd/compose/logs.go:71 +0x180
github.com/docker/compose/v2/cmd/compose.logsCommand.func1(0xe3c114, 0x180af60, 0x189d798, 0x0, 0x1, 0x18000e0, 0xa92b3c)
github.com/docker/compose/v2/cmd/compose/logs.go:50 +0x6c
github.com/docker/compose/v2/cmd/compose.Adapt.func1(0xe3c114, 0x180af60, 0x1927ce0, 0x189d798, 0x0, 0x1, 0x12, 0x588734)
github.com/docker/compose/v2/cmd/compose/compose.go:85 +0x44
github.com/docker/compose/v2/cmd/compose.AdaptCmd.func1(0x1927ce0, 0x189d798, 0x0, 0x1, 0x0, 0x0)
github.com/docker/compose/v2/cmd/compose/compose.go:64 +0x104
github.com/spf13/cobra.(*Command).execute(0x1927ce0, 0x189f720, 0x1, 0x1, 0x1927ce0, 0x189f720)
github.com/spf13/[email protected]/command.go:856 +0x354
github.com/spf13/cobra.(*Command).ExecuteC(0x1c79b80, 0x1c79b80, 0x189f710, 0x3)
github.com/spf13/[email protected]/command.go:974 +0x280
github.com/spf13/cobra.(*Command).Execute(...)
github.com/spf13/[email protected]/command.go:902
github.com/docker/cli/cli-plugins/plugin.RunPlugin(0x19080e0, 0x19271e0, 0xca53b9, 0x5, 0xcaca17, 0xb, 0xe1bba4, 0x6, 0x0, 0x0, ...)
github.com/docker/[email protected]+incompatible/cli-plugins/plugin/plugin.go:51 +0xe8
github.com/docker/cli/cli-plugins/plugin.Run(0xd1f2d8, 0xca53b9, 0x5, 0xcaca17, 0xb, 0xe1bba4, 0x6, 0x0, 0x0, 0x0, ...)
github.com/docker/[email protected]+incompatible/cli-plugins/plugin/plugin.go:64 +0xdc
main.pluginMain()
github.com/docker/compose/v2/cmd/main.go:41 +0x6c
main.main()
github.com/docker/compose/v2/cmd/main.go:74 +0x138
goroutine 33 [chan send, 5 minutes]:
github.com/docker/compose/v2/cmd/formatter.init.0.func1(0x1b199e0)
github.com/docker/compose/v2/cmd/formatter/colors.go:120 +0x224
created by github.com/docker/compose/v2/cmd/formatter.init.0
github.com/docker/compose/v2/cmd/formatter/colors.go:104 +0x1fc
goroutine 8 [chan receive]:
k8s.io/klog/v2.(*loggingT).flushDaemon(0x14914f0)
k8s.io/klog/[email protected]/klog.go:1164 +0x70
created by k8s.io/klog/v2.init.0
k8s.io/klog/[email protected]/klog.go:418 +0x100
goroutine 36 [syscall, 5 minutes]:
os/signal.signal_recv(0xe35dd8)
runtime/sigqueue.go:168 +0x158
os/signal.loop()
os/signal/signal_unix.go:23 +0x14
created by os/signal.Notify.func1.1
os/signal/signal.go:151 +0x34
goroutine 15 [chan receive, 5 minutes]:
github.com/docker/compose/v2/pkg/compose.(*printer).Run(0x1bc60a0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0, 0x0)
github.com/docker/compose/v2/pkg/compose/printer.go:66 +0x94
github.com/docker/compose/v2/pkg/compose.(*composeService).Logs.func2(0x0, 0x0)
github.com/docker/compose/v2/pkg/compose/logs.go:45 +0x44
golang.org/x/sync/errgroup.(*Group).Go.func1(0x187aea0, 0x1bc60b0)
golang.org/x/[email protected]/errgroup/errgroup.go:57 +0x50
created by golang.org/x/sync/errgroup.(*Group).Go
golang.org/x/[email protected]/errgroup/errgroup.go:54 +0x50
trap 0x0
error 0x0
oldmask 0x0
r0 0x1491b10
r1 0x80
r2 0x0
r3 0x0
r4 0x0
r5 0x0
r6 0x0
r7 0xf0
r8 0x7
r9 0x1
r10 0x1491588
fp 0x7
ip 0x1491758
sp 0xbed1843c
lr 0x42fac
pc 0x7b950
cpsr 0xa0000010
fault 0x0
On an older test machine with v1.21.0, Ctrl-C works fine:
$ docker-compose logs -f
nextcloud_redis | 1:M 26 Sep 2021 19:58:26.051 * 100 changes in 300 seconds. Saving...
^CERROR: Aborting.
Any idea why Ctrl-C does nothing in the v2.1.0 docker-compose release? It's the first CLI application I've seen where this doesn't work. I didn't even know about Ctrl-\ before, and it doesn't seem like a good alternative, since it clutters the terminal with a huge Go runtime dump, as shown above.
I have tested this from another Linux machine and even from Windows with MobaXterm, so it doesn't seem to be an issue with my client. In case it matters: my main client is Manjaro 21.1.6, and docker-compose runs on a Raspberry Pi 4 with Raspbian 10 Buster.