Update to containerd 2.0, buildkit v0.19#48872
Conversation
Dockerfile
Outdated
| # version to pick up bug fixes or new APIs, however, usually the Go packages | ||
| # are built from a commit from the master branch. | ||
| ARG CONTAINERD_VERSION=v1.7.22 | ||
| ARG CONTAINERD_VERSION=v2.0.0 |
We should probably also have a run of CI with the v1.7 binaries, but let's see what v2.0.0 says first
Is CI typically run with multiple versions?
Do you have #47335 just for testing?
We don't run with multiple versions, as we're already rate-limited by GitHub Actions for too many jobs, but we should probably have at least one CI run with either combination. I opened #47335 to verify that our existing code works fine with containerd 2.0, because the moment we push containerd 2.0 packages, every installation will use them, including older versions of docker engine (unless people explicitly pin to 1.7).
The reverse may also be true; a user may upgrade docker and potentially use a 1.7 version of containerd; we can restrict that in our packaging, but it might be good to know if it works regardless.
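As an aside, explicitly pinning to the 1.7 line (as mentioned above) could look something like the following hypothetical apt preferences fragment; the file path and priority here are illustrative, not an officially documented recommendation:

```text
# /etc/apt/preferences.d/containerd (hypothetical example)
Package: containerd.io
Pin: version 1.7.*
Pin-Priority: 1001
```

With a priority above 1000, apt keeps the pinned version even when a newer package is available in the repository.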
4669b9c to
db67583
Compare
vendor.mod
Outdated
| ) | ||
|
|
||
| require ( | ||
| github.com/containerd/containerd v1.7.23 // indirect |
of course I was curious what's bringing in v1.7.23;
go mod graph | grep ' github.com/containerd/containerd@'
github.com/docker/docker github.com/containerd/[email protected]
github.com/Microsoft/[email protected] github.com/containerd/[email protected]
github.com/containerd/[email protected] github.com/containerd/[email protected]
github.com/moby/[email protected] github.com/containerd/[email protected]And it's probably the circular dependency that's making it pop up;
go mod graph | grep ' github.com/moby/[email protected]'
github.com/docker/docker github.com/moby/[email protected]There was a problem hiding this comment.
I need to see about getting the nydus one done; at least that one is causing it to show up in vendor. I'll look for more; it's not a big deal to have 1.7.23 as indirect for a while though.
Ah, yes, I was looking at the nydus dependency at some point, and checking whether we can remove the dependency altogether from our vendoring; this was a first attempt at making the compression handling more modular, but I didn't go beyond those first steps (moving them to separate packages);
For nydus we are just waiting for a tag but can bump to a digest
Confirmed updating nydus will drop containerd 1.7 from vendor
| schema1Converter, err = schema1.NewConverter(p.is.ContentStore, fetcher) | ||
| if err != nil { | ||
| return nil, err | ||
| } |
Looks like stopProgress() should be called here; not sure why we don't use a defer for that though (I see a defer is defined on line 569, but before that, it's all manual)
builder/builder-next/adapters/containerimage/pull.go:498:2: lostcancel: the stopProgress function is not used on all paths (possible context leak) (govet)
pctx, stopProgress := context.WithCancel(ctx)
^
builder/builder-next/adapters/containerimage/pull.go:529:4: lostcancel: this return statement may be reached without using the stopProgress var defined on line 498 (govet)
return nil, err
^
|
Some other linting issues as well; we should check whether we can already fix those before updating the dependencies (haven't checked) |
1fe4f6a to
ea3c0d3
Compare
560e4b9 to
304f7b8
Compare
|
Very unrelated to this PR probably, and it's probably been like this since forever (at least I hope so), but I just noticed the odd logs from the swarm test failures;

daemon.go:318: [d54ef6cf8e68b] time="2025-01-03T01:25:20.201369461Z" level=debug msg="Assigning addresses for endpoint gateway_ingress-sbox's interface on network docker_gwbridge"
time="2025-01-03T01:25:20.201385621Z" level=debug msg="RequestAddress(LocalDefault/172.21.0.0/16, <nil>, map[])"
time="2025-01-03T01:2
daemon.go:318: [d54ef6cf8e68b] :20.201398816Z" level=debug msg="Request address PoolID:172.21.0.0/16 Bits: 65536, Unselected: 65533, Sequence: (0xc0000000,
daemon.go:318: [d54ef6cf8e68b] )->(0x0, 2046)->(0x1, 1)->end Curr:2 Serial:false PrefAddress:invalid IP "
time="2025-01-03T01:25:20.255114788Z" level=debug msg="Programming e
daemon.go:318: [d54ef6cf8e68b] ternal connectivity on endpoint" eid=1496005f342a2ae5945c1482f54a10dbfe35b14d95a1dc4980585d4ba0427a59 ep=gateway_ingress-sbox net=docker_gwbridge nid=e7e266153773ead66ef66a14bbf1
daemon.go:318: [d54ef6cf8e68b] bug msg="EnableService ingress-sbox START"
time="2025-01-03T01:25:20.264341580Z" level=debug msg="EnableService ingress-sbox DONE"
t
daemon.go:318: [d54ef6cf8e68b] me="2025-01-03T01:25:20.611449826Z" level=debug msg="handling GET request" method=GET module=api request-url

Notice how logs are split over multiple lines, but it looks like there's some off-by-X issue.

Details

=== FAIL: amd64.integration-cli TestDockerSwarmSuite/TestSwarmJoinPromoteLocked (11.69s)
docker_cli_swarm_test.go:1247: [d14dfce952a8e] joining swarm manager [d54ef6cf8e68b]@0.0.0.0:2477, swarm listen addr 0.0.0.0:2478
docker_cli_swarm_test.go:1247: assertion failed: error is not nil: Error response from daemon: rpc error: code = Unavailable desc = connection error: desc = "transport: authentication handshake failed: credentials: cannot check peer: missing selected ALPN property": [d14dfce952a8e] joining swarm
ess(LocalDefault/172.21.0.0/16, <nil>, map[RequestAddressType:com.docker.network.gateway])"
daemon.go:318: [d54ef6cf8e68b] time="2025-01-03T01:25:20.195069115Z" level=debug msg="Request address PoolID:172.21.0.0/16 Bits: 65536, Unselected: 65534, Sequence: (0x80000000, 1)->(0x0, 2046)->(0x1, 1)->end Curr:0 Serial:false PrefAddress:invalid IP "
daemon.go:318: [d54ef6cf8e68b] time="2025-01-03T01:25:20.195558217Z" level=debug msg="Assigning address to bridge interface docker_gwbridge: 172.21.0.1/16"
daemon.go:318: [d54ef6cf8e68b] time="2025-01-03T01:25:20.195702245Z" level=debug msg="/usr/sbin/ip6tables, [--wait -t filter -C DOCKER-ISOLATION-STAGE-1 -i docker_gwbridge ! -o docker_gwbridge -j DOCKER-ISOLATION-STAGE-2]"
daemon.go:318: [d54ef6cf8e68b] time="2025-01-03T01:25:20.196767481Z" level=debug msg="/usr/sbin/ip6tables, [--wait -t filter -A DOCKER-ISOLATION-STAGE-1 -i docker_gwbridge ! -o docker_gwbridge -j DOCKER-ISOLATION-STAGE-2]"
daemon.go:318: [d54ef6cf8e68b] time="2025-01-03T01:25:20.197696924Z" level=debug msg="/usr/sbin/ip6tables, [--wait -t filter -C DOCKER-ISOLATION-STAGE-2 -o docker_gwbridge -j DROP]"
daemon.go:318: [d54ef6cf8e68b] time="2025-01-03T01:25:20.198597093Z" level=debug msg="/usr/sbin/ip6tables, [--wait -t filter -I DOCKER-ISOLATION-STAGE-2 -o docker_gwbridge -j DROP]"
daemon.go:318: [d54ef6cf8e68b] time="2025-01-03T01:25:20.201369461Z" level=debug msg="Assigning addresses for endpoint gateway_ingress-sbox's interface on network docker_gwbridge"
daemon.go:318: [d54ef6cf8e68b] time="2025-01-03T01:25:20.201385621Z" level=debug msg="RequestAddress(LocalDefault/172.21.0.0/16, <nil>, map[])"
daemon.go:318: [d54ef6cf8e68b] time="2025-01-03T01:25:20.201398816Z" level=debug msg="Request address PoolID:172.21.0.0/16 Bits: 65536, Unselected: 65533, Sequence: (0xc0000000, 1)->(0x0, 2046)->(0x1, 1)->end Curr:2 Serial:false PrefAddress:invalid IP "
daemon.go:318: [d54ef6cf8e68b] time="2025-01-03T01:25:20.255114788Z" level=debug msg="Programming external connectivity on endpoint" eid=1496005f342a2ae5945c1482f54a10dbfe35b14d95a1dc4980585d4ba0427a59 ep=gateway_ingress-sbox net=docker_gwbridge nid=e7e266153773ead66ef66a14bbf16e4151e56febf16b72b2aceca7912609569f spanID=3b68b85cf05914f9 traceID=d9bc69240907892866718ec65beee0cd
daemon.go:318: [d54ef6cf8e68b] time="2025-01-03T01:25:20.264325460Z" level=debug msg="EnableService ingress-sbox START"
daemon.go:318: [d54ef6cf8e68b] time="2025-01-03T01:25:20.264341580Z" level=debug msg="EnableService ingress-sbox DONE"
daemon.go:318: [d54ef6cf8e68b] time="2025-01-03T01:25:20.611449826Z" level=debug msg="handling GET request" method=GET module=api request-url=/v1.48/swarm spanID=4e6e88179fc39aed status=200 traceID=6f9ddf14bfd2728ad850df5ce3dbc734 vars="map[version:1.48]"
daemon.go:318: [d54ef6cf8e68b] time="2025-01-03T01:25:24.733514060Z" level=debug msg="sending heartbeat to manager { } with timeout 5s" method="(*session).heartbeat" module=node/agent node.id=vwp2o5o409lq0lnrwnx0t3g95 session.id=zkkp5l3m9qw383xsijaaeva4z sessionID=zkkp5l3m9qw383xsijaaeva4z
daemon.go:318: [d54ef6cf8e68b] time="2025-01-03T01:25:24.733863744Z" level=debug msg="received heartbeat from worker {[swarm-manager] ww3ks2db16fjax3gcix16o0z4 vwp2o5o409lq0lnrwnx0t3g95 <nil> 127.0.0.1:2477}, expect next heartbeat in 4.706165333s" method="(*Dispatcher).Heartbeat"
daemon.go:318: [d54ef6cf8e68b] time="2025-01-03T01:25:24.733994728Z" level=debug msg="heartbeat successful to manager { }, next heartbeat period: 4.706165333s" method="(*session).heartbeat" module=node/agent node.id=vwp2o5o409lq0lnrwnx0t3g95 session.id=zkkp5l3m9qw383xsijaaeva4z sessionID=zkkp5l3m9qw383xsijaaeva4z
daemon.go:318: [d54ef6cf8e68b] time="2025-01-03T01:25:29.440662518Z" level=debug msg="sending heartbeat to manager { } with timeout 5s" method="(*session).heartbeat" module=node/agent node.id=vwp2o5o409lq0lnrwnx0t3g95 session.id=zkkp5l3m9qw383xsijaaeva4z sessionID=zkkp5l3m9qw383xsijaaeva4z
daemon.go:318: [d54ef6cf8e68b] time="2025-01-03T01:25:29.441041698Z" level=debug msg="received heartbeat from worker {[swarm-manager] ww3ks2db16fjax3gcix16o0z4 vwp2o5o409lq0lnrwnx0t3g95 <nil> 127.0.0.1:2477}, expect next heartbeat in 5.027937282s" method="(*Dispatcher).Heartbeat"
daemon.go:318: [d54ef6cf8e68b] time="2025-01-03T01:25:29.441148989Z" level=debug msg="heartbeat successful to manager { }, next heartbeat period: 5.027937282s" method="(*session).heartbeat" module=node/agent node.id=vwp2o5o409lq0lnrwnx0t3g95 session.id=zkkp5l3m9qw383xsijaaeva4z sessionID=zkkp5l3m9qw383xsijaaeva4z
--- FAIL: TestDockerSwarmSuite/TestSwarmJoinPromoteLocked (11.69s) |
|
@thaJeztah not sure about the log formatting weirdness; the swarm failures are due to the gRPC upgrade. The fix is here: moby/swarmkit#3189 |
|
Yeah, in all honesty, I never paid close attention to these because some of these swarm tests were flaky, but this time I noticed the odd way of them being formatted. Could definitely be an issue in one of our test-utilities as well, so perhaps it's really nothing interesting, but thought I'd at least post it, just in case it is 😂 |
304f7b8 to
e9ea544
Compare
vendor.mod
Outdated
| github.com/containerd/stargz-snapshotter/estargz v0.15.1 // indirect | ||
| github.com/containerd/ttrpc v1.2.5 // indirect | ||
| github.com/containernetworking/cni v1.2.2 // indirect | ||
| github.com/containerd/nydus-snapshotter v0.14.1-0.20241016012921-e55ae70fd45a // indirect |
Looks like they tagged v0.15.0, so we can use a released version; https://github.com/containerd/nydus-snapshotter/releases/tag/v0.15.0
vendor.mod
Outdated
| github.com/containernetworking/plugins v1.5.1 // indirect | ||
| github.com/cyphar/filepath-securejoin v0.3.5 // indirect | ||
| github.com/davecgh/go-spew v1.1.1 // indirect | ||
| github.com/davecgh/go-spew v1.1.2-0.20180830191138-d8f796af33cc // indirect |
sigh; do we know who decided to bump to master? There are literally NO changes; davecgh/go-spew@v1.1.1...d8f796a
I suspect this is Nydus as well;
go mod graph | grep ' github.com/davecgh/[email protected]'
github.com/docker/docker github.com/davecgh/[email protected]
github.com/containerd/containerd/[email protected] github.com/davecgh/[email protected]
github.com/containerd/[email protected] github.com/davecgh/[email protected]
github.com/moby/[email protected] github.com/davecgh/[email protected]
vendor.mod
Outdated
| github.com/pmezard/go-difflib v1.0.0 // indirect | ||
| github.com/pmezard/go-difflib v1.0.1-0.20181226105442-5d4384ee4fb2 // indirect |
Same for this one 😡 who decided to YOLO master for no reason at all? pmezard/go-difflib@v1.0.0...5d4384e
Looks like it may be nydus;
go mod graph | grep ' github.com/pmezard/[email protected]'
github.com/docker/docker github.com/pmezard/[email protected]
github.com/containerd/containerd/[email protected] github.com/pmezard/[email protected]
github.com/containerd/[email protected] github.com/pmezard/[email protected]
github.com/moby/[email protected] github.com/pmezard/[email protected]
No! It's containerd, through cri-api; came in through this PR:
go mod graph | grep ' github.com/davecgh/[email protected]'
github.com/containerd/containerd/v2 github.com/davecgh/[email protected]
k8s.io/[email protected] github.com/davecgh/[email protected]
go mod graph | grep ' github.com/pmezard/[email protected]'
github.com/containerd/containerd/v2 github.com/pmezard/[email protected]
k8s.io/[email protected] github.com/pmezard/[email protected]There was a problem hiding this comment.
And in the cri-api repo it is hard to see which pull request it was brought in with, to see if it was intentional or discussed. We can bump it to a tag in containerd if we want.
- And that came in through kubernetes/cri-api@1171b1d -> Update kustomize, use canonical json-patch v4 import kubernetes/kubernetes#123339
- Which came in through Update kustomize, use canonical json-patch v4 import kubernetes/kubernetes#123339 (comment)
- Which was because of update dependencies google.golang.org/[email protected] kubernetes-sigs/kustomize#5615
- And feat: drop support for Go 1.17 spf13/viper#1574
Looks like someone else also went down the rabbit-hole, and found the culprit, and reverted, but that's gonna take some time to get out of 😞
Opened PRs in kubernetes and containerd to get back to released versions;
e9ea544 to
b8c1bab
Compare
b8c1bab to
c96441e
Compare
|
Rebased, marking as ready for review |
vendor.mod
Outdated
| module github.com/docker/docker | ||
|
|
||
| go 1.22.0 | ||
| go 1.23.0 |
|
Needs a rebase |
|
Yes, working my way through the dependencies; I have a branch locally that's rebased, but will push once those other PRs are in. I'm also considering that we should probably squash the containerd + buildkit update commits |
4ef51f6 to
c00a33b
Compare
Signed-off-by: Derek McGowan <[email protected]>
c00a33b to
f006540
Compare
|
Thanks! PR LGTM, but CI failure looks legit; I also saw it fail consistently on my draft PR after updating BuildKit to v0.19.0-rc1 (from previous master). I just checked (BuildKit failures are rather noisy because of the stack dumps), and it looks like I had the exact same failures; |
|
Looks like the failure above is fixed through; I updated my draft PR (#49279) to buildkit v0.19.0-rc2; if that PR goes green, I'll push an update here. |
|
Whoop; other one is greeeen. Let me push that to this PR |
f006540 to
deeff2e
Compare
go version change in vendor.mod was addressed; #48872 (comment)
thaJeztah
left a comment
CI is happy now, but I just noticed an accidental change; let me fix that up
| Package overlay is a generated protocol buffer package. | ||
| Package overlay is a generated protocol buffer package. | ||
|
|
||
| It is generated from these files: | ||
| drivers/windows/overlay/overlay.proto | ||
| It is generated from these files: | ||
|
|
||
| It has these top-level messages: | ||
| PeerRecord | ||
| drivers/windows/overlay/overlay.proto | ||
|
|
||
| It has these top-level messages: | ||
|
|
||
| PeerRecord |
I suspect this was an accidental change in formatting (this is a generated file)
Update buildkit version to commit which uses 2.0

Signed-off-by: Derek McGowan <[email protected]>
deeff2e to
0aa8fe0
Compare
LGTM
we can merge this once "validate" passes; only diff is the formatting changes in the generated file; https://github.com/moby/moby/compare/deeff2eb507a1fa1356c517724bb81294bf73691..0aa8fe0bf949cbccab74414cb091728854be0d7d
|
LOL, what? Can some other failure cause this? https://github.com/moby/moby/actions/runs/12789048126/job/35651730269?pr=48872 Wondering if tests are run in parallel, and those causing allocations to happen between "first check" and "second check" 🤔 cc @vvoland any other ideas? |
|
All green now; bringing this one in 👍 |
containerd v2.0 has been released. This will need to be updated along with buildkit
Depends on
- Description for the changelog