I was struggling to run automated tests written against the Docker Java APIs. I had set up Rancher earlier without exposing port 2375 and was able to run the tests; however, they started failing recently. I would like to understand: is there any way we can expose the daemon so it can connect to the Moby APIs?
Thanks in advance,
Vij
]]>I need two identical web applications on different URL addresses, each with its own database. So I have created two compose.yml files (compose-env01.yml and compose-env02.yml) for running two environments (env01 and env02). Each environment includes two containers (web and mssql DB).
After building compose-env01.yml and compose-env02.yml, the web images are the same for env01 and env02, and likewise the mssql images are the same for env01 and env02:
See Figure 1 below
The only differences between compose-env01.yml and compose-env02.yml are the ports used, as you can see:
compose-env01.yml:

version: "3.8"
services:
  web:
    build: ./web-mw
    isolation: 'default'
    hostname: web
    deploy:
      resources:
        limits:
          memory: 8G
    ports:
      - "7001:443"
    environment:
      "#CONFSETTINGS_PORT_MW#": "7001"
      "#CONFSETTINGS_MSSQLSERVERPORT#": "14331"
    depends_on:
      - mssql
  mssql:
    build: ./database
    isolation: 'default'
    hostname: mssql
    ports:
      - "14011:14331"
    expose:
      - "14331"
    volumes:
      - .\volumes\db:C:\sqlbak
    storage_opt:
      size: '25G'
    environment:
      SA_PASSWORD: "Your_password"
      ACCEPT_EULA: "Y"
      MSSQL_PID: "Developer"
      ChangeMSSQLServerPort: "TRUE"
      MSSQLServerPort: "14331"
compose-env02.yml:

version: "3.8"
services:
  web:
    build: ./web-mw
    isolation: 'default'
    hostname: web
    deploy:
      resources:
        limits:
          memory: 8G
    ports:
      - "7002:443"
    environment:
      "#CONFSETTINGS_PORT_MW#": "7002"
      "#CONFSETTINGS_MSSQLSERVERPORT#": "14332"
    depends_on:
      - mssql
  mssql:
    build: ./database
    isolation: 'default'
    hostname: mssql
    ports:
      - "14012:14332"
    expose:
      - "14332"
    volumes:
      - .\volumes\db:C:\sqlbak
    storage_opt:
      size: '25G'
    environment:
      SA_PASSWORD: "Your_password"
      ACCEPT_EULA: "Y"
      MSSQL_PID: "Developer"
      ChangeMSSQLServerPort: "TRUE"
      MSSQLServerPort: "14332"
When I run the containers from env01 using the command "docker compose -p env01 -f compose-env01.yml up -d", both containers start correctly. Then I want to run the containers from env02 using "docker compose -p env02 -f compose-env02.yml up -d". But now only the container env02-mssql-1 starts; the container env02-web-1 remains in the Starting state:
See Figure 2 below
And this is the problem! It only happens on a Docker host running Windows Server 2022. The same configuration on a Docker host running Windows Server 2019 is OK, and all containers from env01 and env02 start correctly.
I also tried running docker compose up with the --verbose parameter and with "debug": true in daemon.json, but neither gave me any further information. When starting the containers from compose-env02.yml, the container env02-mssql-1 started correctly, the container env02-web-1 didn't even start to run, and no further information was displayed. So after that I canceled the start of the containers from compose-env02.yml. The state of the containers from both environments was then this:
See Figure 3 below
And now a very interesting observation:
When I shut down the containers from env01, the container env02-web-1 started automatically immediately after that, as you can see:
See Figure 4 below
Given that, I think it is probably some network issue. Interestingly, however, the same configuration works on a Windows Server 2019 Docker host.
Figures:
The versions used are:
Docker host with OS Windows Server 2022
Docker version 24.0.4, build 3713ee1
Docker Compose version v2.20.2
So could anybody please help me with this problem? Or could it be a bug in Docker?
]]>My env
= Linux kernel 5.4.0-139-generic Ubuntu arm64
Example:
RUN wget github.com/....../xyz.txt -O xyz.txt
The file we are trying to download can change from time to time, but the Docker engine considers the command unchanged, which is not aligned with the state of the image generated after the command is executed.
Has the community discussed this topic in the past? Or is this intentional?
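For what it's worth, there are two common workarounds, sketched below; the URL is a placeholder since the real one was not given. ADD with a remote URL should re-download and checksum the file on each build, so a changed file invalidates the cache, whereas RUN is cached purely on the command string; alternatively, a build argument can be varied to bust the cache for the wget layer manually.

```dockerfile
# Option 1: ADD re-checks the remote file's contents on every build,
# unlike RUN wget, so a changed file invalidates the cache from here on.
ADD https://github.com/example/repo/raw/main/xyz.txt /xyz.txt

# Option 2: changing CACHE_BUST (e.g. --build-arg CACHE_BUST=$(date +%s))
# invalidates the cache for this and all following layers.
ARG CACHE_BUST=1
RUN wget https://github.com/example/repo/raw/main/xyz.txt -O xyz.txt
```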
]]>CreateContainerErrors when running the docker runtime (v20.10.6) on GKE.
Not sure why I'm getting this, or whether an upgrade to the latest version of the Docker engine would solve the problem. Any insight into the root cause would be greatly appreciated, because the logs don't suggest a reason.
]]>I'm trying to run a container programmatically from Go using the API.
The docker run command is like this:

docker container run -a stdout -a stderr --stop-timeout 0 --rm -v sourceDir:destDir --network none -m 1GB my-image:latest args1 args2 args3

Below is the code I am using to try to run the container, and it doesn't work as expected:

package main

import (
	"context"
	"fmt"
	"os/exec"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/api/types/container"
	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewClientWithOpts()
	if err != nil {
		fmt.Println("Unable to create docker client")
		panic(err)
	}
	ctx := context.Background()
	cont, err := cli.ContainerCreate(
		ctx,
		&container.Config{
			Image:        "my-image:latest",
			AttachStdout: true,
			AttachStderr: true,
			Volumes: map[string]struct{}{
				"sourceDir:destDir": {},
			},
			Entrypoint: []string{
				"arg1", "arg2", "arg3",
			},
			StopTimeout:     new(int),
			NetworkDisabled: true,
		},
		nil,
		nil,
		nil,
		"",
	)
	if err != nil {
		panic(err)
	}
	cli.ContainerStart(ctx, cont.ID, types.ContainerStartOptions{})
	cli.ContainerRemove(ctx, cont.ID, types.ContainerRemoveOptions{Force: true})
}

Any suggestions on what might be wrong?
There are a few issues with the code you posted:

1. The host bind mount (-v sourceDir:destDir) does not belong in the Volumes field of container.Config, which only declares anonymous volume paths inside the container. Bind mounts go into the HostConfig, e.g. as a mount.Mount of type mount.TypeBind (the source must be an absolute path on the host).
2. The memory limit (-m 1GB) and --rm (AutoRemove) from your docker run command are also HostConfig settings, but you pass nil for the HostConfig.
3. args1 args2 args3 after the image name in docker run are the command arguments (Cmd), not the Entrypoint.
4. The unused "os/exec" import prevents the program from compiling.
5. The errors returned by cli.ContainerStart and cli.ContainerRemove are ignored, so any failure is silent.
6. ContainerRemove is called immediately after ContainerStart, which force-removes the container before it can do any work. With AutoRemove set, you can drop that call and wait for the container to exit instead.

Here is an updated version of your code with these changes applied:

package main

import (
	"context"
	"fmt"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/api/types/container"
	"github.com/docker/docker/api/types/mount"
	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		fmt.Println("Unable to create docker client")
		panic(err)
	}
	ctx := context.Background()

	stopTimeout := 0 // --stop-timeout 0
	cont, err := cli.ContainerCreate(
		ctx,
		&container.Config{
			Image:           "my-image:latest",
			AttachStdout:    true, // -a stdout
			AttachStderr:    true, // -a stderr
			Cmd:             []string{"args1", "args2", "args3"},
			StopTimeout:     &stopTimeout,
			NetworkDisabled: true, // --network none
		},
		&container.HostConfig{
			Mounts: []mount.Mount{{
				Type:   mount.TypeBind,
				Source: "sourceDir", // must be an absolute path on the host
				Target: "destDir",
			}},
			Resources:  container.Resources{Memory: 1 << 30}, // -m 1GB
			AutoRemove: true,                                 // --rm
		},
		nil,
		nil,
		"",
	)
	if err != nil {
		panic(err)
	}
	if err := cli.ContainerStart(ctx, cont.ID, types.ContainerStartOptions{}); err != nil {
		panic(err)
	}

	// Wait for the container to finish; AutoRemove cleans it up on exit.
	waitCh, errCh := cli.ContainerWait(ctx, cont.ID, container.WaitConditionNotRunning)
	select {
	case status := <-waitCh:
		fmt.Println("container exited with status", status.StatusCode)
	case err := <-errCh:
		panic(err)
	}
}
]]>docker container run -a stdout -a stderr --stop-timeout 0 --rm -v sourceDir:destDir --network none -m 1GB my-image:latest args1 args2 args3
Below is the code I am using to try to run the container, and it doesn't work as expected:
package main

import (
	"context"
	"fmt"
	"os/exec"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/api/types/container"
	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewClientWithOpts()
	if err != nil {
		fmt.Println("Unable to create docker client")
		panic(err)
	}
	ctx := context.Background()
	cont, err := cli.ContainerCreate(
		ctx,
		&container.Config{
			Image:        "my-image:latest",
			AttachStdout: true,
			AttachStderr: true,
			Volumes: map[string]struct{}{
				"sourceDir:destDir": {},
			},
			Entrypoint: []string{
				"arg1", "arg2", "arg3",
			},
			StopTimeout:     new(int),
			NetworkDisabled: true,
		},
		nil,
		nil,
		nil,
		"",
	)
	if err != nil {
		panic(err)
	}
	cli.ContainerStart(ctx, cont.ID, types.ContainerStartOptions{})
	cli.ContainerRemove(ctx, cont.ID, types.ContainerRemoveOptions{Force: true})
}
Any suggestions on what might be wrong?
PS: No exceptions occur; it just doesn't work… I doubt whether the volumes and args are mapped correctly.
]]>So while you could technically use Moby directly, it would make more sense to just use Docker.
]]>ports:
- "8080:3000"
What I want is for only my local network to be able to reach this port. I have figured out that by using the following nftables rule
nft insert rule filter DOCKER-USER position 0 'ip saddr != 192.168.178.0/24 ip daddr 172.19.0.4 tcp dport 3000 jump DOCKER-USER-DROP'
I can get the behavior I want, but this forces me to check what the internal container's IP address and internal port are. I'd prefer to specify directly the external port I am trying to protect, but to achieve this I'd need to add the rule to a chain similar to DOCKER-USER in the NAT table. The problem, then, is that I don't think there is such a chain managed by Docker itself… or is there? On my Debian 11 I have the following:
table ip nat {
[...]
chain DOCKER {
iifname "br-313449142f9f" counter packets 0 bytes 0 return
iifname "docker0" counter packets 0 bytes 0 return
iifname != "br-313449142f9f" meta l4proto tcp tcp dport 8080 counter packets 63 bytes 3700 dnat to 172.19.0.4:3000
iifname != "br-313449142f9f" meta l4proto tcp ip daddr 127.0.0.1 tcp dport 5432 counter packets 0 bytes 0 dnat to 172.19.0.2:5432
iifname != "br-313449142f9f" meta l4proto tcp ip daddr 127.0.0.1 tcp dport 3000 counter packets 0 bytes 0 dnat to 172.19.0.3:3000
}
[...]
}
and I'd expect something like what I have in the filter table:
table ip filter {
[...]
chain FORWARD {
type filter hook forward priority filter; policy accept;
counter packets 7591 bytes 931764 jump DOCKER-ISOLATION-STAGE-1
counter packets 7591 bytes 931764 jump DOCKER-USER
oifname "br-313449142f9f" ct state related,established counter packets 6821 bytes 874506 accept
oifname "br-313449142f9f" counter packets 49 bytes 2860 jump DOCKER
iifname "br-313449142f9f" oifname != "br-313449142f9f" counter packets 337 bytes 31358 accept
iifname "br-313449142f9f" oifname "br-313449142f9f" counter packets 0 bytes 0 accept
oifname "docker0" ct state related,established counter packets 0 bytes 0 accept
oifname "docker0" counter packets 0 bytes 0 jump DOCKER
iifname "docker0" oifname != "docker0" counter packets 0 bytes 0 accept
iifname "docker0" oifname "docker0" counter packets 0 bytes 0 accept
}
chain DOCKER-USER {
oifname "lo" counter packets 0 bytes 0 jump DOCKER-USER-DENY-INTERNAL
oifname "enp*" counter packets 707 bytes 53558 jump DOCKER-USER-DENY-INTERNAL
counter packets 7207 bytes 908724 return
}
[...]
}
so… how can I get this to work? I am surprised to have found very little documentation about this, when I'd expect this situation (preventing ports from being open to the internet) to be pretty well documented. This makes me think I might be doing something wrong.
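One possible approach, sketched below under the assumption that the nftables version in use supports matching conntrack's record of the original (pre-DNAT) destination: match on the published host port in DOCKER-USER via ct original, so neither the container IP nor the internal port needs to appear in the rule.

```shell
# Match the original destination port recorded by conntrack before
# Docker's DNAT rewrote it, so the rule can reference the published
# host port 8080 instead of 172.19.0.4:3000.
nft insert rule ip filter DOCKER-USER \
    'ip saddr != 192.168.178.0/24 ct original proto-dst 8080 counter drop'
```

This keeps the rule stable across container restarts, since the container's address no longer appears in it.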
Thank you!
When I use docker run with the -v flag to mount directories, any files the container writes back to the host end up with bad permissions, and I have to use sudo to operate on the files.
This happens when using Docker from a Windows Subsystem for Linux (WSL) environment.
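A common workaround, sketched here under the assumption that the written files should belong to the invoking user (the image, paths, and command are placeholders), is to run the container with the host user's UID and GID:

```shell
# Run the container as the calling user, so files written to the
# bind mount are owned by that user on the host instead of root.
docker run --rm \
  -u "$(id -u):$(id -g)" \
  -v "$PWD/output:/output" \
  alpine sh -c 'touch /output/result.txt'
```

Note that the image must be able to run as a non-root user for this to work.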
]]>I'd appreciate any ideas on why this happened.
]]>You can get started with Moby by running some of the example assemblies in the [LinuxKit GitHub] repository, or in a browser with Play with Moby.
The Play with Moby link is invalid.
]]>I’d appreciate any feedback people have on this tool. Is it useful? Does it do what you expect/want?
Thanks in advance
docker volume list? ]]>docker system df shows no data, yet the /var/lib/overlay2 directory contains ~250GB of data. I just stopped Docker, wiped that directory, and restarted Docker, and it seems not to have caused any issues. But how is all this data in that directory when docker ps -a shows no containers?
Docker version 19.03.12, build 48a66213fe
]]>I tried enabling docker experimental features, yet I get this:
√ ; docker --version
Docker version 19.03.13, build 4484c46
√ ; docker version -f '{{.Server.Experimental}}'
true
√ ; docker app help
docker: 'app' is not a docker command.
See 'docker --help'
✗ 1 ; rpm -qa | rg -i moby
moby-engine-19.03.13-1.ce.git4484c46.fc33.x86_64
√ ; uname -a
Linux nightwatch 5.10.11-200.fc33.x86_64 #1 SMP Wed Jan 27 20:21:22 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
√ ;
Am I doing something wrong there?
]]>I’d say the roadmap still fits well with the current work being done on the project.
For the contributing links, slackarchive seems to have been cut off due to a TOS change for Slack. I was not aware; we should remove the link.
]]>This issue is important to me, but now that I'm looking around, I'm starting to reconsider my desire to get involved. Am I looking in the wrong places, or is Moby close to its EOL? Thanks!
]]>I am part of a group of student researchers from Clemson University working on a semester project for Dr. Paige Rodeghero (https://paigerodeghero.com/) relating to the effects of linter and formatter usage on contributor productivity and git blaming scenarios.
We would like to invite you and your contributors to the Moby repository on Github to take part in our short (3-5 minute) anonymous survey: https://clemson.ca1.qualtrics.com/jfe/form/SV_9XIGxd2xq7fsDBP
If there are any questions or concerns relating to this survey, please contact any of the following student researchers:
Ella Kokinda - [email protected]
Yarui Cao - [email protected]
Manavi Sattigarahalli - [email protected]
Best, Ella
]]>The build log looks like this:
Removing bundles/
---> Making bundle: cross (in bundles/cross)
Cross building: bundles/cross/windows/amd64
Building: bundles/cross/windows/amd64/dockerd-dev.exe
GOOS="windows" GOARCH="amd64" GOARM=""
Created binary: bundles/cross/windows/amd64/dockerd-dev.exe
Cloning into '/go/src/github.com/docker/windows-container-utility'...
remote: Enumerating objects: 2, done.
remote: Counting objects: 100% (2/2), done.
remote: Compressing objects: 100% (2/2), done.
remote: Total 23 (delta 0), reused 0 (delta 0), pack-reused 21
Unpacking objects: 100% (23/23), done.
Building: bundles/cross/windows/amd64/containerutility.exe
and then it just exits.
Any idea what I might be doing wrong?
The above error hit me when my colleague asked if I could find out why his Docker daemon didn't start. After a day of searching and debugging, I think I found the reason, and I have documented it in an open ticket that looks related to this issue, or might actually be exactly this issue (https://github.com/moby/moby/issues/33925). I also found several other tickets touching on the same issue, most of them closed without a solution, and several people discussing it in personal blogs, but never a real solution.
I would like to get in touch with someone who can look at this together with me and see if we can find common ground. I'm not a developer, so I'm really not able to get a pull request ready for my proposed fix, but I think I'm very well able to explain to a developer why this bug needs attention.
Best regards,
Jan Hugo Prins
If I got it right, the docker-compose part is OK: calls to the network_connect API are correctly ordered.
Because I didn't have time to dig into the docker-py and moby parts of this, today I can just confirm that those priorities are ignored upon container start: https://gist.github.com/jfellus/cfee9efc1e8e1baf9d15314f16a46eca
Networks and containers get connected in the right order, but then when we up/down the said docker-compose project, containers get their interfaces created in the lexical order of their network names, which is, say, arbitrary…
What I haven't had time to really understand is where this particular decision is made when starting a stopped container that has been connected in a particular order to various networks while in a stopped state. Is there any hope of easily coding a PR to make the priority settings pass through docker-py, get registered as libnetwork endpoint configs, and have container interfaces created in the defined order?
Thanks !
]]>I would highly encourage you to install a currently maintained version (the current version is Docker 19.03.x at the time of writing) (https://docs.docker.com/engine/install/). Older releases are not maintained and can have unpatched vulnerabilities (in some cases, critical).
]]>docker version command)
That error may mean that you have a very old version of Docker installed that does not yet support multi-stage builds.
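For context, the --target flag selects a stage from a multi-stage Dockerfile, a feature introduced in Docker 17.05; a minimal sketch (the image names here are just examples, not from the Moby build):

```dockerfile
# Build stage, named "builder"
FROM golang:1.20 AS builder
WORKDIR /src
COPY . .
RUN go build -o /app .

# Final stage: copy only the built artifact
FROM alpine
COPY --from=builder /app /usr/local/bin/app
ENTRYPOINT ["app"]
```

Running "docker build --target builder ." builds only the first stage, which is what the Moby Makefile's dev target relies on; a daemon that predates multi-stage builds rejects the flag as unknown.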
]]>make BIND_DIR=. shell
and get the below error.
docker build --build-arg=GO_VERSION -f "Dockerfile" --target=dev -t "docker-dev:dry-run-test" .
unknown flag: --target
See 'docker build --help'.
make: *** [build] Error 125
]]>Thank you.
Out of curiosity, do you know why they stopped following the time-based release process as stated here:
https://github.com/moby/moby/wiki ?
Thanks!
]]>
Yes, I know, but unfortunately our repo does not support plugins, it seems.
I should create it the same way I would tag a container, like
repo.name/pluginname:tag
and push it, right?
This does not work in our private repo.
How can I tar one and restore it?
With docker save it did not work for me, as there is no "normal" image.
Apart from that, can I just copy a folder to the other device?
]]>Managed plugins are special container images. You can create one from a tarball, and push/pull to/from a registry.
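Assuming the tarball unpacks to a directory containing a config.json and a rootfs/ (the tag below mirrors the one from the question; ./plugin-dir is a placeholder), the round trip might look like:

```shell
# Create a managed plugin from a local directory that holds
# config.json and rootfs/ (e.g. the unpacked tarball):
docker plugin create repo.name/pluginname:tag ./plugin-dir

# Push to / install from a registry that accepts plugins:
docker plugin push repo.name/pluginname:tag
docker plugin install repo.name/pluginname:tag
```

If the private registry rejects plugin pushes, copying the unpacked directory to the other machine and running docker plugin create there is a workaround, since no registry is involved in that step.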
]]>