I was struggling to run automated tests written against the Docker Java APIs. I had set up Rancher earlier without exposing port 2375, and I was able to run the tests, but they started failing recently. Is there any way to expose the daemon so the tests can connect to the Moby APIs?
Thanks in advance,
Vij
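For reference, a minimal sketch of exposing the daemon API on TCP port 2375 via daemon.json (the path and keys below are the standard dockerd configuration; whether they apply unchanged under a Rancher-managed engine is an assumption):

```
{
  "hosts": ["tcp://0.0.0.0:2375", "unix:///var/run/docker.sock"]
}
```

Note that an unauthenticated TCP socket is insecure; for anything beyond a local test host, TLS on port 2376 is strongly recommended.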
1 post - 1 participant
]]>I need two identical web applications at different URL addresses, each with its own database. So I created two compose files (compose-env01.yml and compose-env02.yml) for running two environments (env01 and env02). Each environment includes two containers (web and an MSSQL DB).
After building from compose-env01.yml and compose-env02.yml, the web image is the same for env01 and env02, and likewise the mssql image is the same for env01 and env02:
See Figure 1 below
The differences between compose-env01.yml and compose-env02.yml are only in the ports used, as you can see:
compose-env01.yml:
version: "3.8"
services:
  web:
    build: ./web-mw
    isolation: 'default'
    hostname: web
    deploy:
      resources:
        limits:
          memory: 8G
    ports:
      - "7001:443"
    environment:
      "#CONFSETTINGS_PORT_MW#": "7001"
      "#CONFSETTINGS_MSSQLSERVERPORT#": "14331"
    depends_on:
      - mssql
  mssql:
    build: ./database
    isolation: 'default'
    hostname: mssql
    ports:
      - "14011:14331"
    expose:
      - "14331"
    volumes:
      - .\volumes\db:C:\sqlbak
    storage_opt:
      size: '25G'
    environment:
      SA_PASSWORD: "Your_password"
      ACCEPT_EULA: "Y"
      MSSQL_PID: "Developer"
      ChangeMSSQLServerPort: "TRUE"
      MSSQLServerPort: "14331"
compose-env02.yml:
version: "3.8"
services:
  web:
    build: ./web-mw
    isolation: 'default'
    hostname: web
    deploy:
      resources:
        limits:
          memory: 8G
    ports:
      - "7002:443"
    environment:
      "#CONFSETTINGS_PORT_MW#": "7002"
      "#CONFSETTINGS_MSSQLSERVERPORT#": "14332"
    depends_on:
      - mssql
  mssql:
    build: ./database
    isolation: 'default'
    hostname: mssql
    ports:
      - "14012:14332"
    expose:
      - "14332"
    volumes:
      - .\volumes\db:C:\sqlbak
    storage_opt:
      size: '25G'
    environment:
      SA_PASSWORD: "Your_password"
      ACCEPT_EULA: "Y"
      MSSQL_PID: "Developer"
      ChangeMSSQLServerPort: "TRUE"
      MSSQLServerPort: "14332"
When I run the containers from env01 using the command "docker compose -p env01 -f compose-env01.yml up -d", both containers start correctly. Then I want to run the containers from env02 using "docker compose -p env02 -f compose-env02.yml up -d". But now only the container env02-mssql-1 starts; the container env02-web-1 remains in the state Starting:
See Figure 2 below
And this is the problem! It only happens on a Docker host running Windows Server 2022. The same configuration on a Docker host with Windows Server 2019 is OK, and all containers from env01 and env02 start correctly.
I also tried running docker compose up with the parameter "--verbose" and with "debug": true in daemon.json, but that didn't give me any further information. When starting the containers from compose-env02.yml, the container env02-mssql-1 started correctly, the container env02-web-1 didn't even start to run, and no further information was displayed. So after that I canceled the start of the containers from compose-env02.yml. The state of the containers from both environments was then this:
See Figure 3 below
And now a very interesting observation: when I shut down the containers from env01, the container env02-web-1 immediately started automatically, as you can see:
See Figure 4 below
Given that, I think it is probably some network issue. Interestingly, however, the same configuration works on a Windows Server 2019 Docker host.
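One Windows-specific thing that may be worth checking (an assumption, not a confirmed diagnosis for this case) is whether any of the published ports fall into a WinNAT/Hyper-V excluded port range, which can silently prevent bindings on newer Windows Server builds:

```
netsh interface ipv4 show excludedportrange protocol=tcp
```

If 7002 or the mapped SQL ports appear inside an excluded range on the 2022 host but not on the 2019 host, that would explain the difference.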
The versions used are:
Docker host with OS Windows Server 2022
Docker version 24.0.4, build 3713ee1
Docker Compose version v2.20.2
So could anybody please help me with this problem? Or could it be a bug in Docker?
2 posts - 2 participants
]]>My env:
Linux kernel 5.4.0-139-generic, Ubuntu, arm64
2 posts - 2 participants
]]>Example:
RUN wget github.com/....../xyz.txt -O xyz.txt
The file we are trying to download can change from time to time, but the Docker engine considers the command unchanged, which is not aligned with the state of the image generated after the command is executed.
Has the community discussed this topic in the past? Or is this intentional?
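For context, a common cache-busting pattern (a general Docker technique, not something stated in this thread) is to thread a build argument into the layer so its cache key changes whenever a fresh download is wanted:

```
ARG CACHE_BUST=1
RUN echo "cache bust: ${CACHE_BUST}" && wget github.com/....../xyz.txt -O xyz.txt
```

Passing a new value, e.g. docker build --build-arg CACHE_BUST=$(date +%s) ., invalidates the cache for that RUN and everything after it.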
1 post - 1 participant
]]>I am getting CreateContainerErrors when running the Docker runtime (v20.10.6) on GKE.
Not sure why I'm getting this, or whether an upgrade to the latest version of the Docker engine would solve the problem. Any insight into the root cause would be greatly appreciated, because the logs don't suggest a reason.
1 post - 1 participant
]]>docker container run -a stdout -a stderr --stop-timeout 0 --rm -v sourceDir:destDir --network none -m 1GB my-image:latest args1 args2 args3
Below is the code I am using to try to run the container, and it doesn't work as expected:
package main

import (
	"context"
	"fmt"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/api/types/container"
	"github.com/docker/docker/client"
)

func main() {
	cli, err := client.NewClientWithOpts()
	if err != nil {
		fmt.Println("Unable to create docker client")
		panic(err)
	}
	ctx := context.Background()
	cont, err := cli.ContainerCreate(
		ctx,
		&container.Config{
			Image:        "my-image:latest",
			AttachStdout: true,
			AttachStderr: true,
			Volumes: map[string]struct{}{
				"sourceDir:destDir": {},
			},
			Entrypoint: []string{
				"arg1", "arg2", "arg3",
			},
			StopTimeout:     new(int),
			NetworkDisabled: true,
		},
		nil,
		nil,
		nil,
		"",
	)
	if err != nil {
		panic(err)
	}
	cli.ContainerStart(ctx, cont.ID, types.ContainerStartOptions{})
	cli.ContainerRemove(ctx, cont.ID, types.ContainerRemoveOptions{Force: true})
}
Any suggestions on what could be wrong?
PS: No exceptions occur; it just doesn't work. I doubt whether the volumes and args are mapped correctly.
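For comparison, here is an untested sketch (an assumption about how the CLI flags map onto the API, not a verified implementation) of the same docker container run invocation: the bind mount belongs in HostConfig.Binds rather than Config.Volumes, the trailing arguments are normally Cmd rather than Entrypoint, and -m / --network / --rm are HostConfig settings:

```go
// Untested sketch; assumes the same SDK version and variables (cli, ctx) as above.
hostConfig := &container.HostConfig{
	Binds:       []string{"sourceDir:destDir"},        // -v sourceDir:destDir
	NetworkMode: "none",                               // --network none
	AutoRemove:  true,                                 // --rm
	Resources:   container.Resources{Memory: 1 << 30}, // -m 1GB
}
cfg := &container.Config{
	Image:        "my-image:latest",
	Cmd:          []string{"args1", "args2", "args3"}, // positional args, not Entrypoint
	AttachStdout: true,
	AttachStderr: true,
	StopTimeout:  new(int), // --stop-timeout 0
}
cont, err := cli.ContainerCreate(ctx, cfg, hostConfig, nil, nil, "")
if err != nil {
	panic(err)
}
if err := cli.ContainerStart(ctx, cont.ID, types.ContainerStartOptions{}); err != nil {
	panic(err) // ContainerStart's error is silently dropped in the code above
}
```

Also note that calling ContainerRemove with Force: true immediately after starting will kill the container before it does any work; with AutoRemove set, the explicit remove would be unnecessary anyway.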
2 posts - 2 participants
]]>ports:
- "8080:3000"
Here is the situation: I want only my local network to be able to reach this port. I have figured out that with the following nftables rule
nft insert rule filter DOCKER-USER position 0 'ip saddr != 192.168.178.0/24 ip daddr 172.19.0.4 tcp dport 3000 jump DOCKER-USER-DROP'
I can get the behavior I want, but this forces me to check what the container's internal IP address and internal port are. I'd prefer to specify the external port I am trying to protect directly, but to achieve this I'd need to add the rule to a chain similar to DOCKER-USER in the NAT table. The problem, then, is that I think there is no such chain managed by Docker itself… or is there? On my Debian 11 I have the following:
table ip nat {
[...]
chain DOCKER {
iifname "br-313449142f9f" counter packets 0 bytes 0 return
iifname "docker0" counter packets 0 bytes 0 return
iifname != "br-313449142f9f" meta l4proto tcp tcp dport 8080 counter packets 63 bytes 3700 dnat to 172.19.0.4:3000
iifname != "br-313449142f9f" meta l4proto tcp ip daddr 127.0.0.1 tcp dport 5432 counter packets 0 bytes 0 dnat to 172.19.0.2:5432
iifname != "br-313449142f9f" meta l4proto tcp ip daddr 127.0.0.1 tcp dport 3000 counter packets 0 bytes 0 dnat to 172.19.0.3:3000
}
[...]
}
and I’d expect something like I have in the filter table:
table ip filter {
[...]
chain FORWARD {
type filter hook forward priority filter; policy accept;
counter packets 7591 bytes 931764 jump DOCKER-ISOLATION-STAGE-1
counter packets 7591 bytes 931764 jump DOCKER-USER
oifname "br-313449142f9f" ct state related,established counter packets 6821 bytes 874506 accept
oifname "br-313449142f9f" counter packets 49 bytes 2860 jump DOCKER
iifname "br-313449142f9f" oifname != "br-313449142f9f" counter packets 337 bytes 31358 accept
iifname "br-313449142f9f" oifname "br-313449142f9f" counter packets 0 bytes 0 accept
oifname "docker0" ct state related,established counter packets 0 bytes 0 accept
oifname "docker0" counter packets 0 bytes 0 jump DOCKER
iifname "docker0" oifname != "docker0" counter packets 0 bytes 0 accept
iifname "docker0" oifname "docker0" counter packets 0 bytes 0 accept
}
chain DOCKER-USER {
oifname "lo" counter packets 0 bytes 0 jump DOCKER-USER-DENY-INTERNAL
oifname "enp*" counter packets 707 bytes 53558 jump DOCKER-USER-DENY-INTERNAL
counter packets 7207 bytes 908724 return
}
[...]
}
So… how can I get this to work? I am surprised to have found so little documentation about this, since I'd expect this situation (preventing ports from being open to the internet) to be pretty well documented. That makes me think I might be doing something wrong.
Thank you!
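A possible way to match the external published port directly (an assumption adapted from the conntrack-based approach in Docker's packet-filtering documentation, not verified on this exact setup) is to test the pre-DNAT destination port in DOCKER-USER via conntrack, so neither the container IP nor the internal port is needed:

```
nft insert rule ip filter DOCKER-USER ip saddr != 192.168.178.0/24 ct original proto-dst 8080 counter jump DOCKER-USER-DROP
```

By the time packets traverse DOCKER-USER they have already been DNATed to 172.19.0.4:3000, but conntrack still remembers the original destination port (8080 here).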
1 post - 1 participant
]]>If I use docker run with the -v flag to mount directories, then any files the container writes back to the host end up with bad permissions. I have to use sudo to operate on the files.
This happens when using Docker from a Windows Subsystem for Linux (WSL) environment.
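A common mitigation on Linux hosts (a general pattern, assumed rather than verified for this WSL setup; the image and paths below are placeholders) is to run the container with the host user's UID and GID, so files written through the bind mount are owned by the invoking user:

```
docker run --user "$(id -u):$(id -g)" -v "$PWD/out:/out" my-image
```

This only helps when the image's entrypoint can run as a non-root user.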
1 post - 1 participant
]]>I'd appreciate any ideas about why this happened.
1 post - 1 participant
]]>You can get started with Moby by running some of the example assemblies in the [LinuxKit GitHub] repository, or in a browser with Play with Moby.
The Play with Moby link is invalid.
1 post - 1 participant
]]>I’d appreciate any feedback people have on this tool. Is it useful? Does it do what you expect/want?
Thanks in advance
2 posts - 2 participants
]]>I tried enabling docker experimental features, yet I get this:
√ ; docker --version
Docker version 19.03.13, build 4484c46
√ ; docker version -f '{{.Server.Experimental}}'
true
√ ; docker app help
docker: 'app' is not a docker command.
See 'docker --help'
✗ 1 ; rpm -qa | rg -i moby
moby-engine-19.03.13-1.ce.git4484c46.fc33.x86_64
√ ; uname -a
Linux nightwatch 5.10.11-200.fc33.x86_64 #1 SMP Wed Jan 27 20:21:22 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
√ ;
Am I doing something wrong there?
1 post - 1 participant
]]>This issue is important to me, but now that I'm looking around, I'm starting to reconsider my desire to get involved. Am I looking in the wrong places, or is Moby close to its EOL? Thanks!
2 posts - 2 participants
]]>I am part of a group of student researchers from Clemson University working on a semester project for Dr. Paige Rodeghero (https://paigerodeghero.com/) about how linter and formatter usage affects contributor productivity and git-blame scenarios.
We would like to invite you and your contributors to the Moby repository on Github to take part in our short (3-5 minute) anonymous survey: https://clemson.ca1.qualtrics.com/jfe/form/SV_9XIGxd2xq7fsDBP
If there are any questions or concerns relating to this survey, please contact any of the following student researchers:
Ella Kokinda - [email protected]
Yarui Cao - [email protected]
Manavi Sattigarahalli - [email protected]
Best, Ella
1 post - 1 participant
]]>The build log looks like this:
Removing bundles/
---> Making bundle: cross (in bundles/cross)
Cross building: bundles/cross/windows/amd64
Building: bundles/cross/windows/amd64/dockerd-dev.exe
GOOS="windows" GOARCH="amd64" GOARM=""
Created binary: bundles/cross/windows/amd64/dockerd-dev.exe
Cloning into '/go/src/github.com/docker/windows-container-utility'...
remote: Enumerating objects: 2, done.
remote: Counting objects: 100% (2/2), done.
remote: Compressing objects: 100% (2/2), done.
remote: Total 23 (delta 0), reused 0 (delta 0), pack-reused 21
Unpacking objects: 100% (23/23), done.
Building: bundles/cross/windows/amd64/containerutility.exe
and then it just exits.
Any idea what I might be doing wrong?
1 post - 1 participant
]]>The above error came to my attention when my colleague asked me whether I could find out why his Docker daemon didn't start. After a day of searching and debugging, I think I found the reason, and I have documented it in an open ticket that looks related to this issue, or might actually be exactly this issue (https://github.com/moby/moby/issues/33925). I also found several other tickets touching on the same issue, most of them closed without a solution, and several people discussing the issue in personal blogs, but never a real solution.
I would like to get in touch with someone who can look at this together with me and see if we can find common ground. I'm not a developer, so I'm really not able to get a pull request ready for my proposed fix, but I think I'm well able to explain to a developer why this bug needs some attention.
Best regards,
Jan Hugo Prins
1 post - 1 participant
]]>If I got it right, the docker-compose part is OK: calls to the network_connect API are correctly ordered.
Because I didn’t have time to dig into the docker-py and moby parts of this, today I can just confirm that those priorities are ignored upon container start : https://gist.github.com/jfellus/cfee9efc1e8e1baf9d15314f16a46eca
Networks and containers get connected in the right order, but then, when we up/down the said docker-compose project, containers get their interfaces created in the lexical order of their network names, which is, say, arbitrary…
What I haven't had time to really understand is where this particular decision is made when starting a stopped container that has been connected, in a particular order, to various networks while in the stopped state. Is there any hope of easily coding a PR to make the priority settings pass through docker-py, get registered as libnetwork endpoint configs, and have the container interfaces created in the defined order?
Thanks !
1 post - 1 participant
]]>make BIND_DIR=. shell
and get the below error.
docker build --build-arg=GO_VERSION -f "Dockerfile" --target=dev -t "docker-dev:dry-run-test" .
unknown flag: --target
See 'docker build --help'.
make: *** [build] Error 125
5 posts - 3 participants
]]>Thanks!
3 posts - 2 participants
]]>On my Ubuntu 18.04 I installed Moby using apt.
I know how to create my own Docker plugin using the go-helper, and I can 'docker plugin install' a self-made log plugin with its own rootfs.
The intention of all this is to forward the stdout of all containers to an already existing, non-standard log system.
For several reasons, I need to make the plugin an external plugin, such that it is started externally and discovered by Docker by one of the three methods described here:
(I am aware that they do not state log plugins there explicitly.)
I created a plugin using the ServeUnix function call, which after starting creates a .sock in /run/docker/plugins
I also created a plugin using the ServeTCP function call, which creates a .spec file containing the target IP address in /var/lib/docker/
When starting a container using this command
docker run --log-driver=logdrivertcp hello-world
it says
docker: Error response from daemon: logger: no log driver named ‘logdrivertcp’ is registered.
Should this be working with Moby?
dev@dev-VirtualBox:/src/co/moby$ docker version
Client:
Version: 3.0.10+azure
API version: 1.40
Go version: go1.12.14
Git commit: 99c5edceb48d64c1aa5d09b8c9c499d431d98bb9
Built: Tue Nov 5 00:55:15 2019
OS/Arch: linux/amd64
Experimental: false
Server:
Engine:
Version: 3.0.10+azure
API version: 1.40 (minimum version 1.12)
Go version: go1.12.14
Git commit: ea84732a77
Built: Fri Jan 24 20:08:11 2020
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: v1.2.11
GitCommit: f772c10a585ced6be8f86e8c58c2b998412dd963
runc:
Version: 1.0.0-rc10
GitCommit: dc9208a3303feef5b3839f4323d9beb36df0a9dd
docker-init:
Version: 0.18.0
GitCommit: fec3683
dev@dev-VirtualBox:/src/co/moby$ uname -a
Linux dev-VirtualBox 5.3.0-40-generic #32~18.04.1-Ubuntu SMP Mon Feb 3 14:05:59 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux
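For what it's worth, the spec-file discovery paths in Docker's plugin-discovery documentation are /etc/docker/plugins and /usr/lib/docker/plugins, not /var/lib/docker/ (whether the Azure-flavored engine above behaves identically is an assumption). A minimal spec file would be (the address is a placeholder):

```
# /etc/docker/plugins/logdrivertcp.spec
tcp://192.168.1.10:8080
```

The base name of the .spec file is the plugin name the daemon registers, so it must match the --log-driver value.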
6 posts - 2 participants
]]>docker system df. Even after doing a complete prune by deleting all containers, images, volumes, networks, build cache, etc., there will be a huge amount left over in the overlay2 directory, presumably from artifacts that weren't cleaned up by an unknown culprit. Is there any way to identify which images "own" which overlay2 directories? Ideally docker system prune would remove "unowned" overlay2 layers. Currently I have to stop Docker, prune everything, move overlay2 to overlay2bak, start deleting it in the background, and then resume Docker. I have to do this about once a month. If this is the wrong forum I apologize; any help in redirecting me would be greatly appreciated.
ubuntu@ops:~$ sudo su
root@ops:/home/ubuntu# df -H
Filesystem Size Used Avail Use% Mounted on
udev 34G 0 34G 0% /dev
tmpfs 6.7G 1.3M 6.7G 1% /run
/dev/nvme3n1p1 90G 62G 25G 72% /
tmpfs 34G 8.2k 34G 1% /dev/shm
tmpfs 5.3M 0 5.3M 0% /run/lock
tmpfs 34G 0 34G 0% /sys/fs/cgroup
/dev/nvme0n1 79G 16G 60G 22% /mnt/elastic
/dev/nvme2n1 265G 58G 196G 23% /mnt/jenkins_workspace
/dev/nvme1n1 553G 222G 303G 43% /mnt/docker_storage
tmpfs 6.7G 0 6.7G 0% /run/user/113
/dev/loop2 94M 94M 0 100% /snap/core/8268
tmpfs 6.7G 0 6.7G 0% /run/user/1000
/dev/loop1 96M 96M 0 100% /snap/core/8592
overlay 553G 222G 303G 43% /mnt/docker_storage/overlay2/u5fzewgaok0j5jj7n8enbj12j/merged
overlay 553G 222G 303G 43% /mnt/docker_storage/overlay2/ucvj9iv4sf0709zqx4z26ri37/merged
overlay 553G 222G 303G 43% /mnt/docker_storage/overlay2/33nvb1g1ji6uovzea3rdls7ix/merged
root@ops:/home/ubuntu# DOCKER_BUILDKIT=1 docker system df
TYPE TOTAL ACTIVE SIZE RECLAIMABLE
Images 82 0 54.27GB 54.27GB (100%)
Containers 0 0 0B 0B
Local Volumes 0 0 0B 0B
Build Cache 408 48 622MB 0B
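One way to approximate which overlay2 directories are still referenced by images (a hedged sketch assuming the overlay2 storage driver; the inspect format fields are version-dependent):

```
docker image inspect --format '{{.GraphDriver.Data.UpperDir}}' $(docker images -q) | sort -u
```

Directories under overlay2/ whose IDs appear in none of the UpperDir/LowerDir chains reported this way are candidates for orphaned layers.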
14 posts - 5 participants
]]>I would like to have the following behavior: while I upgrade a service, I would like to remove a container from DNS (hide it from access) but stop or kill it with a delay, so it has some time to store its data and stop receiving new data.
1 post - 1 participant
]]>package main

import (
	"archive/tar"
	"bufio"
	"bytes"
	"context"
	"fmt"
	"io"
	"io/ioutil"
	"log"
	"os"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/client"
)

func main() {
	df, err := os.Open("docker/Dockerfile")
	if err != nil {
		panic(err)
	}
	defer df.Close()
	x, err := ioutil.ReadAll(df)
	if err != nil {
		log.Println(err)
		return
	}
	// Wrap the Dockerfile in an in-memory tar archive to use as build context.
	buf := new(bytes.Buffer)
	tw := tar.NewWriter(buf)
	defer tw.Close()
	tarHeader := &tar.Header{
		Name: "Dockerfile",
		Size: int64(len(x)),
	}
	err = tw.WriteHeader(tarHeader)
	if err != nil {
		log.Fatal(err, " :unable to write tar header")
	}
	_, err = tw.Write(x)
	if err != nil {
		log.Fatal(err, " :unable to write tar body")
	}
	dockerFileTarReader := bytes.NewReader(buf.Bytes())
	ctx := context.Background()
	cli, err := client.NewClientWithOpts(client.FromEnv)
	if err != nil {
		log.Println("Client err:", err)
		panic(err)
	}
	out := []types.ImageBuildOutput{
		{
			Type: "string",
			Attrs: map[string]string{
				"ID": "ID",
			},
		},
	}
	cli.NegotiateAPIVersion(ctx)
	resp, err := cli.ImageBuild(ctx, dockerFileTarReader, types.ImageBuildOptions{
		Tags:    []string{"ccxtrest/ccxt-rest:1.12.7"},
		Remove:  true,
		Outputs: out,
	})
	if err != nil {
		log.Print(err, " :unable to build docker image")
		return
	}
	defer resp.Body.Close()
	rd := bufio.NewReader(resp.Body)
	for {
		n, _, err := rd.ReadLine()
		if err != nil && err == io.EOF {
			break
		} else if err != nil {
			log.Println(err)
			return
		}
		fmt.Println(string(n))
	}
}
It works well and builds the image, but if there is something wrong in the Dockerfile, e.g. copying a file that doesn't exist, it doesn't return an error; instead the output stream (stdout) prints the error.
I want to know whether there is any way to check whether the image build succeeded or failed, and, if it succeeded, to get the image ID.
1 post - 1 participant
]]>The yum repo used in Kubespray now returns 403.
url:
https://yum.dockerproject.org/repo/main/centos/7
Please fix it.
Best regards,
Victor Yagofarov
4 posts - 3 participants
]]>