Nils Brinkmann (959ad101) at 12 Jan 12:53
Added docker:dind service
Nils Brinkmann (dec94789) at 12 Jan 12:51
Build in proper docker environment
Nils Brinkmann (75d53a06) at 12 Jan 12:48
Added CI config to build and push the image
Nils Brinkmann (b8ebb50c) at 12 Jan 12:13
Initial commit
With the Pypi implementation there are special branches of the API spec using {CGI.escape(group.full_path)} as the ID. Would it be enough to add those to the Nuget implementation as well?
I haven't worked with Ruby yet, but I guess this part is filtering out any attempts at using the (URL-encoded) project path as a project ID.
Tagging @marcel.amirault and @sabrams as they seem to have been working on that page. Perhaps you can provide some insight?
I just read that the docs for the Nuget API state that it's possible to provide "ID or full path of the project". Source
Unfortunately it does not contain any examples thereof. Is the documentation wrong, or am I trying the wrong values? Can someone please provide an example?
I'm currently trying with GitLab v16.10.5. Docs haven't changed since then, so I figure it should be working with that version as well?
Nils Brinkmann (6c89a5d9) at 10 Feb 20:11
Latest version
Sorry, I'll give it another try. Example is simplified, but I hope my point is clear:
Scenario:

- We have a provide_dependencies-file which includes a provide_dependencies-job
- Users include that file and need the provide_dependencies-job
- The provide_dependencies-job depends on many other jobs. Each of them gathers a set of dependencies and provides them as artifacts

This works perfectly well for job resolution:

- Users only need the provide_dependencies-job. GitLab runs all the upstream jobs automatically within the pipeline.
- If we change the upstream jobs, users can keep needing the provide_dependencies-job as it is.

In case of job resolution the graph looks like this:
```mermaid
graph LR;
  subgraph Devops Team
    provide_dependencies-- needs -->gather_dep_a;
    provide_dependencies-- needs -->gather_dep_b;
    provide_dependencies-- needs -->gather_dep_c;
  end
  subgraph User
    user_job-- needs -->provide_dependencies;
  end
```
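The setup above can be sketched in CI YAML. This is a minimal illustration, not our actual config: the job names are taken from the graphs, while the file location, scripts, and artifact paths are invented for the example:

```yaml
# provide_dependencies.yml — centrally managed by the Devops team (names illustrative)
gather_dep_a:
  script: ./gather.sh a          # hypothetical helper script
  artifacts:
    paths: [deps/a/]

gather_dep_b:
  script: ./gather.sh b
  artifacts:
    paths: [deps/b/]

gather_dep_c:
  script: ./gather.sh c
  artifacts:
    paths: [deps/c/]

provide_dependencies:
  needs: [gather_dep_a, gather_dep_b, gather_dep_c]
  script: echo "dependencies gathered"
```

```yaml
# .gitlab-ci.yml on the user's side
include:
  - project: devops/ci-templates   # hypothetical location of the template
    file: provide_dependencies.yml

user_job:
  needs: [provide_dependencies]    # the gather_dep_* jobs are scheduled automatically
  script: ./build.sh
```

For the job graph this is all a user has to write: GitLab resolves the indirect dependencies itself.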
This does not work for artifacts:
- The user has to add a need for every job that is running under the hood

In case of artifact resolution the graph looks like this:
```mermaid
graph LR;
  subgraph Devops Team
    provide_dependencies-->gather_dep_a;
    provide_dependencies-->gather_dep_b;
    provide_dependencies-->gather_dep_c;
  end
  subgraph User
    user_job-->provide_dependencies;
    user_job-->gather_dep_a;
    user_job-->gather_dep_b;
    user_job-->gather_dep_c;
  end
```
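Concretely, because needs only downloads artifacts of directly needed jobs, today the user's job has to spell out every upstream job to actually receive the gathered artifacts (a sketch, reusing the job names from the graphs):

```yaml
user_job:
  needs:
    - job: provide_dependencies
    - job: gather_dep_a   # each indirect dependency must be named explicitly
    - job: gather_dep_b   #   for its artifacts to be downloaded
    - job: gather_dep_c
  script: ./build.sh
```

This is exactly the knowledge of pipeline internals that the central template was supposed to hide.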
My point is that the needs-keyword covers both jobs and artifacts, but unfortunately it does not work in the same way for both. While jobs are resolved dynamically, we need to explicitly need any artifacts we'd like to consume. With things being handled in different ways, we can't enjoy the advantages of either approach but get the disadvantages of both.
I think this is a valid issue as needs has two different ways to work when it comes to jobs and artifacts:
As long as I don't care about artifacts I can use all the features of needs. GitLab will create a DAG for me and I don't need to worry about indirect dependencies between jobs.
As soon as I'd like to consume artifacts from indirect dependencies I need to name them explicitly. All of a sudden I have a mix of DAG and "name everything" in my pipeline.
I propose to also resolve artifacts in the same way as jobs: If I need a job I get the artifacts from its upstream jobs as well. Perhaps this can be implemented with a new keyword so this isn't a breaking change?
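One possible shape for such a keyword, purely hypothetical syntax and not existing GitLab functionality (transitive_artifacts is an invented name used only to illustrate the proposal):

```yaml
user_job:
  needs:
    - job: provide_dependencies
      # hypothetical option: also download artifacts from the needed job's
      # own (transitive) needs, resolved the same way the job graph is
      transitive_artifacts: true
  script: ./build.sh
```

Since the default behavior stays untouched unless the new option is set, this would not be a breaking change.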
Usecase:
We're trying to use needs in combination with include to help us build flexible, centrally managed pipelines. A user does not need to know about the internals of the pipeline he's including, all he needs to know is that he'll get specific artifacts within his job.
With stages this worked fine, but needs does not provide access to the indirect dependencies of your job. Therefore users would have to list every exact job in their needs, which isn't possible because with includes these jobs can change in the future.
@furkanayhan and @dhershkovitch what do you say?
Similar to other package registries like Pypi, it would be useful if the Nuget registry URL could also be used with a URL-encoded path in addition to the project ID:

Pypi (both forms work today):
- https://gitlab.example.com/api/v4/projects/1337/packages/pypi
- https://gitlab.example.com/api/v4/projects/group%2Fproject/packages/pypi

Nuget (requested):
- https://gitlab.example.com/api/v4/projects/1337/packages/nuget/index.json
- https://gitlab.example.com/api/v4/projects/group%2Fproject/packages/nuget/index.json
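For reference, a sketch of how the two forms would be used from a CI job. CI_API_V4_URL and CI_PROJECT_ID are predefined GitLab CI variables; the job name and image are illustrative:

```yaml
push_package:
  image: mcr.microsoft.com/dotnet/sdk:8.0
  script:
    # Works today: addressing the project by numeric ID
    - dotnet nuget add source "${CI_API_V4_URL}/projects/${CI_PROJECT_ID}/packages/nuget/index.json" --name gitlab
    # Requested by this issue: addressing the project by URL-encoded full path
    # - dotnet nuget add source "${CI_API_V4_URL}/projects/group%2Fproject/packages/nuget/index.json" --name gitlab-by-path
```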
This issue is about supporting Chocolatey <2.0, isn't it? Because with Chocolatey 2.0+ they added support for Nuget v3, and thus I can simply use Choco with the endpoints GitLab provides (even back in GitLab 15.x).
Not sure how I can help here... I simply upgraded Choco to 2.0+ within our infrastructure and it works fine since then.
Late reply, I first wanted to make sure that it runs smoothly again: I found many errors from alertmanager within the logs, all along these lines:
2023-07-21_10:10:08.98888 level=error ts=2023-07-21T10:10:08.988Z caller=main.go:250 msg="unable to initialize gossip mesh" err="create memberlist: Failed to get final advertise address: No private IP address found, and explicit IP not provided"
I deactivated monitoring by changing the following in our /etc/gitlab/gitlab.rb file: prometheus_monitoring['enable'] = false
After that I did a sudo gitlab-ctl reconfigure and GitLab came up without any hiccups. I can't say for sure that the above error messages weren't there before, and I'm not certain whether deactivating monitoring really addressed the root cause or whether something related simply fell into place during the reconfigure.
In the next weeks I'll introduce FluxCD to our cluster so we can move on from the GitLab k8s integration. When that's done I can finally update GitLab to the newest version again. Let's see if I can reactivate monitoring then (perhaps I'll leave it off anyway, as we've got our own Grafana-Prometheus stack running and I can just integrate the related endpoints into that config).
Some more issues from failing job logs:
Created fresh repository.
fatal: unable to access 'http://REDACTED/REDACTED.git/': Error while processing content unencoding: incorrect header check
caught error of type Gitlab::Git::CommandError in after callback inside Grape::Middleware::Formatter : 2:GitCommand: start [/var/opt/gitlab/gitaly/run/gitaly-20181/git-exec-2156390791.d/git --git-dir /repo/git-data/repositories/@hashed/ea/2c/ea2c89be738f88dc66d4d88f4448a99df5f2f85bd94158fc29ed46fd481dcf34.git -c core.fsyncObjectFiles=true -c gc.auto=0 -c core.autocrlf=input -c core.useReplaceRefs=false cat-file --batch --buffer --end-of-options]: fork/exec /var/opt/gitlab/gitaly/run/gitaly-20181/git-exec-2156390791.d/git: resource temporarily unavailable.
I'm leaning towards us having a network issue.