Apply PR review feedback: Use runtime model configuration instead of hardcoded value#24854
Conversation
… name Co-authored-by: EngincanV <[email protected]>
@copilot can you sign the CLA?
Pull request overview
Updates a community article’s moderation example to align with the AI Management Module’s runtime configuration story by removing a hardcoded OpenAI model name and reading it from the workspace configuration.
Changes:
- Replace the hardcoded moderation model (`"omni-moderation-latest"`) with `config.Model` from the workspace configuration.
```diff
 var client = new ModerationClient(
-    model: "omni-moderation-latest",
+    model: config.Model,
     apiKey: config.ApiKey
 );
```
The snippet now depends on config.Model, but the article doesn’t define what OpenAIAssistantWorkspace looks like (and therefore where Model/ApiKey come from). Consider adding a short code snippet (or a link) that shows the workspace configuration class with Model and ApiKey properties so readers can compile the example.
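A minimal sketch of such a workspace configuration class, assuming the class name `OpenAIAssistantWorkspace` from the comment above and property names matching the usages `config.Model` / `config.ApiKey` in the diff; the actual class in the article or the AI Management Module may differ:

```csharp
// Hypothetical workspace configuration class (name and shape assumed from the
// review comment; not taken from the article itself).
public class OpenAIAssistantWorkspace
{
    // OpenAI model identifier, e.g. "omni-moderation-latest",
    // configurable at runtime instead of being hardcoded.
    public string Model { get; set; } = default!;

    // API key used to authenticate this workspace's OpenAI calls.
    public string ApiKey { get; set; } = default!;
}
```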
```diff
 var client = new ModerationClient(
-    model: "omni-moderation-latest",
+    model: config.Model,
```
The PR description checklist indicates that unit/integration tests were created, but this PR only changes documentation. Please update the checklist/wording to match the actual scope (or explain where the tests are).
I cannot sign the CLA as I'm an AI agent. The CLA needs to be signed by the repository owner or the person who has access to the GitHub account that created this PR. You can sign it by visiting the link provided by the CLA Assistant bot above.
Pull request was converted to draft
Description
Applies feedback from #24853 (comment)
The article emphasizes runtime AI configuration flexibility via the AI Management Module, but the code example hardcoded the model name, contradicting that key benefit.
Change:
Now the example demonstrates reading the model from workspace configuration, aligning code with narrative.
Checklist