Test codes for Azure nnU-Net pipeline inference
Build the Docker image:

docker build -t nnunet_local_inference_env .

Run the container:

docker run -v ${PWD}:/app -p 5001:5001 nnunet_local_inference_env

Note: if you are running from Docker Desktop, don't forget to mount the project root directory to the container's /app directory.
Quick note: the workflow was tested with Python 3.10.6.
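The repository's actual Dockerfile is not shown here, but a minimal sketch of what it might look like, assuming the directory layout described below (requirements.txt at the root and score.py under server/), is:

```dockerfile
# Hypothetical Dockerfile sketch; the base image, paths, and entry script
# location are assumptions, not the repository's actual file.
FROM python:3.10.6-slim

WORKDIR /app

# Install the local inference server and project dependencies
COPY requirements.txt .
RUN python -m pip install azureml-inference-server-http && \
    pip3 install -r requirements.txt

COPY . .

EXPOSE 5001
CMD ["azmlinfsrv", "--entry_script", "server/score.py"]
```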
Clone this repository:

git clone https://github.com/tekmen0/azure-inference-test.git

Move into the repository folder:

cd azure-inference-test
(Optional) Activate a Python virtual environment.
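If you don't have one yet, a virtual environment can be created and activated like this (the `.venv` directory name is an arbitrary choice):

```shell
# Create an isolated environment in .venv and activate it
python3 -m venv .venv
. .venv/bin/activate
# The interpreter and pip now resolve inside .venv
python --version
```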
Install the inference test server:

python -m pip install azureml-inference-server-http
Install the requirements:

pip3 install -r requirements.txt
Move to the server directory:

cd server
Start the inference server:
azmlinfsrv --entry_script score.py
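The entry script passed to azmlinfsrv must define an init() function (called once at startup) and a run() function (called for every scoring request). The repository's real score.py loads the nnU-Net model; a minimal placeholder sketch of that contract looks like:

```python
# Hypothetical minimal entry script for azmlinfsrv; the repository's actual
# score.py performs nnU-Net inference instead of echoing the input.
import json

model = None

def init():
    # Called once when the server starts; load the model here.
    global model
    model = "placeholder-model"

def run(raw_data):
    # Called on every POST to /score; raw_data is the raw request body.
    data = json.loads(raw_data)
    return {"model": model, "received": data}
```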
The scoring script is now deployed locally, running at 127.0.0.1:5001.
Send requests to the endpoint 127.0.0.1:5001/score to execute the 'run' function in score.py.
An example request written in Python can be found in request.py; you can also use curl or Postman.
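The exact contents of request.py are not shown here, but a standard-library sketch of such a request, assuming the server expects a JSON body, could look like:

```python
# Hypothetical scoring request; the payload shape is an assumption and
# must match whatever score.py's run() function expects.
import json
import urllib.request

SCORING_URI = "http://127.0.0.1:5001/score"

def build_request(payload: dict) -> urllib.request.Request:
    """Package a JSON payload as a POST request for the local server."""
    body = json.dumps(payload).encode("utf-8")
    return urllib.request.Request(
        SCORING_URI,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

if __name__ == "__main__":
    req = build_request({"data": "example-input"})
    with urllib.request.urlopen(req) as resp:
        print(resp.read().decode("utf-8"))
```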