Note: images and annotations in this tutorial are modified from the COCO dataset.
- update your forked repo from my repo (ref)
- following 01_git, create a new branch `LAST#_07coco` (ex: `pan667_07coco`) in your forked repo
- activate the environment you created in 03_conda
- read and understand what the COCO format is
- download 2017 Train/Val annotations here and use Python to load `instances_val2017.json` (hint: use the `json` package)
- explore the data, for example checking keys and data types, to see if you thoroughly understand the format (hint: you can cross-validate with the COCO format doc)
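A minimal sketch of the loading-and-exploring step. The file below is a tiny synthetic stand-in for `instances_val2017.json` (so the snippet runs without the download); loading the real file works the same way, just with a different path.

```python
import json

# Synthetic stand-in with the same top-level structure as instances_val2017.json.
sample = {
    "info": {"description": "toy example"},
    "licenses": [],
    "images": [{"id": 1, "file_name": "000000000139.jpg", "width": 640, "height": 426}],
    "annotations": [
        {"id": 10, "image_id": 1, "category_id": 64,
         "bbox": [236.98, 142.51, 24.7, 69.5], "area": 1035.5, "iscrowd": 0}
    ],
    "categories": [{"id": 64, "name": "potted plant", "supercategory": "furniture"}],
}
with open("toy_instances.json", "w") as f:
    json.dump(sample, f)

# For the real file, point this at your extracted annotations folder.
with open("toy_instances.json") as f:
    coco = json.load(f)

print(sorted(coco))                    # the five top-level COCO keys
print(coco["annotations"][0]["bbox"])  # COCO bbox is [x, y, width, height]
```

Checking `type()` and `.keys()` of each section the same way is a quick way to cross-validate against the format doc.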
- use Python to load `annotations.txt` and write a `LAST#.py` to convert it to COCO format `LAST#.json` (hint: use the `with open` syntax to load the txt and read lines, and use the `json` package to save the result file)
  - don't need to care about `info`
  - for images, ignore `license`, `flickr_url`, `coco_url`, and `date_captured`
  - for annotations, use RLE format for segmentation (hint: use the [official tools](https://github.com/cocodataset/cocoapi/blob/master/PythonAPI/pycocotools/mask.py), which should already be installed with detectron2), find `area` and `bbox` yourself, and simply set `iscrowd` to 0
  - for categories, simply copy from `instances_val2017.json`
  - don't need to care about `licenses`
- run `python demo.py -f pan667.json -r ~/PerceptionTutorials/07_COCO/` (replace with your path) to plot a demo; you should see a `demo.jpg` looking like the following:

- (NOT REQUIRED) document your code with Google docstring style (ref: 05_docstring)
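For reference, a Google-style docstring looks like this. The helper `to_coco_bbox` is hypothetical, purely to illustrate the `Args`/`Returns` sections:

```python
def to_coco_bbox(x_min, y_min, x_max, y_max):
    """Convert corner coordinates to a COCO-style bounding box.

    Args:
        x_min (float): Left edge of the box.
        y_min (float): Top edge of the box.
        x_max (float): Right edge of the box.
        y_max (float): Bottom edge of the box.

    Returns:
        list[float]: The box as ``[x, y, width, height]``, the format COCO uses.
    """
    return [x_min, y_min, x_max - x_min, y_max - y_min]

print(to_coco_bbox(300, 100, 400, 200))  # [300, 100, 100, 100]
```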
- run `black` with line length 100, `isort`, and `flake8` on `LAST#.py`
- move `LAST#.py` and `LAST#.json` to `submissions`
- stage changes (DO NOT ADD `demo.jpg`), commit with the message "learning coco", push, and submit a PR