HaoyuCui/WSI_Segmenter

WSI Segmenter

A tool for tile-level tumor-area or ROI segmentation. Works with QuPath (version >= 0.4.2). The model uses the DeepLabV3 architecture with a pretrained ResNet backbone.

Updates

  1. [03/2025] The repository currently only supports training your own tumor segmentation model. We will open-source the weights trained on a public dataset (to avoid possible privacy violations) in two months; stay tuned!

  2. [05/2025] We have open-sourced the training code and weights based on the public tsr-crc dataset. Details can be found in inference.py and eg\train_tsr-crc.py. Please extract patches at 20x magnification or higher (256x256 recommended) for better performance.

Usage

  1. Run pip install -r requirements.txt to install the dependencies.

  2. Run make_tile_mask_pairs.groovy in QuPath to generate tile-mask pairs. Details in the image.sc discussion.

Please ensure you have created the annotations in QuPath and specified your parameters in the script.

(QuPath annotation screenshot)

  3. After running the step above, you will have the following folder structure:
├── /PATH/TO/DATA
│   ├── slide_1
│   │   ├── patch_1.jpeg  # tile
│   │   ├── patch_1.png  # mask
│   │   ├── ...
│   ├── slide_2
│   │   ├── patch_1.jpeg  # tile
│   │   ├── patch_1.png  # mask
│   │   ├── ...
│   ├── ...
│   └── slide_n
│       ├── ...
│       └── patch_n.png

The example structure can be found in the eg folder.
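As a sketch of how this layout can be consumed (the repository's own loaders may differ; collect_pairs is a hypothetical helper, not part of the repo), the tile-mask pairs can be gathered per slide like so:

```python
from pathlib import Path

def collect_pairs(data_dir):
    """Collect (tile, mask) path pairs per slide folder.

    Assumes the convention above: each slide folder holds patch_*.jpeg
    tiles with a same-stem patch_*.png mask next to each one.
    """
    pairs = {}
    for slide_dir in sorted(Path(data_dir).iterdir()):
        if not slide_dir.is_dir():
            continue
        slide_pairs = []
        for tile in sorted(slide_dir.glob("*.jpeg")):
            mask = tile.with_suffix(".png")
            if mask.exists():  # skip tiles whose mask was not exported
                slide_pairs.append((tile, mask))
        pairs[slide_dir.name] = slide_pairs
    return pairs
```

Pairing by stem rather than by listing order keeps tiles and masks aligned even if some masks are missing.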

Suffix:

  • jpeg for tile
  • png for mask

(example images: tile-mask pairs a.jpeg/a.png, b.jpeg/b.png, c.jpeg/c.png)
  4. Run the following commands to train and visualize the training process:
python -m visdom.server
python train.py --data_dir /PATH/TO/DATA --epochs 20

The prediction (top) and ground truth (bottom) will be shown and refreshed in the Visdom server.

(Visdom screenshot)
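For reference, a minimal sketch of the command-line interface implied by the train.py invocation above. Only --data_dir and --epochs appear in this README; the default value here is an assumption, and the actual script may accept more flags:

```python
import argparse

def build_parser():
    # mirrors: python train.py --data_dir /PATH/TO/DATA --epochs 20
    parser = argparse.ArgumentParser(description="Train the WSI tile segmenter")
    parser.add_argument("--data_dir", required=True,
                        help="root folder with slide_*/patch_*.jpeg|png pairs")
    parser.add_argument("--epochs", type=int, default=20,
                        help="number of training epochs (assumed default)")
    return parser

args = build_parser().parse_args(["--data_dir", "/PATH/TO/DATA", "--epochs", "20"])
```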

  5. Or use the pre-trained model to predict the masks for your own data. The model is trained on the tsr-crc dataset.

    [Google Drive] Download the weights and place them in the checkpoints folder.

  6. You can also integrate this with your own patch extraction code.

import torch
import cv2
from torchvision import transforms
from torchvision.models.segmentation import deeplabv3_resnet50

def tile_contains_tumor(img, seg_model, device, threshold_tumor=0.5):
    # img is a BGR numpy array (e.g. as read by cv2.imread)
    img = cv2.resize(img, (256, 256))  # we strongly recommend extracting patches at 20x or higher
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
    # to tensor, add a batch dimension
    tile = transforms.ToTensor()(img).unsqueeze(0).to(device)
    with torch.no_grad():
        output = torch.sigmoid(seg_model(tile)['out']).mean()
    # the mean sigmoid over the mask is the predicted tumor fraction of the tile
    # (assumes the mask labels tumor as 1)
    return output.item() >= threshold_tumor

def TileExporter():
    # your patch extraction code here; should return an iterable of images
    return []

if __name__ == "__main__":
    # load the model
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    model = deeplabv3_resnet50(num_classes=1)
    model.load_state_dict(torch.load('checkpoints/tsr_crc.pt', map_location=device))
    model = model.to(device)
    model.eval()
    # extract patches
    tiles = TileExporter()
    for tile in tiles:
        if tile_contains_tumor(tile, model, device):
            pass  # e.g. keep the tile
        else:
            pass  # e.g. discard the tile
    # save the result
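The thresholding step in tile_contains_tumor reduces to comparing the mean per-pixel probability (the tile's predicted tumor fraction) against threshold_tumor. A framework-free sketch of that decision, assuming the mask labels tumor as 1 (tumor_fraction and contains_tumor are hypothetical helpers for illustration):

```python
def tumor_fraction(probs):
    # probs: flat list of per-pixel sigmoid outputs in [0, 1]
    return sum(probs) / len(probs)

def contains_tumor(probs, threshold_tumor=0.5):
    # a tile "contains tumor" when, on average, at least threshold_tumor
    # of its pixels are predicted as tumor
    return tumor_fraction(probs) >= threshold_tumor
```

Raising threshold_tumor makes the filter stricter, keeping only tiles that are mostly tumor.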
