
Evaluation metrics and code #11

@peterwilson3912

Description


I was trying to evaluate a model after training, and I noticed that the ground truth labels for the test dataset have not been released.

In the evaluation code provided at https://mmcheng.net/videosal/, I found this comment: "if the ground truth cannot be found, e.g. testing data, the central gaussian will be taken as ground truth automatically."

However, the actual code is:

if exist(saliency_path, 'file')
    I = double(imread(saliency_path)) / 255;
    allMetrics(i) = fh(result, I);
else
    allMetrics(i) = nan;
end

Then, at the end:

allMetrics(isnan(allMetrics)) = [];
meanMetric = mean(allMetrics);

So for the test set without ground truth, the frames are simply skipped rather than compared against a central Gaussian. I'm wondering how the "central gaussian" mentioned in the comment is supposed to be generated.
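In case it is useful for discussion, a center-prior map is usually just a 2D Gaussian peaked at the frame center. Below is a minimal sketch of that idea in Python (the toolbox itself is MATLAB, and its exact sigma choice is not documented, so `sigma_frac` here is a hypothetical parameter, not the toolbox's value):

```python
import math

def central_gaussian(height, width, sigma_frac=0.25):
    """Build a 2D Gaussian map centered on the frame.

    sigma_frac is a hypothetical parameter: the standard deviation
    expressed as a fraction of each image dimension.
    """
    cy, cx = (height - 1) / 2.0, (width - 1) / 2.0
    sy, sx = sigma_frac * height, sigma_frac * width
    # Separable Gaussian: exp(-(dy^2/sy^2 + dx^2/sx^2) / 2), peak 1.0 at center
    return [
        [
            math.exp(-(((y - cy) / sy) ** 2 + ((x - cx) / sx) ** 2) / 2.0)
            for x in range(width)
        ]
        for y in range(height)
    ]

g = central_gaussian(9, 9)
print(g[4][4])  # center pixel has the maximum value, 1.0
```

Whether the official evaluation would normalize this map or pick a different sigma, I don't know; that would need confirmation from the authors.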

Another question: are the numbers listed on the leaderboard at https://mmcheng.net/videosal/ computed on the validation set or the test set?

Thanks a lot for your help!
