Hi all,
Thanks very much for your great repo. I have a scenario that I can't figure out from the documentation alone. I have a locally fine-tuned Qwen-VL model that can be loaded with transformers.AutoModelForVision2Seq.from_pretrained, and a HallusionBench dataset (in which I've modified some of the images) that can be loaded with datasets.load_from_disk. How can I evaluate the model on my local, modified HallusionBench dataset? Also, is setting CUDA_VISIBLE_DEVICES enough to select which GPUs are used, and how is the batch size determined during evaluation?
Thanks in advance for your help!
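For reference, this is roughly what I do now to restrict GPUs, assuming the variable only needs to be set before any CUDA-aware library is imported (the "0,1" below is just a placeholder for the GPU ids I actually use):

```python
import os

# Set the visible GPUs before importing torch/transformers;
# CUDA frameworks read this variable once, at initialization.
os.environ["CUDA_VISIBLE_DEVICES"] = "0,1"  # placeholder GPU ids

# Inside the process the visible devices are renumbered from 0,
# so physical GPU 0 becomes cuda:0 and physical GPU 1 becomes cuda:1.
visible = os.environ["CUDA_VISIBLE_DEVICES"].split(",")
print(visible)  # ['0', '1']
```

Is this sufficient, or does the evaluation framework need its own GPU/device argument as well?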