Submitting to the leaderboard

To submit your method to the leaderboard, email okvqa.comm [at] gmail [dot] com and include (1) your OK-VQA test results output file, (2) a name for the method, (3) a GitHub repo or paper link, and (4) your institution.
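
Since OK-VQA follows the VQA setup, the results output file is expected to be in the standard VQA results format: a single JSON list with one {"question_id", "answer"} record per test question. A minimal sketch of producing such a file (the question IDs, answers, and filename below are placeholders, not real data):

```python
import json

# Hypothetical model predictions for illustration: question_id -> answer string.
predictions = {
    2971475: "race track",
    4195880: "skateboarding",
}

# VQA-style results format: a JSON list of {"question_id": int, "answer": str}
# objects, one per test question.
results = [{"question_id": qid, "answer": ans} for qid, ans in predictions.items()]

with open("okvqa_test_results.json", "w") as f:
    json.dump(results, f)
```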


Evaluation

We follow the same evaluation format as VQA. See the instructions in the VQA README and use the official VQA evaluation code.
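
For reference, the VQA metric gives an answer soft credit of min(#matching human answers / 3, 1), averaged over all leave-one-out subsets of the human annotations. A minimal sketch of that computation for a single question (not a replacement for the official evaluation script, which also normalizes answers before matching):

```python
def vqa_accuracy(predicted: str, human_answers: list[str]) -> float:
    """VQA-style soft accuracy for one question, assuming answers are
    already normalized (lowercased, punctuation stripped) as the official
    evaluation code does before comparing strings."""
    accs = []
    for i in range(len(human_answers)):
        # Leave annotator i out, count matches among the remaining answers.
        others = human_answers[:i] + human_answers[i + 1:]
        matches = sum(a == predicted for a in others)
        # An answer given by >= 3 remaining annotators gets full credit.
        accs.append(min(matches / 3.0, 1.0))
    return sum(accs) / len(accs)

# Example: 2 of 10 annotators gave the predicted answer -> prints 0.6.
print(vqa_accuracy("race track", ["race track"] * 2 + ["track"] * 8))
```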

OK-VQA Leaderboard

Rank  Model        Institution                                                     Overall Accuracy (%)
1     Prophet      HDU & HFUT                                                      61.11
2     PromptCap    UW, Rochester, Microsoft, AI2                                   60.4
3     REVIVE       Microsoft & University of Washington                            58.0
4     KAT          Microsoft, CMU & Yale                                           54.41
5     PICa         Microsoft                                                       48.0
6     CBM          Hitz Center, UPV                                                47.9
7     MCAN         Hangzhou Dianzi University                                      44.65
8     VLC-BERT     University of British Columbia (UBC), Vector Institute for AI   43.14
9     UnifER       NUS                                                             42.13
10    MAVEx        UT Austin & AI2                                                 41.37
11    KRISP        FAIR & CMU                                                      38.90
12    ConceptBERT  Ecole Polytechnique                                             33.66
13    MUTAN + AN   AI2 and CMU                                                     27.84
14    MUTAN                                                                        26.41
15    BAN + AN     AI2 and CMU                                                     25.61
16    BAN                                                                          25.17
17    MLP                                                                          20.67
18    Q only                                                                       14.93