Image captioning based on large language models (LLMs) can describe objects not explicitly observed
in the training data; yet novel objects occur frequently, making it necessary to
maintain up-to-date object knowledge for open-world comprehension.
Instead of relying on large amounts of data and scaling up network parameters,
we introduce a highly effective retrieval-augmented image captioning method that prompts LLMs
with object names retrieved from an External Visual--name memory (EVCap).
We build an ever-changing object knowledge memory from objects' visuals and names, enabling us to
(i) update the memory at minimal cost and
(ii) effortlessly augment LLMs with retrieved object names using a lightweight, fast-to-train model.
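As a rough illustration only, and not the authors' implementation, the retrieval-augmented prompting idea can be sketched as follows; the embedding dimension, memory entries, similarity measure, and prompt template here are all hypothetical placeholders:

```python
# Minimal sketch: retrieve object names from an external visual--name memory by
# nearest-neighbor search over image embeddings, then prepend the retrieved
# names to the prompt given to a frozen LLM. All values below are illustrative.
import numpy as np

def retrieve_object_names(query_emb: np.ndarray,
                          memory_embs: np.ndarray,
                          memory_names: list[str],
                          top_k: int = 3) -> list[str]:
    """Return the names of the top-k memory entries most similar to the query."""
    # Cosine similarity between the query and every stored object visual embedding.
    q = query_emb / np.linalg.norm(query_emb)
    m = memory_embs / np.linalg.norm(memory_embs, axis=1, keepdims=True)
    sims = m @ q
    top = np.argsort(-sims)[:top_k]
    return [memory_names[i] for i in top]

# Toy memory: adding a new object is just appending one embedding--name pair,
# which is why updating such a memory is cheap.
memory_embs = np.random.randn(5, 512).astype(np.float32)
memory_names = ["corgi", "espresso machine", "skateboard", "violin", "drone"]

query_emb = np.random.randn(512).astype(np.float32)  # placeholder image embedding
names = retrieve_object_names(query_emb, memory_embs, memory_names)

# The retrieved names augment the captioning prompt fed to the frozen LLM.
prompt = f"Objects that may appear in the image: {', '.join(names)}. Describe the image."
print(prompt)
```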
Our model, which was trained only on the COCO dataset,
can adapt to out-of-domain data without additional fine-tuning or re-training.
Our comprehensive experiments on various benchmarks and synthetic commonsense-violating data demonstrate that EVCap,
with only 3.97M trainable parameters, outperforms other methods of comparable model size.
Notably, it achieves performance competitive with that of specialist SOTA models that have an enormous number of parameters.