Improving Vision-and-Language Navigation with Image-Text Pairs from the Web
"Following a navigation instruction such as 'Walk down the stairs and stop at the brown sofa' requires embodied AI agents to ground scene elements referenced via language (e.g. 'stairs') to visual content in the environment (pixels corresponding to 'stairs')…"
arxiv.org
This material was not produced from a deep reading of the paper, so please treat it only as a reference. It was put together with a shallow understanding, focusing mainly on the core of the model, so quite a lot of content is missing. If you do make use of it, please leave a comment.