BabyWalk: Going Farther in Vision-and-Language Navigation by Taking Baby Steps
Abstract excerpt (arxiv.org): "Learning to follow instructions is of fundamental importance to autonomous agents for vision-and-language navigation (VLN). In this paper, we study how an agent can navigate long paths when learning from a corpus that consists of shorter ones. We show that…"
Attachment: [44]BabyWalk_Going Farther in Vision-and-Language Navigation.pptx (0.95MB)
This material was not made from a close reading of the paper, so please treat it as a reference only. Since it captures only the core of the model from a shallow understanding, a lot of content is missing. If you do use it, please leave a comment.
Other posts in the 'Paper Reading > Vision and Language Navigation (VLN)' category:

| Post | Date |
|---|---|
| Chasing Ghosts: Instruction Following as Bayesian State Tracking | 2020.08.18 |
| Improving Vision-and-Language Navigation with Image-Text Pairs from the Web | 2020.08.18 |
| Vision-Dialog Navigation by Exploring Cross-modal Memory | 2020.08.18 |
| VALAN: Vision and Language Agent Navigation | 2020.08.18 |
| Cross-Lingual Vision-Language Navigation | 2020.08.18 |