Vision-and-Language Navigation: Interpreting visually-grounded navigation instructions in real environments
A robot that can carry out a natural-language instruction has been a dream since before the Jetsons cartoon series imagined a life of leisure mediated by a fleet of attentive robot helpers. It is a dream that remains stubbornly distant. However, recent advances…
arxiv.org
Attachment: 180808-[1]Vision-and-Language Navigation.pptx (4.02MB)
These slides were not made from a deep reading of the paper, so please treat them as a reference only. They were put together with a shallow understanding, focusing only on the core of the model, so a lot of material is missing. If you do use them, please leave a comment.