Details

Advances in Multimodal Information Retrieval and Generation


Synthesis Lectures on Computer Vision

By: Man Luo, Tejas Gokhale, Neeraj Varshney, Yezhou Yang, Chitta Baral

€58.84

Publisher: Springer
Format: PDF
Published: 25 June 2024
ISBN/EAN: 9783031578168
Language: English
Pages: 150

This eBook contains a watermark.

Description

This book provides an extensive examination of state-of-the-art methods in multimodal retrieval, generation, and the pioneering field of retrieval-augmented generation. The work is rooted in the domain of Transformer-based models, exploring the complexities of blending and interpreting the intricate connections between text and images. The authors present cutting-edge theories, methodologies, and frameworks dedicated to multimodal retrieval and generation, aiming to furnish readers with a comprehensive understanding of the current state and future prospects of multimodal AI. As such, the book is a crucial resource for anyone interested in delving into the intricacies of multimodal retrieval and generation. Serving as a bridge to mastering and leveraging advanced AI technologies in this field, the book is designed for students, researchers, practitioners, and AI aficionados alike, offering the tools needed to expand the horizons of what can be achieved in multimodal artificial intelligence.
Contents:
Preface
Motivation and Background
Review: Methods for Information Retrieval under Single Modality Setting
Text IR
Image IR
Audio IR
Review: Multimodal Representation Learning
Evaluation Methods
Information Retrieval for Multi-modality Setting
Conclusions and Future Directions
Man Luo, Ph.D., is a Research Fellow at Mayo Clinic, Arizona. She received her Ph.D. from ASU in 2023. Her research interests lie in Natural Language Processing (NLP) and Computer Vision (CV), with a specific focus on open-domain information retrieval under multi-modality settings and retrieval-augmented generation models. She has published first-author papers at top conferences such as AAAI, ACL, and EMNLP. She serves as a guest editor of the PLOS Digital Medicine journal and has served as a reviewer for the AAAI, IROS, EMNLP, NAACL, and ACL conferences. Dr. Luo is an organizer of the ODRUM workshops at CVPR 2022 and CVPR 2023 and the Multimodal4Health workshop at ICHI 2024.

Tejas Gokhale, Ph.D., is an Assistant Professor at the University of Maryland, Baltimore County. He received his Ph.D. from Arizona State University in 2023, an M.S. from Carnegie Mellon University in 2017, and a B.E. (Honours) from the Birla Institute of Technology and Science, Pilani, in 2015. Dr. Gokhale is a computer vision researcher working on robust visual understanding, with a focus on the connection between vision and language, semantic data engineering, and active inference. His research draws inspiration from the principles of perception, communication, learning, and reasoning. He is an organizer of the ODRUM workshops at CVPR 2022 and CVPR 2023, the SERUM tutorial at WACV 2023, and the RGMV tutorial at WACV 2024.

Neeraj Varshney is a Ph.D. candidate at ASU working in natural language processing, primarily focusing on improving the efficiency and reliability of NLP models. He has published multiple papers at top-tier NLP and AI conferences, including ACL, EMNLP, EACL, NAACL, and AAAI, and is a recipient of the SCAI Doctoral Fellowship, the GPSA Outstanding Research Award, and a Jumpstart Research Grant. He has served as a reviewer for several conferences, including ACL, EMNLP, EACL, and IJCAI, and was selected as an outstanding reviewer by the EACL 2023 conference.

Yezhou Yang, Ph.D., is an Associate Professor with the School of Computing and Augmented Intelligence (SCAI), Arizona State University. He received his Ph.D. from the University of Maryland. His primary interests lie in Cognitive Robotics, Computer Vision, and Robot Vision, especially exploring visual primitives in human action understanding from visual input, grounding them in natural language, and performing high-level reasoning over these primitives for intelligent robots.

Chitta Baral, Ph.D., is a Professor with the School of Computing and Augmented Intelligence (SCAI), Arizona State University, and received his Ph.D. from the University of Maryland. His primary interests lie in Natural Language Processing (NLP), Computer Vision (CV), the intersection of NLP and CV, and Knowledge Representation and Reasoning.
Key features:
Provides a comprehensive overview of the state of the art in multi-modal architectures and representation learning
Presents state-of-the-art techniques, including neural models based on transformers and multi-modal learning techniques
Explores the foundations and algorithms that power multimodal information retrieval (MMIR), and how information can be retrieved using multimodal queries