Keynote Speech



Title:
Quality Evaluation of Computer Generated & Partially-generated Visual Signals
Abstract:
With the rapid advancement of visual signal acquisition, computing, and networking, more and more scenarios make use of computer-generated (artificial) or partially-generated images and videos. Meaningful visual signals can be generated by computer graphics (CG) for VR, AR, and even the emerging metaverse. In addition, partial visual content may also be generated, as in screen-content, retargeted, stitched, HDR tone-mapped, style-transferred, and DIBR images; applications include screen capturing/analysis/matching, multi-client communication, content editing, remote education, native advertisement, and data augmentation for training. In this talk, different computational models will be presented for the quality evaluation of generated or partially-generated visual signals, and their potential extensions and future directions will also be discussed, since quality assessment plays a crucial role in benchmarking and shaping related algorithms and systems.
Bio:
Weisi Lin researches intelligent image and video processing, computational perceptual signal assessment, and multi-modality/media modeling. He received his B.Sc. and M.Sc. from Sun Yat-sen University, China, and his Ph.D. from King's College, U.K. He is currently a Professor in the School of Computer Science and Engineering, Nanyang Technological University, Singapore, where he also serves as Associate Chair (Research).

He is a Fellow of IEEE and IET, and was named a Highly Cited Researcher in 2019, 2020, and 2021. He was elected a Distinguished Lecturer of both the IEEE Circuits and Systems Society (2016-17) and the Asia-Pacific Signal and Information Processing Association (2012-13), and has given keynote/invited/tutorial/panel talks at 40+ international conferences. He has been an Associate Editor for IEEE Trans. Neural Networks and Learning Syst., IEEE Trans. Image Process., IEEE Trans. Circuits Syst. Video Technol., IEEE Trans. Multimedia, IEEE Signal Process. Lett., Quality and User Experience, and J. Visual Commun. Image Represent., a Senior Editor of APSIPA Trans. Signal and Information Processing, and a Guest Editor for 7 special issues of international journals. He chaired the IEEE MMTC QoE Interest Group (2012-2014), and has been a Technical Program Chair for IEEE ICME 2013, QoMEX 2014, PV 2015, PCM 2012, and IEEE VCIP 2017. He leads the Temasek Foundation Programme for AI Research, Education & Innovation in Asia, 2020-2025. He believes that good theory is practical, and has delivered 10+ major systems for industrial deployment with the technology developed.


Title:
Learning to Enhance 3D Point Clouds: from Static to Dynamic
Abstract:
3D point cloud data are widely used in immersive telepresence, cultural heritage reconstruction, geographic information systems, autonomous driving, and virtual/augmented reality. Despite rapid developments in 3D sensing technology, acquiring 3D point cloud data with high spatial and temporal resolution and complex geometry/topology remains time-consuming, challenging, and costly. This talk will present our recent studies on deep learning-based 3D point cloud reconstruction, including sparse 3D point cloud upsampling, 3D point cloud generation, and temporal interpolation of dynamic 3D point cloud sequences.
Bio:
Junhui Hou is an Assistant Professor with the Department of Computer Science, City University of Hong Kong. His research interests fall into the general areas of multimedia signal processing, such as image/video/3D geometry data representation, processing and analysis, graph-based data modeling, and data compression. 

He received the Chinese Government Award for Outstanding Students Study Abroad from China Scholarship Council in 2015 and the Early Career Award (3/381) from the Hong Kong Research Grants Council in 2018. He is an elected member of IEEE MSA-TC, VSPC-TC, and MMSP-TC. He is currently an Associate Editor for IEEE Transactions on Image Processing, IEEE Transactions on Circuits and Systems for Video Technology, Signal Processing: Image Communication, and The Visual Computer. He also served as an Area Chair of various international conferences, including ACM MM, IEEE ICME, VCIP, ICIP, MMSP, and WACV.

Title:
Segmentation of multimodal medical images
Abstract:
To better delineate the tumor contour for treatment, patients often undergo several medical imaging examinations, such as CT and PET. Segmenting the tumor from multimodal images is an important issue for diagnosis, radiotherapy, and cancer outcome prediction. The challenge is how to efficiently fuse the multiple sources of information to improve tumor segmentation performance. In addition, it is common to have missing MRI modalities in clinical practice due to different acquisition protocols, image corruption, scanner availability, or scanning cost. Missing data make tumor segmentation even more difficult. In this talk, I will present our deep learning-based work on exploiting latent features and fusing them to improve segmentation performance with complete or missing modalities. The proposed methods are evaluated on the BraTS MICCAI challenge datasets, demonstrating their good performance.
Bio:
Su Ruan received the M.S. and Ph.D. degrees in image processing from the University of Rennes, France, in 1989 and 1993, respectively. From 2003 to 2010, she was a Full Professor with the University of Reims Champagne-Ardenne, France. She is currently a Full Professor with the Department of Medicine, Rouen Normandy University, France. Her research interests include pattern recognition, machine learning, information fusion, and medical imaging. She is currently an Associate Editor for Computerized Medical Imaging and Graphics, IRBM, and Array. She has also served as an Area Chair of various international conferences, such as MICCAI and IEEE ISBI.