
Invited Speeches
 

Prof. Ran He
Institute of Automation, Chinese Academy of Sciences, China


Ran He is a Professor at the National Laboratory of Pattern Recognition (NLPR), Institute of Automation, Chinese Academy of Sciences, Beijing, China. He received the B.E. and M.S. degrees in Computer Science from Dalian University of Technology in 2001 and 2004, respectively, and the Ph.D. degree in Pattern Recognition and Intelligent Systems from the Institute of Automation, Chinese Academy of Sciences in 2009. His research interests focus on information-theoretic learning, pattern recognition, and computer vision. He has published over 140 journal and conference papers in these fields, appearing in highly ranked international journals such as IEEE TPAMI, TIP, TIFS, IJCV, and PR, and at leading international conferences such as ICCV, CVPR, NIPS, IJCAI, and AAAI. He currently serves as an associate editor of Elsevier Neurocomputing and IET Image Processing, and has served as an area chair and senior program committee member for several conferences. His research has been supported by the NSFC Excellent Young Scientist Programme and the Beijing Natural Science Funds for Distinguished Young Scholars.

Speech Title: "Variational Image Analysis under Limited Computational Resource"

Image data tend to be high-dimensional and large-scale. Given unlimited computational resources, machine learning algorithms can produce exact results, but at prohibitive cost. Variational approximation methods arise from the constraint of a finite amount of processor time, and are often built on top of standard function approximators. In this talk, we introduce a group of variational inference and learning algorithms that scale to high-dimensional, large-scale image datasets. First, we address linear approximation to learn robust and compact local features of image data, named ordinal measures. Second, we address quadratic approximation of a family of loss functions that are widely used in image analysis; accordingly, a half-quadratic optimization framework is proposed for modeling sparsity, low-rank recovery, and noise. Third, we introduce an Introspective Variational Autoencoder to approximate the posterior distribution, from which high-resolution images can be generated, paving the way for analysis via synthesis.
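
To give a concrete flavor of the half-quadratic idea mentioned in the abstract, the following is a minimal sketch (not taken from the talk) of multiplicative half-quadratic optimization applied to robust linear regression with a Welsch loss: the auxiliary weights have a closed-form update, and the model parameters are refit by weighted least squares. The function name half_quadratic_fit, the choice of loss, and all parameter values are illustrative assumptions.

```python
import numpy as np

def half_quadratic_fit(X, y, sigma=1.0, n_iters=50):
    """Robust linear regression with a Welsch loss, solved by a
    multiplicative half-quadratic (IRLS-style) scheme.

    Alternates between a closed-form update of the auxiliary weights
    w_i = exp(-r_i^2 / (2 sigma^2)) and a weighted least-squares solve
    for the regression coefficients."""
    n, d = X.shape
    beta = np.zeros(d)
    for _ in range(n_iters):
        r = y - X @ beta                          # current residuals
        w = np.exp(-r ** 2 / (2.0 * sigma ** 2))  # auxiliary HQ weights
        Xw = X * w[:, None]                       # row-weighted design matrix
        # weighted normal equations: (X^T W X) beta = X^T W y
        beta = np.linalg.solve(Xw.T @ X + 1e-8 * np.eye(d), Xw.T @ y)
    return beta

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 3))
    y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.normal(size=200)
    y[:20] += 10.0                                # gross outliers
    print(half_quadratic_fit(X, y, sigma=1.0))
```

Outliers receive exponentially small weights, so the alternating scheme converges to a fit dominated by the inlier residuals; the same alternation pattern underlies half-quadratic treatments of sparsity and low-rank recovery.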

 

Assoc. Prof. Qingzheng Xu
National University of Defense Technology, China


Qingzheng Xu is an Associate Professor with the College of Information and Communication, National University of Defense Technology, Xi'an, China. He received the B.S. degree in Information Engineering from the PLA University of Science and Technology, Nanjing, China, in 2002, and the Ph.D. degree in Control Theory and Engineering from the Xi'an University of Technology, Xi'an, China, in 2011. He was a visiting scholar at the School of Computer Science and Engineering, Nanyang Technological University, Singapore, from May 2018 to May 2019. He is a senior member of the China Computer Federation. His main research interests are opposition-based learning, nature-inspired computation, and combinatorial optimization. He has published over 50 technical papers in international journals and conference proceedings.

Speech Title: "Opposition-Based Learning and its Application in Evolutionary Computing"

The concept of opposition is both familiar and mysterious to ordinary mortals like us. However, owing to the lack of an accepted mathematical or computational model, until recently it had not been explicitly investigated at any great length outside of philosophy and logic. The basic concept of Opposition-Based Learning (OBL) was introduced by Tizhoosh in 2005, and in a very short period of time it has been applied across different areas of evolutionary computing. This speech covers the basic concepts, theoretical foundation, combinations with intelligent algorithms, and typical application fields of OBL. A number of challenges that can be undertaken to help move the field forward are also discussed.
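
To make the basic OBL idea concrete, below is a minimal Python sketch (not part of the speech) of opposition-based population initialization: for a candidate x in the box [a, b], its opposite is a + b − x, and the initial population keeps the fittest individuals from the union of a random population and its opposites. The function name opposition_based_init and the toy sphere objective are illustrative assumptions.

```python
import numpy as np

def opposition_based_init(fitness, pop_size, lower, upper, rng=None):
    """Opposition-based population initialization (basic OBL scheme).

    For a random candidate x in the box [lower, upper], its opposite is
    lower + upper - x.  The initial population keeps the pop_size fittest
    individuals (for minimization) from the union of the random candidates
    and their opposites."""
    rng = np.random.default_rng() if rng is None else rng
    dim = lower.shape[0]
    pop = lower + (upper - lower) * rng.random((pop_size, dim))
    opp = lower + upper - pop                    # element-wise opposite points
    union = np.vstack([pop, opp])
    scores = np.array([fitness(x) for x in union])
    best = np.argsort(scores)[:pop_size]         # keep the fittest pop_size
    return union[best]

if __name__ == "__main__":
    sphere = lambda x: float(np.sum(x ** 2))     # toy minimization objective
    lo, hi = -5.0 * np.ones(10), 5.0 * np.ones(10)
    init_pop = opposition_based_init(sphere, pop_size=20, lower=lo, upper=hi)
    print(init_pop.shape)                        # (20, 10)
```

The same evaluate-both-and-keep-the-better step can also be applied during the generation loop of an evolutionary algorithm (so-called generation jumping), which is one of the combinations with intelligent algorithms discussed in the speech.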