[Seminar] Applications of Smart Agricultural Machinery and Hyperspectral Imaging in Agriculture / On Source Anonymity in Heterogeneous Statistical Inference
Title: Applications of Smart Agricultural Machinery and Hyperspectral Imaging in Agriculture
Time: Thursday, May 2, 2019, 13:45–15:30
Venue: Science and Engineering Building II, Lecture Hall 3 (C101)
Speakers: 李龍正, Researcher, Taiwan Instrument Research Institute, National Applied Research Laboratories
廖泰杉, Researcher, Taiwan Instrument Research Institute, National Applied Research Laboratories
歐陽盟, Professor, Department of Electrical Engineering, National Chiao Tung University


*********************************************************************
Information for the EE/CS departments' seminar on Friday, May 3 is as follows:

Title: On Source Anonymity in Heterogeneous Statistical Inference
Speaker: Prof. I-Hsiang Wang (王奕翔), Department of Electrical Engineering, National Taiwan University
Time: Friday, May 3, 2019, 14:00–16:00
Venue: Science and Engineering Building II, Lecture Hall 4 (A307)
Abstract:
Statistical inference is a fundamental task in data science, where a decision maker aims to determine a hidden parameter based on the data it collects, as well as on how the data depends statistically on the target parameter. In many modern applications such as crowdsourcing and sensor networks, data is heterogeneous and collected from various sources following different distributions. These sources, however, may be anonymous to the decision maker due to identification costs and privacy considerations. Since the underlying distribution then becomes unknown, it is unclear how to carry out optimal inference, and hence the impact of source anonymity on the performance of statistical inference remains elusive. In this talk, I will present our recent work towards settling this question for binary hypothesis testing. Given the anonymity of the data sources, it is natural to formulate the task as a composite hypothesis testing problem. First, we propose an optimal test called the mixture likelihood ratio test, a randomized threshold test based on the ratio of the uniform mixture of all possible distributions under one hypothesis to that under the other hypothesis. Second, we focus on the Neyman-Pearson setting and characterize the error exponent of the worst-case type-II error probability as the dimension of the data tends to infinity while the proportions of the dimensions contributed by the different data sources remain constant. It turns out that the optimal exponent is a generalized divergence between the two families of distributions under the two hypotheses. Our results elucidate the price of anonymity in heterogeneous hypothesis testing and can be extended to more general inference tasks.
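
As a rough formalization of the mixture likelihood ratio test described in the abstract (a sketch inferred from the abstract only; the symbols $\mathcal{P}_0$ and $\mathcal{P}_1$ for the two families of possible distributions, the observation $x^n$, and the threshold $\tau$ are assumed notation, not necessarily the speaker's):

\[
L(x^n) \;=\; \frac{\dfrac{1}{|\mathcal{P}_1|}\sum_{Q \in \mathcal{P}_1} Q(x^n)}
                  {\dfrac{1}{|\mathcal{P}_0|}\sum_{P \in \mathcal{P}_0} P(x^n)},
\qquad
\text{decide } H_1 \text{ if } L(x^n) > \tau, \quad H_0 \text{ if } L(x^n) < \tau,
\]

randomizing between the two decisions when $L(x^n) = \tau$, with $\tau$ (and the randomization probability) chosen to meet the type-I error constraint in the Neyman-Pearson setting.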

Bio:
I-Hsiang Wang received his Ph.D. in Electrical Engineering and Computer Sciences from the University of California, Berkeley, USA, in 2011. From 2011 to 2013, he was a postdoctoral research associate in the School of Computer and Communication Sciences (IC) at École Polytechnique Fédérale de Lausanne (EPFL), Switzerland. In Fall 2013, he joined National Taiwan University, where he is now an associate professor. Prof. Wang's expertise lies in information theory, statistical learning, and networked data processing. He received the Berkeley Vodafone Fellowship in 2006 and 2007, and was a finalist for the Best Student Paper Award at the 2011 IEEE International Symposium on Information Theory. He won the 2017 IEEE Information Theory Society Taipei Chapter and IEEE Communications Society Taipei/Tainan Chapters Best Paper Award for Young Scholars, as well as the 2016 National Taiwan University Distinguished Teaching Award. He has served on the technical program committees of flagship conferences in information theory, including the IEEE International Symposium on Information Theory (ISIT) and the IEEE Information Theory Workshop (ITW).

All faculty and students are welcome to attend.