Nanjing University

China Computer Federation

Chinese Association for Artificial Intelligence
CRSSC-CWI-CGrC2011


Invited Talks




Presenters: Bo Zhang (Tsinghua University), Ling Zhang (Anhui University)

Title: Representation and Measurement of Uncertainty

Abstract:
We first discuss why granular computing needs to address uncertainty. Having clarified this question, we turn to the representation of uncertainty, showing that the three main representations of uncertainty in the current international literature can all be expressed uniformly by the method of quotient space chains, and we discuss the advantages of this representation and its relationship with granular computing. The main conclusions are: (1) all three representations can be expressed uniformly by the hierarchical coordinates of a chain of quotient spaces; (2) this representation provides a very convenient model for granular computing; (3) the logical operations (intersection and union) have corresponding quotient operations in every quotient space, which provides a theoretical foundation for granular computing in reasoning.
The second part of the talk discusses the measurement of fuzzy sets from the quotient-space point of view, showing that fuzziness is a coarse-grained observation of a fuzzy set. Based on a structural analysis of fuzzy sets, we propose an "isotropy" assumption, under which the following results are obtained: (1) on a finite complete partially ordered set, the fuzziness measure with strict monotonicity and isotropy is unique; (2) a necessary and sufficient condition for fuzziness functions in fuzzy mathematics to be isomorphic; (3) a necessary and sufficient condition for a fuzziness measure to be both fuzzily monotone and granularity-monotone; (4) under certain assumptions, an analytic expression for the fuzziness measure. These results clarify the relationship between fuzziness and granular computing, reveal the essence of fuzziness, and provide a simple method for constructing fuzziness measures.
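A widely used fuzziness measure with the strict-monotonicity property discussed in the talk is the De Luca-Termini-style index. The sketch below is illustrative only (it is not the analytic expression derived in the talk): it shows how such a measure vanishes on crisp sets and peaks when every membership grade equals 0.5.

```python
def fuzziness(mu):
    """De Luca-Termini-style fuzziness index of a finite fuzzy set,
    given as a list of membership grades: 0 for crisp sets, 1 when
    every grade equals 0.5. Illustrative sketch only."""
    return 2.0 * sum(min(m, 1.0 - m) for m in mu) / len(mu)
```

The measure is strictly monotone in the sense that pushing a grade toward 0.5 strictly increases fuzziness, the property whose uniqueness the talk establishes under the isotropy assumption.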

Biography:
Bo Zhang is a professor in the Department of Computer Science and Technology and a member of the Chinese Academy of Sciences. Born in 1935 in Fuqing, Fujian Province, he graduated from the Department of Automatic Control at Tsinghua University and was a visiting scholar at the University of Illinois, USA. He is currently director of the State Key Laboratory of Intelligent Technology and Systems at Tsinghua University. His research covers the theory and applications of artificial intelligence, neural networks and genetic algorithms, as well as applied work on knowledge engineering, intelligent robotics and intelligent control.
He has published more than 100 papers and 3 monographs. He proposed the quotient space theory of problem solving in artificial intelligence and, starting from the theory of multi-granularity problem solving, gave new principles for uncertainty handling, qualitative reasoning, fuzzy analysis and evidence combination. He proposed new methods of multi-level information synthesis and multi-level planning and search, which improve machine problem-solving capability while reducing computational complexity. He also gave quantitative methods for analyzing various neural network models, as well as new network learning mechanisms. His research has been honored with the ICL European Artificial Intelligence Award, a Third Prize of the National Natural Science Award, First and Second Prizes for Scientific and Technological Progress from the State Education Commission, and a First Prize for Scientific and Technological Progress from the Ministry of Electronics Industry, among others.
Homepage: Bo Zhang

Biography:
Ling Zhang was born in May 1937. He graduated from Nanjing University in 1961 and is a professor in the School of Computer Science at Anhui University, head of its doctoral program in computer applications, and leader of the national key discipline of computer applications at Anhui University. He has held guest professorships at Tsinghua University, Zhejiang University, Tongji University and the Institute of Intelligent Machines, Chinese Academy of Sciences.
Prof. Zhang has received ten awards at or above the provincial second-prize level, including the National Natural Science Award, and eight honorary titles at or above the provincial level. He has led or participated in numerous projects under the national 863 and 973 programs, the National Climbing Program, and key and general programs of the National Natural Science Foundation of China. He has published three monographs (two won a First Prize for Outstanding Books from the National Press and Publication Administration, and one won a Special Prize for Outstanding Scientific Monographs from Higher Education Press) and more than one hundred papers.
Main research interests: the quotient space theory of granular computing (one of the three major granular computing theories internationally), artificial intelligence, machine learning, and intelligent computing.
Homepage: Ling Zhang



Presenter: Ning Zhong (Maebashi Institute of Technology, Japan; Beijing University of Technology, China)

Title: Wisdom Web of Things (W2T): Fundamental Issues, Challenges and Potential Applications

Abstract:
With the rapid development of the Internet and the Internet of Things, a new world, called the `hyper world', is emerging by coupling and empowering humans in the social world, information/computers in the cyber world, and things in the physical world. The notion of the `Wisdom Web of Things (W2T)' is a novel vision for computing and intelligence in the post-WWW era, recently put forward by a group of leading researchers from the fields of Web Intelligence (WI), Ubiquitous Intelligence (UI), Brain Informatics (BI), and Cyber Individual (CI). Inspired by the material cycle in the physical world, the W2T focuses on the data cycle, namely `from things to data, information, knowledge, wisdom, services, humans, and then back to things.' A W2T data cycle system is designed to implement such a cycle, which is, technologically speaking, a practical way to realize the harmonious symbiosis of humans, computers and things in the emerging hyper world. In this talk, we discuss fundamental issues, challenges and potential applications of such a W2T framework.

Biography:
Ning Zhong received his Ph.D. degree in the Interdisciplinary Course on Advanced Science and Technology from the University of Tokyo. He is currently head of the Knowledge Information Systems Laboratory and a professor in the Department of Life Science and Informatics at Maebashi Institute of Technology, Japan. He is also director and an adjunct professor at the International WIC Institute (WICI), Beijing University of Technology. He has conducted research in the areas of Web intelligence, brain informatics, knowledge discovery and data mining, granular-soft computing, intelligent agents, and knowledge information systems, with over 200 journal and conference publications and 20 books. He is the editor-in-chief of the Web Intelligence and Agent Systems journal (IOS Press), and serves as an associate editor or editorial board member for several international journals and book series, including IEEE Transactions on Knowledge and Data Engineering (2005-2008), Knowledge and Information Systems (Springer), Cognitive Systems Research (Elsevier), Health Information Science and Systems (Springer) and International Journal of Information Technology and Decision Making (World Scientific). He is co-chair of the Web Intelligence Consortium (WIC) and chair of the IEEE Computational Intelligence Society Task Force on Brain Informatics (TF-BI). He served as chair of the IEEE Computer Society Technical Committee on Intelligent Informatics (TCII) (2006-2009) and as a member of the steering committee of the IEEE International Conferences on Data Mining (ICDM) (2000-2009). He has served or is currently serving on the program committees of over 100 international conferences and workshops, including ICDM'02 (conference chair), ICDM'06 (program chair), WI-IAT'03 (conference chair), WI-IAT'04 (program chair), IJCAI'03 (advisory committee member), Brain Informatics 2009 (program chair) and AMT'11 (program chair).
He received the best paper awards of AMT'06 and JSAI'03, the IEEE TCII/ICDM Outstanding Service Award in 2004, and the Pacific-Asia Conference on Knowledge Discovery and Data Mining (PAKDD) Most Influential Paper Award (1999-2008).
Homepage: Ning Zhong



Presenter: Zhexue Huang (Shenzhen Institutes of Advanced Technology, SIAT)

Title: A Subspace Approach for Mining Massive High-Dimensional Data

Abstract:
The 21st century is viewed as the century of data. On the one hand, the total amount of data in society is accumulating at very high speed; on the other hand, the complexity of data is increasing as well. Data analysis has become more important than ever before, because many scientific, social and business problems depend on data analysis for their solution. However, data volume and complexity make data analysis a very challenging task. One big challenge is the high dimensionality of data that emerges in many fields such as text mining and bioinformatics. As dimensionality increases, sparseness and noise in data also increase, which makes many existing data analysis techniques inadequate. New theory and algorithms for the analysis of complex high-dimensional data have become a major research direction in several disciplines, such as statistics, machine learning, data mining and bioinformatics. Industrial demand for new technology for high-dimensional data analysis is increasing as well. In this talk, I will discuss a subspace approach for mining massive high-dimensional data. Subspace data mining algorithms search for clusters or build classification models from subspaces of high-dimensional data. I will use a feature grouping subspace clustering algorithm and a feature weighting random forest algorithm to illustrate the subspace method in dealing with very high-dimensional data. In the end, I will use an example of a MapReduce random forest implementation to demonstrate how to make use of a distributed cloud computing platform to make data mining algorithms scale to very large data.
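One simple form of subspace clustering assigns each attribute a weight that shrinks as the attribute's within-cluster dispersion grows, so clusters are effectively found in a soft subspace. The sketch below is a toy, attribute-weighted k-means in that spirit; it is not the speaker's published w-k-means formulation (initialization and update details are simplified, and centers start at the first k points).

```python
def w_k_means(X, k, beta=2.0, iters=10):
    """Toy attribute-weighted ("soft subspace") k-means sketch.
    X is a list of equal-length feature vectors; returns (labels, weights)."""
    d = len(X[0])
    centers = [list(X[j]) for j in range(k)]   # simplistic initialization
    w = [1.0 / d] * d                          # one weight per attribute
    labels = [0] * len(X)
    for _ in range(iters):
        # assign each point to the nearest center under the weighted metric
        labels = [min(range(k),
                      key=lambda j: sum(w[v] ** beta * (x[v] - centers[j][v]) ** 2
                                        for v in range(d)))
                  for x in X]
        # recompute centers as member means
        for j in range(k):
            members = [x for x, l in zip(X, labels) if l == j]
            if members:
                centers[j] = [sum(m[v] for m in members) / len(members)
                              for v in range(d)]
        # attributes with small within-cluster dispersion get large weights,
        # so the clustering concentrates on the discriminative subspace
        D = [sum((x[v] - centers[l][v]) ** 2 for x, l in zip(X, labels)) + 1e-9
             for v in range(d)]
        w = [1.0 / sum((D[v] / D[u]) ** (1.0 / (beta - 1)) for u in range(d))
             for v in range(d)]
    return labels, w
```

On data whose first attribute separates the clusters while the second is noise, the weight of the first attribute ends up dominating, which is the subspace effect the talk exploits at scale.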

Biography:
Dr. Joshua Zhexue Huang is a professor and Chief Scientist at Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences. He is also the director of Shenzhen Key Laboratory for High Performance Data Mining. Prof. Huang is known for his contributions to the development of a series of k-means-type clustering algorithms in data mining, such as k-modes, fuzzy k-modes, k-prototypes and w-k-means, which are widely cited and used, and some of which have been included in commercial software. He has led the development of the open source data mining system AlphaMiner (www.alphaminer.org) that is widely used in education, research and industry. He has extensive industry expertise in business intelligence and data mining and has been involved in numerous consulting projects in Australia, Hong Kong, Taiwan and mainland China. Dr. Huang received his Ph.D. degree from the Royal Institute of Technology in Sweden. He has published over 100 research papers in conferences and journals. In 2006, he received the first PAKDD Most Influential Paper Award.



Presenter: Mengjie Zhang (Victoria University of Wellington, New Zealand)

Title: Genetic Programming Principles and Applications

Abstract:
One of the central challenges of computer science is to get a computer to do what needs to be done without telling it exactly how to do it. Genetic programming (GP) addresses this challenge by providing a method for automatically creating a working computer program from a high-level statement of a specific task. GP achieves this goal by genetically breeding a population of computer programs using the principles of Darwinian natural selection and biologically inspired operations. This talk will start with the GP principles, including representation, operators, search mechanisms and the evolutionary process. The talk will then discuss the most popular applications of GP, with a focus on symbolic regression and mathematical modelling, classification with unbalanced data, and feature selection and manipulation. The talk will end with some interesting demonstrations using GP for motion detection and object tracking.
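The breeding loop described above — random programs, fitness-based selection, variation — can be sketched in miniature for symbolic regression. The toy below is illustrative only: mutation is the sole variation operator, selection is simple truncation, and the primitive set is tiny, none of which reflects the full GP systems discussed in the talk.

```python
import random
import operator

# Minimal GP sketch for symbolic regression over {x, constants, +, -, *}.
OPS = [(operator.add, '+'), (operator.sub, '-'), (operator.mul, '*')]

def random_tree(depth, rng):
    """Grow a random expression tree."""
    if depth == 0 or rng.random() < 0.3:
        return ('x',) if rng.random() < 0.7 else ('const', rng.uniform(-2, 2))
    return (rng.choice(OPS), random_tree(depth - 1, rng), random_tree(depth - 1, rng))

def evaluate(tree, x):
    if tree[0] == 'x':
        return x
    if tree[0] == 'const':
        return tree[1]
    (fn, _), left, right = tree
    return fn(evaluate(left, x), evaluate(right, x))

def fitness(tree, cases):
    """Sum of squared errors; numeric overflow/NaN mapped to +inf."""
    try:
        err = sum((evaluate(tree, x) - y) ** 2 for x, y in cases)
    except OverflowError:
        return float('inf')
    return err if err == err else float('inf')

def mutate(tree, rng, depth=2):
    """Replace a randomly chosen subtree with a fresh random tree."""
    if tree[0] in ('x', 'const') or rng.random() < 0.3:
        return random_tree(depth, rng)
    op, left, right = tree
    if rng.random() < 0.5:
        return (op, mutate(left, rng, depth), right)
    return (op, left, mutate(right, rng, depth))

def evolve(cases, pop_size=60, generations=30, seed=1):
    """Evolve a population with elitist truncation selection plus mutation."""
    rng = random.Random(seed)
    pop = [random_tree(3, rng) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda t: fitness(t, cases))
        survivors = pop[:pop_size // 3]        # keep the best third
        pop = survivors + [mutate(rng.choice(survivors), rng)
                           for _ in range(pop_size - len(survivors))]
    return min(pop, key=lambda t: fitness(t, cases))
```

Because the best survivors are always retained, the best fitness in the population is non-increasing over generations, which is the minimal property the Darwinian loop guarantees.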

Biography:
Mengjie Zhang is a Professor (British system) in Computer Science at Victoria University of Wellington, where he founded and currently heads the Evolutionary Computation Research Group. His research focuses mainly on evolutionary computation, particularly genetic programming, particle swarm optimisation and learning classifier systems, with application areas of computer vision and image processing, multi-objective optimisation, classification with unbalanced data, and feature selection and dimension reduction for classification in high dimensions. He is also interested in data mining, machine learning, and web intelligence.
Dr. Zhang has published over 180 academic papers in refereed international journals and conferences in these areas. He serves as an associate editor or editorial board member for five international journals, including IEEE Transactions on Evolutionary Computation and the Evolutionary Computation Journal (MIT Press), and as a reviewer for over 15 international journals. He has been a program/technical/special session co-chair for five international conferences, and has served as a steering committee member and a program committee member for over 80 international conferences, including all major conferences in evolutionary computation. Since 2007, he has been listed as one of the top ten international genetic programming researchers by the GP bibliography (http://www.cs.bham.ac.uk/~wbl/biblio/gp-html/index.html).
Dr. Zhang is a panel member of the Mathematical and Information Sciences panel for the Marsden Fund of New Zealand (equivalent to the National Science Foundation of the USA). He has been awarded a number of nationally competitive research grants (some with <10% acceptance rates). He is also a senior member of the IEEE, a member of the IEEE CIS Evolutionary Computation Technical Committee, a vice-chair of the IEEE CIS Task Force on Evolutionary Computer Vision and Image Processing, and a committee member of the IEEE New Zealand Central Section. He has been a main organizer of the special session on Evolutionary Computer Vision at the IEEE Congress on Evolutionary Computation since 2005.
Further Information: http://homepages.ecs.vuw.ac.nz/~mengjie/, http://ecs.victoria.ac.nz/Groups/ECRG/
Homepage: Mengjie Zhang



Presenter: Weizhi Wu (School of Mathematics, Physics and Information Science, Zhejiang Ocean University)

Title: Mathematical Structures of Rough Sets in Fuzzy Environments

Abstract:
This talk introduces the mathematical structures of relation-based rough sets in fuzzy environments over infinite universes. We first present the constructive definitions of rough sets in fuzzy environments — fuzzy rough sets and rough fuzzy sets — and the properties of the approximation operators. We then present the axiomatic approach, giving independent sets of axioms characterizing the fuzzy approximation operators. We further examine the relationship between fuzzy approximation operators and fuzzy topological spaces, showing that a reflexive crisp or fuzzy approximation space induces a fuzzy topological space, and that there is a one-to-one correspondence between the family of all reflexive and transitive crisp (or fuzzy) approximation spaces and the family of fuzzy Alexandrov spaces, under which the lower and upper approximation operators induced by an approximation space are exactly the interior and closure operators of the topological space. On the other hand, a reflexive and symmetric crisp approximation space generates a fuzzy clopen topology; conversely, under certain conditions, a fuzzy topological space can be induced by a crisp or fuzzy approximation space. We also discuss the measurable structures of rough sets in fuzzy environments, showing that the family of fuzzy definable sets induced by a serial approximation space forms a fuzzy sigma-algebra. Finally, we introduce the relationship between rough set theory in fuzzy environments and fuzzy evidence theory, obtaining mutual interpretations between the two theories: every evidence structure corresponds to a probability approximation space such that the probabilities of the lower and upper approximations of a fuzzy set induced by the approximation space are exactly the belief and plausibility measures of that set derived from the evidence structure.
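For a finite universe, the constructive definitions mentioned first can be written down directly with the standard min/max operators. The sketch below covers the finite case only (it does not capture the infinite-universe, topological or axiomatic results of the talk): it computes the lower and upper approximations of a fuzzy set under a fuzzy relation.

```python
def fuzzy_approximations(R, A):
    """Standard (min/max) fuzzy rough approximation operators on a finite
    universe {0, ..., n-1}: R[x][y] is a fuzzy relation, A[y] a fuzzy set.
    lower(A)(x) = min_y max(1 - R(x,y), A(y))
    upper(A)(x) = max_y min(R(x,y), A(y))"""
    n = len(A)
    lower = [min(max(1.0 - R[x][y], A[y]) for y in range(n)) for x in range(n)]
    upper = [max(min(R[x][y], A[y]) for y in range(n)) for x in range(n)]
    return lower, upper
```

For a reflexive relation these operators bracket the set pointwise, lower(A) <= A <= upper(A), consistent with the reflexivity results stated in the abstract.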


Biography:
Weizhi Wu was born in March 1964 in Putuo, Zhejiang Province. He is dean of the School of Mathematics, Physics and Information Science at Zhejiang Ocean University, leader of its applied mathematics discipline, and a professor. He holds a Ph.D. in science, completed postdoctoral research at the Chinese University of Hong Kong and at Xi'an Jiaotong University, and serves as an adjunct doctoral supervisor at Xi'an Jiaotong University. He received a nomination for the 2005 National Excellent Doctoral Dissertation Award, is a second-tier member of Zhejiang Province's "New Century 151 Talent Project", a young and middle-aged academic leader of Zhejiang universities, and was named an outstanding professional of Zhoushan City in its first and second selections.
He teaches and conducts research in mathematics and information science, with main interests in rough sets, concept lattices, random sets and granular computing. He has obtained a series of important results on the characterization of rough approximation operators, knowledge acquisition based on rough set and concept lattice theory, granular computing, and set-valued stochastic processes. He has led two National Natural Science Foundation of China projects, one national postdoctoral science foundation project and one Zhejiang Provincial Natural Science Foundation project, and has published more than 80 papers in journals and international conference proceedings, of which 25, 40 and 23 are indexed by SCI, EI and ISTP respectively; his SCI-indexed papers have received more than 110 SCI citations by others. He co-authored the monographs "Theory and Methods of Rough Sets" and "Information Systems and Knowledge Discovery", both published by Science Press and together cited by more than 1000 papers; five of his papers won Second Prizes for Excellent Natural Science Papers of Zhejiang Province.
He serves on the Advisory Board of the International Rough Set Society, as vice chair of the Rough Set and Soft Computing Society of the Chinese Association for Artificial Intelligence, as a council member of the Fuzzy Systems and Fuzzy Mathematics Council of the Systems Engineering Society of China, of the Zhejiang Mathematical Society and of the Zhejiang Applied Mathematics Research Society, and on the Advanced Mathematics Teaching Guidance Committee of Zhejiang universities. He is editor-in-chief of the International Journal of Computer Science and Knowledge Engineering and an editorial board member of three international journals and one Chinese core journal. He has been invited to chair program committees of related international and domestic conferences, has given an invited plenary talk at an international conference and at a domestic conference, and reviews for more than ten SCI-indexed international journals.
Homepage: Weizhi Wu



Presenter: Duoqian Miao (Tongji University)

Title: Advances in Knowledge Reduction under Different Objectives

Abstract:
A relative attribute reduct is a minimal attribute subset that preserves the classification ability of a decision table, and attribute reduction is one of the core topics in rough set theory. Classical Pawlak attribute reduction aims at keeping the positive region unchanged, and three kinds of reduction algorithms exist: (1) reduction based on attribute significance; (2) reduction based on information entropy; (3) reduction based on the discernibility matrix. In recent years, researchers at home and abroad have done a great deal of work on attribute reduction and have generalized classical Pawlak reduction to notions such as distribution reducts and maximum distribution reducts, but these generalizations have lacked systematic analysis. This talk will: (1) present a unified framework for relative attribute reduction according to different reduction objectives; (2) analyze the relationships among attribute reducts under different objectives; (3) show that a different discernibility matrix can be defined for each reduction objective, from which efficient attribute reduction algorithms can be developed. This work clarifies the relationships among attribute reduction algorithms under different objectives and reveals the essence of attribute reduction.
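The third family of algorithms, reduction via the discernibility matrix, can be illustrated on a toy decision table. The brute-force sketch below is classical Pawlak-style only (not one of the generalized matrices discussed in the talk) and uses exponential search, so it is purely didactic: a reduct is a minimal attribute subset that intersects every matrix entry.

```python
from itertools import combinations

def discernibility_matrix(table, attrs, decision):
    """For each pair of objects with different decisions, collect the set
    of condition attributes that distinguish them (Pawlak-style)."""
    entries = []
    for i in range(len(table)):
        for j in range(i + 1, len(table)):
            if table[i][decision] != table[j][decision]:
                diff = frozenset(a for a in attrs
                                 if table[i][a] != table[j][a])
                if diff:
                    entries.append(diff)
    return entries

def hits_all(subset, matrix):
    # a subset preserves discernibility iff it intersects every entry
    return all(entry & subset for entry in matrix)

def minimal_reduct(table, attrs, decision):
    """Smallest attribute subset hitting every discernibility entry
    (exhaustive search; real algorithms use heuristics instead)."""
    matrix = discernibility_matrix(table, attrs, decision)
    for r in range(1, len(attrs) + 1):
        for subset in combinations(attrs, r):
            if hits_all(set(subset), matrix):
                return set(subset)
    return set(attrs)
```

On a table where the decision is the XOR of `a` and `b` and `c` is constant, the reduct is {a, b}: no single attribute discerns all decision-differing pairs, and `c` is redundant.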


Biography:
Duoqian Miao received his Ph.D. in pattern recognition and intelligent systems from the Institute of Automation, Chinese Academy of Sciences, in 1997. He is a professor, doctoral supervisor and vice dean of the School of Electronics and Information Engineering at Tongji University, head of the computer science and technology program, deputy director of the postdoctoral station in computer science and technology, director of the national teaching demonstration center for computer and information technology, and deputy director of the Key Laboratory of Embedded Systems and Service Computing of the Ministry of Education. He is a review panel expert of the Department of Information Sciences of the National Natural Science Foundation of China, a member of the expert working group of the Ministry of Education's teaching guidance subcommittee for computer science and technology, a council and technical committee member of the Shanghai Computer Society and the Shanghai Association for Artificial Intelligence, and a procurement consulting expert for the Shanghai municipal government. Prof. Miao has long been engaged in research on rough set theory, granular computing, Web intelligence, pattern recognition and artificial intelligence, and has served as program committee chair or member for conference series including CGrC, CCML, CCDM, RSFDGrC, WI, WCICA, ISNN, AICI, GrC, RSKT and RSCTC. He has published more than 140 papers, of which more than 70 are indexed by SCI and EI, has authored 5 textbooks and monographs cited more than 1000 times by others, and holds 9 granted patents. He has led 5 National Natural Science Foundation of China projects, participated in one 973 project and two 863 projects, and led or participated in more than 20 provincial and ministerial natural science foundation and key technology projects. His awards include a Second Prize of the National Teaching Achievement Award, a First Prize of the Shanghai Teaching Achievement Award, a First Prize for Scientific and Technological Progress from the Ministry of Education, a First Prize of the Shanghai Technological Invention Award, a First Prize of the Chongqing Natural Science Award, and 7 other provincial and ministerial second and third prizes.
Homepage: Duoqian Miao


Presenter: Qinghua Hu (Harbin Institute of Technology)

Title: Neighborhood Rough Set Models for Classification Learning with Hybrid Data and Their Applications

Abstract:
The Pawlak rough set model provides an effective mathematical tool for inductive learning from symbolic data. In practical applications, however, most data are numerical or mix numerical and symbolic variables, which poses a challenge to the practical application of rough set theory. The neighborhood structure of the feature space spanned by the numerical variables carries important information for classification learning. This talk replaces the equivalence relations and partitions of the Pawlak model with neighborhood relations and coverings, constructs distance-induced neighborhood rough set models, and examines the mathematical properties of these models and their connections with other existing data analysis tools. It also introduces algorithms based on these models for boundary sample detection, attribute reduction, rule learning and the design of multiple classifier systems.
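The core construction, replacing equivalence classes with distance-based neighborhoods, can be illustrated as follows. This sketch uses Euclidean distance with a fixed radius delta (chosen here purely for illustration; the talk's models are more general): it computes neighborhood lower approximations and the positive region, whose complement contains the boundary samples.

```python
def neighborhood(X, i, delta):
    """Indices of samples within Euclidean distance delta of sample i."""
    xi = X[i]
    return [j for j, xj in enumerate(X)
            if sum((a - b) ** 2 for a, b in zip(xi, xj)) ** 0.5 <= delta]

def lower_approximation(X, y, label, delta):
    """Neighborhood lower approximation of a decision class: the samples
    whose entire delta-neighborhood carries that label."""
    cls = [i for i, l in enumerate(y) if l == label]
    return [i for i in cls
            if all(y[j] == label for j in neighborhood(X, i, delta))]

def positive_region(X, y, delta):
    # union of the lower approximations of all classes; samples outside it
    # sit on the classification boundary, where neighborhoods mix labels
    return sorted(i for lab in set(y)
                  for i in lower_approximation(X, y, lab, delta))
```

Shrinking delta toward zero recovers the crisp Pawlak behavior on distinct points, while a larger delta exposes the boundary samples between classes.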


Biography:
Qinghua Hu entered the School of Energy Science and Engineering at Harbin Institute of Technology in 1995, received his bachelor's and master's degrees in 1999 and 2002, and received his Ph.D. in control science and engineering from the School of Astronautics, Harbin Institute of Technology, in 2008. He joined the faculty in 2006, was promoted to associate professor in 2008 and became a doctoral supervisor in 2011. In October 2009 he began postdoctoral research at the Hong Kong Polytechnic University with funding from the Hong Kong government. He received National Natural Science Foundation of China grants in 2007 and 2009, and has participated in the NSFC major project "Research on Abrupt-Change Control of Scramjet Engines" and the national 973 project "Fundamental Research on the Safe and Efficient Utilization of Large-Scale Renewable Power in Smart Grids". He has published more than 80 papers in journals and conferences including IEEE TKDE, IEEE SMC-B and IEEE Transactions on Fuzzy Systems, of which nearly 50 are SCI-indexed and more than 60 are EI-indexed; his papers have received more than 200 SCI citations by others over the last three years. He has won several best paper awards at international and domestic conferences, and in 2010 served as program committee co-chair of the international conference RSCTC 2010.
Homepage: Qinghua Hu




State Key Laboratory for Novel Software Technology, Department of Computer Science and Technology, Nanjing University




Copyright © 2010-2011 CRSSC-CWI-CGrC2011