QIMIE'09 Description
News
We are pleased to announce that Prof. Einoshin Suzuki (Kyushu University, Japan) will give an invited talk on "Interestingness Measures - Limits, Desiderata, and Recent Results".
Abstract: Over the last two decades, interestingness measures, each of which estimates the degree of interestingness of a discovered pattern, have been actively studied. A typical interestingness measure is more complex than a machine-learning measure that can be computed from statistics of the given data, such as accuracy, recall, precision, the F-value, or the area under the ROC curve.
Defining human interestingness can be called AI-hard, as it is as difficult as solving all problems in artificial intelligence (AI). We must beware of the hype of omnipotent interestingness measures and set realistic objectives for research on them.
Desiderata on interestingness measures can be classified into qualitative expressions, such as generality, accuracy, simplicity, and comprehensibility, and quantitative relations. Pitfalls to avoid are much less well known than the desiderata; they include four biases in evaluation and the use of many parameters, the latter of which imposes extra work on users and leads to a problem analogous to overfitting in classification.
Recently we have proposed, for structured patterns, an interestingness measure that is parameter-free, exploits information from an initial hypothesis, and is based on the minimum description length principle. The measure has exhibited high "discovery accuracy", i.e., the rate at which it discovers the true hypothesis from data sets with up to 30% noise, using incomplete initial hypotheses.
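As a side note for readers less familiar with the machine-learning measures mentioned above, the sketch below computes accuracy, precision, recall, the F-value, and the area under the ROC curve from a small set of predictions; the labels and scores are made up purely for illustration and are unrelated to the talk.

```python
# Toy computation of standard machine-learning measures from hypothetical predictions.
y_true  = [1, 1, 1, 1, 0, 0, 0, 0, 0, 0]                 # ground-truth labels
y_pred  = [1, 1, 1, 0, 0, 0, 0, 1, 0, 0]                 # hard predictions
y_score = [.9, .8, .7, .4, .3, .2, .1, .6, .35, .05]     # ranking scores

tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)

accuracy  = (tp + tn) / len(y_true)
precision = tp / (tp + fp)
recall    = tp / (tp + fn)
f_value   = 2 * precision * recall / (precision + recall)

# AUC via the Mann-Whitney statistic: the probability that a random positive
# example is ranked above a random negative one (ties count as 1/2).
pos = [s for t, s in zip(y_true, y_score) if t == 1]
neg = [s for t, s in zip(y_true, y_score) if t == 0]
auc = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg) / (len(pos) * len(neg))

print(f"accuracy={accuracy:.2f} precision={precision:.2f} "
      f"recall={recall:.2f} F={f_value:.2f} AUC={auc:.2f}")
```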
We are pleased to announce that Assoc. Prof. Nitesh V. Chawla (University of Notre Dame, USA) will give an invited talk on "A framework for monitoring classifiers' performance: when and why failure occurs".
Abstract: Classifier error is the product of model bias and data variance. While it is important to understand the bias involved in selecting a given learning algorithm, it is equally important to understand the variability in data over time, since even the One True Model might perform poorly when training and evaluation samples diverge. Thus, the ability to identify distributional divergence is critical to pinpointing when fracture points in classifier performance will occur. Contemporary evaluation methods do not take into account the impact of distribution shifts on the quality of classifiers' predictions. In this talk, I present a comprehensive framework to proactively detect breakpoints in classifiers' predictions and shifts in data distributions through a series of statistical tests. I outline and utilize three scenarios under which data change: sample selection bias, covariate shift, and shifting class priors.
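As a purely illustrative aside (not the framework presented in the talk), the following sketch flags a possible covariate shift on a single feature by comparing a training sample with a deployment sample using a two-sample Kolmogorov-Smirnov test; the data and the 0.01 significance threshold are hypothetical.

```python
# Minimal sketch: detect distributional divergence on one feature with a KS test.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
train_feature  = rng.normal(loc=0.0, scale=1.0, size=1000)   # training-time distribution
deploy_feature = rng.normal(loc=0.5, scale=1.0, size=1000)   # shifted deployment distribution

statistic, p_value = ks_2samp(train_feature, deploy_feature)
if p_value < 0.01:
    print(f"covariate shift suspected (KS={statistic:.3f}, p={p_value:.1e})")
else:
    print("no significant divergence detected")
```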
QIMIE'09 accepted papers will be selected for post-proceedings in the LNCS series.
February 1, 2009: notifications have been sent to all authors by email; please check your inbox.
Submission: deadline passed.
Short presentation of QIMIE'09
There are many data mining algorithms and methodologies for various fields and various problems. Each researcher is faced with assessing the performance of his or her own proposal in order to compare it with state-of-the-art approaches. Which methodology, which benchmarks, which performance measures, which tools, etc., should be used, and why?
The Quality Issues, Measures of Interestingness and Evaluation of Data Mining Models workshop (QIMIE'09) will focus on the theory, techniques, and practices that can ensure that the discovered knowledge is of high quality. It will thus cover the problems of quality and evaluation of data mining models.
QIMIE'09 is organized in association with PAKDD'09 (the Pacific-Asia Conference on Knowledge Discovery and Data Mining, Bangkok, Thailand, April 27-30, 2009), a major international conference in the areas of data mining and knowledge discovery.
Topics of interest include (but are not limited to):
- objective measures of interest (for individual rules, for rule bases); a small illustrative computation of such measures follows this list
- subjective measures of interest and quality based on human knowledge, quality of ontologies, etc. (for individual rules, for rule bases)
- algorithmic properties of measures
- comparison of algorithms: issues with benchmarks, methodologies, statistical tests, etc.
- robustness evaluation
- special issues: imbalanced data, very large data, very high-dimensional data
- special issues in specialized domains: bioinformatics, security, etc.
- graphical tools such as ROC curves and cost curves
- etc.
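To give a concrete flavor of the objective measures of interest mentioned in the first topic above, here is a toy computation of three classical rule-interestingness measures (support, confidence, and lift) for a single association rule A -> B; the transaction counts are entirely hypothetical.

```python
# Toy objective interestingness measures for an association rule A -> B.
n_transactions = 1000   # total number of transactions (hypothetical)
n_a  = 200              # transactions containing A
n_b  = 300              # transactions containing B
n_ab = 120              # transactions containing both A and B

support    = n_ab / n_transactions                 # P(A and B)
confidence = n_ab / n_a                            # P(B | A)
lift       = confidence / (n_b / n_transactions)   # P(B | A) / P(B)

print(f"support={support:.3f} confidence={confidence:.3f} lift={lift:.2f}")
```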