Jul-26: Informal Proceedings
Mar-29: Nitesh Chawla's talk
Feb-18: Accepted papers
Jan-4: Journal of Frontiers of Computer Science PAKDD workshop issue
Jan-3: Longbing Cao's talk
Dec-13: Submission is open
Oct-13: QIMIE blog is open
Prof. Longbing Cao (University of Technology Sydney, Australia) will give an invited talk on Knowledge Actionability: Evaluation and Practices.
Abstract: Actionable knowledge discovery and delivery is very demanding and challenging. It is regarded as one of the grand challenges in next-generation knowledge discovery in databases (KDD) research. Traditionally, data mining research has focused mainly on developing and improving technical interestingness. This has proved insufficient in real-world enterprise data mining and in emerging applications such as bioinformatics and behavior informatics. In this talk, a general evaluation framework is discussed for measuring the actionability of discovered knowledge, covering both technical and business performance evaluation. Metrics and strategies for supporting the extraction of actionable knowledge will be discussed. Practices in evaluating findings in several domains, such as actionable trading strategies in financial data mining, actionable intervention rules for social security data mining, and actionable fraud detection rules for online banking risk management, will be introduced.
Prof. Nitesh Chawla (University of Notre Dame, USA) will give an invited talk on the Evaluation Conundrum in Machine Learning/Data Mining.
Abstract: Reproducibility is imperative in modern science. Reproducibility includes not only the ability to recreate scientific experiments, but also the underlying data (where possible). A central point in reproducibility is "evaluation", which broadly includes the performance metrics as well as the validation strategies. The prevailing approach to evaluating classifiers involves comparing the performance of several algorithms over a series of usually unrelated data sets. However, beyond this there are many dimensions along which methodologies vary widely. Depending on the stability and similarity of the algorithms being compared, these sometimes arbitrary methodological choices can have a significant impact on the conclusions of any study, including the results of statistical tests. Moreover, an estimate of a classifier's performance may not be aligned with the true cost of applying the classifier to future data. In this talk, I will discuss the various aspects of classifier evaluation (including performance metrics and cross-validation issues), as well as classifier selection. I will also offer recommendations.
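The gap between a headline metric and the true cost of deploying a classifier can be sketched with a toy example (a minimal illustration in Python, not material from the talk): on an imbalanced data set, a degenerate classifier that always predicts the majority class scores well under plain accuracy but is exposed by balanced accuracy, the mean of per-class recalls.

```python
def accuracy(y_true, y_pred):
    # Fraction of predictions that match the labels.
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def balanced_accuracy(y_true, y_pred):
    # Mean of per-class recalls: each class counts equally,
    # regardless of how many examples it has.
    recalls = []
    for c in set(y_true):
        idx = [i for i, t in enumerate(y_true) if t == c]
        recalls.append(sum(y_pred[i] == c for i in idx) / len(idx))
    return sum(recalls) / len(recalls)

# 95 negatives, 5 positives; the "classifier" predicts negative everywhere.
y_true = [0] * 95 + [1] * 5
y_pred = [0] * 100

print(accuracy(y_true, y_pred))           # 0.95 -- looks strong
print(balanced_accuracy(y_true, y_pred))  # 0.5  -- no better than chance
```

If the rare positive class is the costly one (fraud, disease), the 0.95 accuracy figure is exactly the kind of estimate the abstract warns about: high on paper, misaligned with the cost of the classifier's mistakes in practice.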