We are pleased to introduce the invited speakers:
Tony Veale, University College Dublin, Ireland
Tony Veale (Afflatus.UCD.ie) is a computer scientist whose principal research topic is Computational Linguistic Creativity. Veale teaches Computer Science at University College Dublin (UCD) and at Fudan University Shanghai as part of UCD’s international BSc in Software Engineering, which Veale helped establish in 2002. Veale’s work on Computational Creativity (CC) focuses on creative linguistic phenomena such as metaphor, simile, blending, analogy, similarity and irony. He leads the European Commission’s coordination action on Computational Creativity, PROSECCO – Promoting the Scientific Exploration of Computational Creativity – which aims to develop the CC field into a mature discipline. He is the author of the 2012 monograph Exploding the Creativity Myth: The Computational Foundations of Linguistic Creativity and principal co-editor of the de Gruyter multidisciplinary volume Creativity and the Agile Mind. As a visiting professor in Web Science at the Korea Advanced Institute of Science and Technology (2011-2013), Veale was funded by the Korean World Class University (WCU) programme to study the convergence of Computational Creativity and the Web in the form of Creative Web Services. He has recently launched a new Web initiative, RobotComix.com, to engage the wider public with the theory, philosophy and practice of building creative services; one such system – the creative web service Metaphor Magnet – can be sampled via the automated twitterbot @MetaphorMagnet.
Title: The Shape of Tweets to Come.
Abstract: Twitter has proven itself a rich and varied source of language data for linguistic analysis. For Twitter is more than a popular new platform for social interaction via language; in many ways Twitter constitutes a whole new genre of text, as users adapt to its limitations (140-character “tweets”) and its novel conventions (e.g. re-tweeting, hashtags). Language researchers can thus harvest Twitter data to study how users convey meaning with affect, and how they achieve stickiness and virality with the texts they compose. But Twitter presents an opportunity of another kind to the computationally minded language researcher: a generative opportunity to study how algorithmic models might impose linguistic hypotheses onto large data sources to compose novel and meaningful micro-texts of their own. This computational turn allows researchers to go beyond merely descriptive models of playful uses of language such as metaphor, sarcasm and irony. It allows researchers to test whether their models embody a sufficiently algorithmic understanding of a phenomenon to support the construction of a fully automated computational system, one that can generate wholly novel examples that humans deem acceptable. This talk presents and evaluates one such system, a Twitterbot named @MetaphorMagnet that generates, expresses and shares its own playful insights on Twitter. I shall show how @MetaphorMagnet’s tweets are inspired by data but shaped by knowledge, and consider how the outputs of this hybrid data/knowledge-driven bot may be usefully anchored in another source of data — the news stream.
Nick Heard, Imperial College London, UK
Nick Heard is a Senior Lecturer in the Statistics section of the Department of Mathematics, Imperial College London. He obtained his PhD under the supervision of Adrian Smith in 2001. Nick’s best-known work is on cluster analysis, where he has developed Bayesian methodology and software for use in bioinformatics. He has also written papers on Monte Carlo convergence, sequential Monte Carlo, and social networks. Nick’s current main research interest is in dynamic networks and cyber security. He is currently on a long-term secondment to the Heilbronn Institute for Mathematical Research, University of Bristol, which has a research focus on Internet data analysis and cyber security applications in the presence of so-called ‘Big Data’.
Abstract: Cyber attacks on government and industry computer networks are now commonplace and no system can be made invulnerable to intrusion. Instead, much importance is placed on reducing the impact of cyber attacks when they occur, which first means quickly detecting their presence amongst the flow of cyber traffic. However, sophisticated hackers and cyber criminals will act carefully to hide their presence, and so any hard detection rules (“signatures”) can be circumvented. Nonetheless, if an intrusion has a malign purpose, then at least some unusual behaviour will be hidden within the network traffic data. Statistical modelling of nodes and edges in a computer network can build up a picture of normal behaviour in the system. Typical institutional computer networks produce high-volume data streams and so, from time to time, surprising but benign behaviour will be observed. The task is to detect the significance of genuine intrusion events against this background. In statistical modelling, p-values are the fundamental quantities for measuring the significance of observed data against a null hypothesis. This talk will review methods of combining p-values to accumulate evidence, investigating their properties in depth. Some new approaches will then be proposed which are better suited for detecting subsets of significant p-values. Finally, the advantages of the proposed approach will be illustrated on a cyber authentication problem, stemming from collaborative work with Los Alamos National Laboratory.
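To give a concrete flavour of the p-value combiners the abstract refers to, the sketch below implements Fisher's method, one classical way of pooling independent p-values into a single significance measure. This is an illustrative textbook example, not the new approach proposed in the talk; the function name `fisher_combine` is our own.

```python
import math

def fisher_combine(pvalues):
    """Combine k independent p-values with Fisher's method.

    Under the global null hypothesis, the statistic
    X = -2 * sum(log p_i) follows a chi-squared distribution
    with 2k degrees of freedom; the combined p-value is its
    survival function evaluated at X.
    """
    k = len(pvalues)
    stat = -2.0 * sum(math.log(p) for p in pvalues)
    # For an even number of degrees of freedom (2k), the chi-squared
    # survival function has the closed form
    #   sf(x) = exp(-x/2) * sum_{i=0}^{k-1} (x/2)^i / i!
    # so no external statistics library is needed.
    half = stat / 2.0
    term, total = 1.0, 1.0
    for i in range(1, k):
        term *= half / i
        total += term
    return math.exp(-half) * total

# Several individually unremarkable p-values can accumulate into
# strong joint evidence against the null:
print(fisher_combine([0.04, 0.03, 0.05]))  # well below any single input
```

Note that Fisher's method aggregates evidence across *all* inputs, so a few significant p-values can be drowned out by many null ones; detecting small significant subsets, as in the network intrusion setting, motivates the alternative combiners discussed in the talk.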
Pascal Van Hentenryck, National ICT Australia (NICTA)
Pascal Van Hentenryck leads the Optimisation Research Group (about 75 people) at National ICT Australia (NICTA). He also holds a Vice-Chancellor Strategic Chair in data-intensive computing at the Australian National University. Van Hentenryck is the recipient of two honorary degrees and is a fellow of the Association for the Advancement of Artificial Intelligence. He was awarded the 2002 INFORMS ICS Award for research excellence in operations research and computer science, the 2006 ACP Award for research excellence in constraint programming, and the 2010-2011 Philip J. Bray Award for Teaching Excellence at Brown University, and was the 2013 IFORS Distinguished Speaker. Van Hentenryck is the author of five MIT Press books and has developed a number of innovative optimisation systems that are widely used in academia and industry. Van Hentenryck’s research is currently at the intersection of data science and optimisation, with a focus on disaster management, energy systems, recommender systems, and transportation.
Title: Evidence-Based Optimization.
Abstract: For the first time in the history of mankind, we are accumulating data sets of unprecedented scale and accuracy about physical infrastructures, natural phenomena, man-made processes, and human behaviour. These developments, together with progress in high-performance computing, machine learning, and operations research, offer exciting opportunities for the evidence-based optimisation of global systems. This talk reviews some case studies in disaster management, energy systems, high-performance computing, and market optimisation to showcase these unique opportunities and their associated challenges, and presents some emerging architectures for evidence-based optimisation.