Dissertation news

Dissertation: 23.5.2016 Taming Big Knowledge Evolution (Cochez)


23.5.2016 12:00 — 15:00

Location: Mattilanniemi, Agora, Delta hall
M.Sc. Michael Cochez defends his doctoral dissertation in Information Technology, “Taming Big Knowledge Evolution”. The opponent is Professor Evgeny Osipov (Luleå University of Technology, Sweden) and the custos is Professor Vagan Terziyan (University of Jyväskylä).

Information and the knowledge derived from it are not static. Information changes over time, and our understanding of it evolves with our ability and willingness to consume it. Compared to humans, current computer systems seem very limited in their ability to truly understand the meaning of things; on the other hand, they are very powerful when it comes to performing exact computations. One aspect that sets humans apart from machines when trying to understand the world is that we often make mistakes, forget information, or choose what to focus on. Put differently, humans can behave somewhat randomly and still outperform machines in knowledge-related tasks.

In computer science there is a branch of research concerned with allowing randomness or inaccuracy in algorithms, which are then called approximate algorithms. Their main benefit is that they are often much faster than their exact counterparts, at the cost of producing wrong or inexact results once in a while. These algorithms can therefore be used in contexts where an occasional error does no harm. If the chance of making a mistake is very slim, say lower than the chance of a memory error, the expected precision rivals that of the exact counterparts. Furthermore, the input data often already contains a fair amount of uncertainty, so the small error introduced by an approximate algorithm becomes more or less insignificant. In this dissertation, the author investigates the application of familiar and new approximate algorithms to knowledge discovery and evolution.
The main contributions of the dissertation are a) an abstract formulation of what it means for an ontology to be and stay optimal over time; b) a contribution to a vision paper on the future of evolving knowledge ecosystems; c) an investigation of the application of locality-sensitive hashing (LSH) to ontology matching and semantic search; d) the twister tries algorithm, a novel approximate hierarchical clustering approach with linear space and time requirements; and e) an extension of the twister tries algorithm that trades a longer, but adjustable, running time for a likely improvement of the clustering result.
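To give a flavour of the kind of approximate algorithm involved (this is an illustrative sketch, not code from the dissertation), a MinHash signature is a classic locality-sensitive hashing technique: it compresses a set into a short signature so that the fraction of matching signature positions estimates the Jaccard similarity of two sets, trading a small, controllable error for speed and compactness.

```python
import hashlib

def minhash_signature(items, num_hashes=256):
    """MinHash signature: the minimum hash value per seeded hash function."""
    sig = []
    for seed in range(num_hashes):
        salt = seed.to_bytes(4, "big")  # seed each hash function via the salt
        sig.append(min(
            int.from_bytes(
                hashlib.blake2b(x.encode(), digest_size=8, salt=salt).digest(),
                "big",
            )
            for x in items
        ))
    return sig

def estimate_jaccard(sig_a, sig_b):
    """Fraction of matching positions estimates the Jaccard similarity."""
    return sum(x == y for x, y in zip(sig_a, sig_b)) / len(sig_a)

a = set("the quick brown fox jumps over the lazy dog".split())
b = set("the quick brown fox sleeps under the lazy dog".split())
# The exact Jaccard similarity of a and b is 6/10 = 0.6;
# the estimate below should land close to that value.
print(estimate_jaccard(minhash_signature(a), minhash_signature(b)))
```

More hash functions shrink the expected error, so the accuracy of the estimate can be tuned against the cost of computing it, which is exactly the trade-off described above.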

More information

Michael Cochez