Computer Vision, Perception and Cognition Group
“AI is THE key technology of the digital transformation, across sectors and industries, with major effects on our societies. Our research thus makes major contributions to the development of robust and trustworthy AI methods, and we enthusiastically teach their safe implementation and application.”
- Pattern recognition with deep learning
- Machine perception, computer vision and speaker recognition
- Neural system development
The CVPC group conducts pattern recognition research on a wide variety of tasks involving image, audio, and signal data. We focus on deep neural network and reinforcement learning methods, inspired by biological learning. Each task we study has its own learning target (e.g., detection, classification, clustering, segmentation, novelty detection, control) and corresponding use case (e.g., predictive maintenance, speaker recognition for multimedia indexing, document analysis, optical music recognition, computer vision for industrial quality control, automated machine learning, deep reinforcement learning for automated game play or building control), which in turn sheds light on different aspects of the learning process. We use this experience to build increasingly general AI systems based on neural architectures.
- Insight: keynotes, trainings
- AI consultancy: workshops, expert support, advice, technology assessment
- Research and development: small to large-scale collaborative projects, third party-funded research, student projects, commercially applicable prototypes
Stability of self-organizing net fragments as inductive bias for next-generation deep learning
We recently published "A Theory of Natural Intelligence", proposing a possible key to the emergence of intelligence in biological learners. The goal of this fellowship is to develop a technical implementation of the concept of self-organizing net fragments within contemporary deep artificial neural networks. ...
ML-BCA: Machine Learning for Body Composition Analysis
The Centre for Artificial Intelligence (CAI) of the ZHAW, together with the Cantonal Hospital Aarau (KSA), has laid the foundations for machine-learning-supported body composition analysis on the KSA's image data in preliminary studies and has achieved promising results. The aim of this project is ...
Master3D – 3D-Master for a Digitized Manufacturing Platform
We enhance Bossard's Real Time Manufacturing Services by automatically creating quotes for special parts. The core is an AI-created 3D Master that unifies all available part information, enabling pricing and feasibility evaluation for many manufacturing technologies, including additive manufacturing. ...
certAInty – A Certification Scheme for AI systems
Certification of AI systems by an accredited body increases trust, accelerates adoption, and enables their use in safety-critical applications. We develop a certification scheme comprising specific requirements, criteria, measures, and technical methods for assessing machine-learning-enabled systems. ...
DISTRAL: Industrial Process Monitoring for Injection Molding with Distributed Transfer Learning
We develop a distributed machine learning system to sort out defective plastic parts during production. The main challenge is the transferability of learned process know-how from case to case; the solution builds on domain adaptation, continual data-centric deep learning, and federated edge computing. ...
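The blurb above names domain adaptation as one ingredient for transferring process know-how between cases. As a hedged illustration only (synthetic stand-in data and a deliberately simple mean-alignment step, not the project's actual method), the sketch below shows why a defect classifier trained on one machine's sensor readings degrades under a calibration shift, and how aligning feature statistics can recover accuracy without any target labels:

```python
import numpy as np

rng = np.random.default_rng(0)

# Source "case": sensor features from one injection-molding setup (synthetic).
X_src = rng.normal(0.0, 1.0, size=(200, 3))
y_src = (X_src[:, 0] + X_src[:, 1] > 0).astype(int)  # 1 = defective part

# Target "case": same underlying physics, but a shifted sensor calibration.
shift = np.array([2.0, -1.0, 0.5])
X_tgt = rng.normal(0.0, 1.0, size=(200, 3)) + shift
y_tgt = ((X_tgt[:, 0] - shift[0]) + (X_tgt[:, 1] - shift[1]) > 0).astype(int)

# Fit a simple least-squares linear classifier (with bias) on the source case.
A_src = np.c_[X_src, np.ones(len(X_src))]
w = np.linalg.lstsq(A_src, y_src * 2 - 1, rcond=None)[0]

def predict(X):
    return (np.c_[X, np.ones(len(X))] @ w > 0).astype(int)

# Without adaptation, the calibration shift degrades accuracy ...
acc_raw = (predict(X_tgt) == y_tgt).mean()

# ... while shifting target features onto the source mean (a minimal
# domain-adaptation step) largely restores it, using no target labels.
X_adapted = X_tgt - X_tgt.mean(axis=0) + X_src.mean(axis=0)
acc_adapted = (predict(X_adapted) == y_tgt).mean()

print(f"accuracy without adaptation: {acc_raw:.2f}")
print(f"accuracy with mean alignment: {acc_adapted:.2f}")
```

Mean alignment is only the simplest member of the feature-distribution-matching family; the same idea extends to covariance alignment and learned domain-invariant representations.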
Roost, Dano; Meier, Ralph; Toffetti Carughi, Giovanni; Stadelmann, Thilo,
Active Vision and Perception in Human(-Robot) Collaboration Workshop at IEEE RO-MAN 2020 (AVHRC’20), online, 31 August - 4 September 2020.
University of Essex.
Available from: https://doi.org/10.21256/zhaw-20419
Perdikis, Serafeim; Leeb, Robert; Chavarriaga, Ricardo; Millán, José del R.,
IEEE Transactions on Neural Networks and Learning Systems.
Available from: https://doi.org/10.1109/TNNLS.2020.3011671
IEEE Systems, Man, and Cybernetics Magazine.
6(3), pp. 50-51.
Available from: https://doi.org/10.1109/MSMC.2020.2995438
Roost, Dano; Meier, Ralph; Huschauer, Stephan; Nygren, Erik; Egli, Adrian; Weiler, Andreas; Stadelmann, Thilo,
Proceedings of the 7th SDS.
7th Swiss Conference on Data Science, Lucerne, Switzerland, 26 June 2020.
Available from: https://doi.org/10.21256/zhaw-19978
Aydarkhanov, Ruslan; Ušćumlić, Marija; Chavarriaga, Ricardo; Gheorghe, Lucian; del R Millán, José,
Journal of Neural Engineering.
17(3), 036030.
Available from: https://doi.org/10.1088/1741-2552/ab95eb
|2023||Extended Abstract||Thilo Stadelmann. AI as an Opportunity for the Applied Sciences in the Competition among Universities. Workshop ("Atelier") at the Bürgenstock-Konferenz der Schweizer Fachhochschulen und Pädagogischen Hochschulen 2023, Lucerne, Switzerland, 20 January 2023|
|2022||Extended Abstract||Christoph von der Malsburg, Benjamin F. Grewe, and Thilo Stadelmann. Making Sense of the Natural Environment. Proceedings of the KogWis 2022 - Understanding Minds Biannual Conference of the German Cognitive Science Society, Freiburg, Germany, September 5-7, 2022.|
|2022||Open Research Data||Felix M. Schmitt-Koopmann, Elaine M. Huang, Hans-Peter Hutter, Thilo Stadelmann, and Alireza Darvishy. FormulaNet: A Benchmark Dataset for Mathematical Formula Detection. One unsolved sub-task of document analysis is mathematical formula detection (MFD). Research by ourselves and others has shown that existing MFD datasets with inline and display formula labels are small and have insufficient labeling quality. There is therefore an urgent need for datasets with better-quality labeling for future research in the MFD field, as they have a high impact on the performance of the models trained on them. We present an advanced labeling pipeline and a new dataset called FormulaNet. At over 45k pages, we believe that FormulaNet is the largest MFD dataset with inline formula labels. Our dataset is intended to help address the MFD task and may enable the development of new applications, such as making mathematical formulae accessible in PDFs for visually impaired screen reader users.|
|2020||Open Research Data||Lukas Tuggener, Yvan Putra Satyawan, Alexander Pacha, Jürgen Schmidhuber, and Thilo Stadelmann. DeepScoresV2. The DeepScoresV2 Dataset for Music Object Detection contains digitally rendered images of written sheet music, together with the corresponding ground truth to fit various types of machine learning models. A total of 151 million instances of music symbols, belonging to 135 different classes, are annotated. The full dataset contains 255,385 images. For most research purposes, the dense version, containing 1,714 of the most diverse and interesting images, is a good starting point.|