Dr. Ionut Damian

Signal Processing / Sensor Data Fusion / Autonomous Driving

Contact » Publications »

Personal History

1994 - 2006
2006 - 2011
2011 - 2018
2018 - now

Expertise

Software Engineering

Since my early teenage years, I have striven to create complex yet flexible software solutions. I started with web development, building websites in HTML, PHP and JavaScript; during my student and PhD years I switched to developing in C++, C# and Java. I have both participated in and coordinated small teams of programmers and designers building various software solutions. Some examples can be seen below.

Human-Computer Interaction

Sensor Data Fusion

I have experience with fusing data from multiple sensors into a single consistent environment model, enabling highly accurate perception systems for autonomous driving.
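As a minimal illustration of the idea behind such fusion, assuming independent Gaussian sensor noise, combining two sensors' readings of the same quantity comes down to inverse-variance weighting. The sensor names and numbers below are invented for the example and are not taken from any real system.

```python
import numpy as np

def fuse_measurements(means, variances):
    """Fuse independent Gaussian measurements of the same quantity.

    Each sensor reports (mean, variance); the fused estimate is the
    inverse-variance weighted average, the basic static-fusion step
    underlying Kalman-style sensor fusion.
    """
    means = np.asarray(means, dtype=float)
    variances = np.asarray(variances, dtype=float)
    weights = 1.0 / variances                     # more precise sensors weigh more
    fused_mean = np.sum(weights * means) / np.sum(weights)
    fused_variance = 1.0 / np.sum(weights)        # always below the smallest input variance
    return fused_mean, fused_variance

# Hypothetical example: two range sensors measure the distance to one object.
# The fused estimate lies between the readings, closer to the less noisy
# sensor, and is more certain than either sensor alone.
mean, var = fuse_measurements([10.0, 10.4], [0.5, 0.25])
```

The same weighting generalises to the multivariate case (covariance matrices instead of scalar variances), which is where full Kalman filtering takes over.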

Machine Learning / Signal Processing

During my PhD programme I built numerous signal processing systems for automatically classifying user behaviour in real time. To this end, I implemented signal filters, developed feature extractors and trained various types of classification models, including Neural Networks and Support Vector Machines.
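The stages mentioned above (filtering, feature extraction, classification) can be sketched as a toy pipeline. Everything here is illustrative: the moving-average filter, the three features and the synthetic data are made up, and a simple nearest-centroid classifier stands in for the SVMs and neural networks named in the text.

```python
import numpy as np

def moving_average(signal, window=5):
    """Simple low-pass filter: smooths the raw sensor signal."""
    kernel = np.ones(window) / window
    return np.convolve(signal, kernel, mode="valid")

def extract_features(signal):
    """Turn a signal window into a fixed-length feature vector:
    mean level, variability, and average sample-to-sample change."""
    return np.array([signal.mean(), signal.std(), np.abs(np.diff(signal)).mean()])

class NearestCentroid:
    """Tiny stand-in for a real classifier: assign each feature vector
    to the class whose mean feature vector is closest."""
    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        return self
    def predict(self, X):
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None, :, :], axis=2)
        return self.classes_[d.argmin(axis=1)]

# Synthetic data: "calm" windows are low-amplitude noise,
# "active" windows oscillate strongly.
rng = np.random.default_rng(0)
calm = [rng.normal(0.0, 0.1, 100) for _ in range(10)]
active = [np.sin(np.linspace(0, 20, 100)) + rng.normal(0.0, 0.1, 100) for _ in range(10)]

# Pipeline: filter -> features -> classifier.
X = np.array([extract_features(moving_average(s)) for s in calm + active])
y = np.array([0] * 10 + [1] * 10)
pred = NearestCentroid().fit(X, y).predict(X)
```

In a real system the classifier would of course be evaluated on held-out data; the point here is only the shape of the filter/feature/classifier pipeline.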

Languages

Besides Romanian, my mother tongue, I am fluent in German and English. I also have basic knowledge of French and Georgian.

Portfolio

Signal Processing for Android

SSJ

SSJ is an extensible Android framework for real-time signal processing and classification in out-of-lab environments. It enables the recording, processing and classification of sensor data from over 20 internal and external sensors. To this end, a wide array of signal processing tools is packaged in a flexible, mobile-friendly Java library which can be easily integrated into Android apps. Moreover, with the help of the SSJ Creator app, complex signal processing pipelines can be designed and executed without writing a single line of code.

View details »

Social Signal Interpretation

SSI

The Social Signal Interpretation (SSI) framework offers tools to record, analyse and recognize human behaviour in real time, such as gestures, facial expressions, head nods and emotional speech. To do so, it can extract and process data from multiple sensor devices in parallel. SSI can also use machine learning techniques for the automatic classification of human behaviour.

View details »

Logue

Logue

Logue is an open-source application for augmenting social interactions by providing behavioural feedback in real time via different modalities: visual, auditory and haptic. The aim is to increase awareness of, and improve the quality of, one's own nonverbal behaviour.

View details » Video »

Advanced Agent Animation

AAA

Advanced Agent Animation (AAA) is an application designed for managing virtual social situations. It provides extended support for manipulating virtual characters and simulating social interactions. The behaviour (speech, gestures, postures, gaze) of each character can be customized to mimic various social characteristics such as gender, culture and personality.

View details »

C++

Glassistant

The German-funded GLASSISTANT project aims to use smart glasses (e.g. Google Glass) to support persons with mild cognitive impairment (MCI). In GLASSISTANT, an Android application continuously monitors the stress level of the user with the help of wearable sensors. The goal is to detect when the user is in need of assistance. If an increased level of stress is detected, the system automatically attempts to guide the user towards a more relaxed state, provides directions home or contacts a family member.

View details » Video »

C++

TARDIS

The EU-funded TARDIS project aims to build a job interview simulation platform that lets young people at risk of exclusion explore, practice and improve their social skills. Virtual agents (VAs) act as recruiters in simulated job interview scenarios. The user's behaviour is analysed in real time with the help of various sensors, which allows the VA to react to the user in a human-like fashion and facilitates post-hoc inspection of the interview by the user.

View details » Video »

Publications

Contact