Since my early teenage years I have strived to create complex yet flexible software solutions. I started with web development, building websites in HTML, PHP and JavaScript, and during my student and PhD years moved on to developing in C++, C# and Java. I have both participated in and coordinated small teams of programmers and designers to deliver various software solutions. Some examples can be seen below.
Sensor Data Fusion
I have experience fusing data from multiple sensors into a single consistent environment model, enabling highly accurate perception systems for autonomous driving.
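To illustrate the underlying idea, below is a minimal sketch of variance-weighted measurement fusion (the core of a Kalman filter update step), assuming two independent range estimates of the same object; the class, method and example values are hypothetical and not taken from any production system.

// Minimal illustrative sketch (not a production system): fusing two noisy
// 1-D measurements of the same quantity, weighted by their variances.
// Class, method and example values are hypothetical.
public final class SensorFusionSketch {

    /** Variance-weighted fusion of two independent measurements. */
    static double[] fuse(double mean1, double var1, double mean2, double var2) {
        double w1 = var2 / (var1 + var2);                 // lower variance -> higher weight
        double w2 = var1 / (var1 + var2);
        double fusedMean = w1 * mean1 + w2 * mean2;
        double fusedVar  = (var1 * var2) / (var1 + var2); // fused estimate is more certain
        return new double[] { fusedMean, fusedVar };
    }

    public static void main(String[] args) {
        // e.g. a radar and a lidar range estimate of the same object (made-up numbers)
        double[] fused = fuse(10.2, 0.5, 9.8, 0.2);
        System.out.printf("fused distance: %.2f m (variance %.2f)%n", fused[0], fused[1]);
    }
}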
Machine Learning / Signal Processing
During my PhD programme I built numerous signal processing systems for automatically classifying user behaviour in real time. For this, I implemented signal filters, developed feature extractors and trained various types of classification models, including neural networks and support vector machines.
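As a hedged illustration of such a processing chain (filter, feature extraction, classification), here is a toy Java sketch; all names are hypothetical, and the threshold rule merely stands in for a trained model such as an SVM or a neural network.

import java.util.Arrays;

// Illustrative sketch only: a toy chain of the kind described above
// (filter -> feature extraction -> classifier). All names are hypothetical;
// the "classifier" is a stand-in for a trained SVM or neural network.
public final class BehaviourPipelineSketch {

    /** Simple moving-average filter to smooth a raw sensor signal. */
    static double[] movingAverage(double[] signal, int window) {
        double[] out = new double[signal.length];
        for (int i = 0; i < signal.length; i++) {
            int from = Math.max(0, i - window + 1);
            double sum = 0;
            for (int j = from; j <= i; j++) sum += signal[j];
            out[i] = sum / (i - from + 1);
        }
        return out;
    }

    /** Extract two basic features: mean and variance of the filtered window. */
    static double[] extractFeatures(double[] window) {
        double mean = Arrays.stream(window).average().orElse(0);
        double var = Arrays.stream(window).map(x -> (x - mean) * (x - mean)).average().orElse(0);
        return new double[] { mean, var };
    }

    /** Stand-in for a trained model: here just a threshold rule. */
    static String classify(double[] features) {
        return features[1] > 0.5 ? "active" : "idle";
    }

    public static void main(String[] args) {
        double[] raw = { 0.1, 0.9, 0.2, 1.1, 0.3, 1.4, 0.2, 1.2 };
        double[] filtered = movingAverage(raw, 3);
        System.out.println(classify(extractFeatures(filtered)));
    }
}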
Languages
Besides Romanian, my mother tongue, I speak German and English fluently. Furthermore, I have basic knowledge of French and Georgian.
Portfolio
SSJ
SSJ is an extensible Android framework for real-time signal processing and classification in out-of-lab environments.
It enables the recording, processing and classification of sensor data from over 20 internal and external sensors.
For this, a wide array of signal processing tools is packaged in a flexible, mobile-friendly Java library that can easily be integrated into Android apps (a conceptual sketch of such a pipeline follows below).
Moreover, with the help of the SSJ Creator app, complex signal processing pipelines can be designed and executed without writing a single line of code.
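For readers curious how such a component-based pipeline can be wired together in code, the sketch below shows the general source/transformer/consumer pattern; the interfaces and names are purely illustrative and are not SSJ's actual API.

// Hypothetical sketch of the pipeline concept behind such a library; the
// interfaces and class names below are illustrative only, NOT SSJ's real API.
interface SignalSource  { float[] read(); }                 // e.g. an accelerometer wrapper
interface Transformer   { float[] apply(float[] frame); }   // e.g. a low-pass filter
interface FrameConsumer { void accept(float[] frame); }     // e.g. a classifier or logger

final class PipelineSketch {
    private final SignalSource source;
    private final Transformer transformer;
    private final FrameConsumer consumer;

    PipelineSketch(SignalSource s, Transformer t, FrameConsumer c) {
        this.source = s; this.transformer = t; this.consumer = c;
    }

    /** Pull one frame from the sensor, transform it and hand it to the consumer. */
    void step() {
        consumer.accept(transformer.apply(source.read()));
    }

    public static void main(String[] args) {
        PipelineSketch p = new PipelineSketch(
                () -> new float[] { 0.1f, 0.2f, 0.3f },      // fake sensor frame
                frame -> frame,                              // identity "filter"
                frame -> System.out.println(frame.length));  // trivial consumer
        p.step();
    }
}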
The Social Signal Interpretation (SSI) framework offers tools to record, analyse and recognise human behaviour in real time, such as gestures, facial expressions, head nods and emotional speech.
For this, it can extract and process data from multiple sensor devices in parallel.
SSI is also able to use machine learning techniques for the automatic classification of human behaviour.
Logue is an open-source application for augmenting social interactions by providing behavioural feedback in real time through different modalities: visual, auditory and haptic. The aim is to increase awareness of and improve the quality of one's own nonverbal behaviour.
Advanced Agent Animation (AAA) is an application designed for managing virtual social situations.
It provides extended support for manipulating virtual characters and simulating social interactions.
The behaviour (speech, gestures, postures, gaze) of each character can be customized to mimic various social characteristics such as gender, culture and personality.
The German-funded GLASSISTANT project aims to use smart glasses (e.g. Google Glass) to support persons with mild cognitive impairment (MCI).
In GLASSISTANT, an Android application continuously monitors the stress level of the user with the help of wearable sensors. The goal is to detect when the user is in need of assistance.
If an increased stress level is detected, the system automatically intervenes: it attempts to guide the user towards a more relaxed state, provides directions home or contacts a family member.
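As a rough illustration of this kind of escalation logic, here is a small hypothetical sketch; the thresholds, names and decision rules are assumptions for demonstration only, not the actual GLASSISTANT implementation.

// Illustrative sketch only: one possible escalation logic of the kind described
// above. Thresholds, names and rules are hypothetical assumptions.
public final class StressAssistantSketch {

    enum Intervention { NONE, RELAXATION_EXERCISE, GUIDE_HOME, CONTACT_FAMILY }

    /** Map an estimated stress level (0..1) to an assistance action. */
    static Intervention decide(double stressLevel, boolean userIsLost) {
        if (stressLevel < 0.4) return Intervention.NONE;
        if (userIsLost)        return Intervention.GUIDE_HOME;
        if (stressLevel < 0.8) return Intervention.RELAXATION_EXERCISE;
        return Intervention.CONTACT_FAMILY;
    }

    public static void main(String[] args) {
        System.out.println(decide(0.85, false)); // CONTACT_FAMILY
        System.out.println(decide(0.60, true));  // GUIDE_HOME
    }
}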
The EU-funded TARDIS project aims to build a job interview simulation platform that lets young people at risk of exclusion explore, practice and improve their social skills. In the simulation, virtual agents (VAs) act as recruiters in job interview scenarios.
The behaviour of the user is analysed in real time with the help of various sensors. This allows the VA to react to the user in a human-like fashion and facilitates post-hoc inspection of the interview by the user.