
Research Projects

UFO - User’s Flying Organizer
As Augmented Reality (AR) presents information registered to a 3D environment, and commonly relies on head-mounted displays (HMDs) or handheld devices, spatial AR is an unencumbered alternative: it uses visual projection techniques to augment surfaces directly in the environment. Until now, however, projectors have been stationary and unable to cover large areas with augmentation, even when extended by swivelling platforms. Project UFO addresses this problem by combining spatial AR with the field of mobile robotics. Specifically, by focusing on robotics, spatial augmented reality and human-machine interaction, the project aims for an entirely novel AR user experience in which a small semi-autonomous MAV with on-board projection devices creates personal projection screens on arbitrary environmental surfaces.
2015–2017
Diagnosis of Tumor Heterogeneity - a New Control Factor for the Therapy of Colon Carcinoma?
Colon carcinoma is one of the most common cancers worldwide and, despite advances in treatment, is almost always fatal once metastases have formed. According to international standards, pathohistological examination is currently decisive for the therapeutic approach. For patients with advanced-stage tumors, therapies targeting the tumor's mutation status have recently become available, but they do not take possible tumor heterogeneity into account. Tumor clones that currently go undetected are held responsible for the frequent lack of therapeutic response and for tumor progression. The proposed project will apply new sensing methods to contribute to a cost-efficient and reliable determination of the genetic diversity of colon carcinomas. Statistical methods and bioinformatic analysis of the genetic profiles will establish the frequency and the prognostic significance of tumor heterogeneity for the biological behavior of the tumors and for their response to specific oncological therapies. Dedicated visualization techniques developed at TUG will make the collected wealth of data comprehensible and usable for pathologists and clinical oncologists. A comprehensive genetic tumor analysis requires the patient's full informed consent, which is inseparably tied to understanding and approving the methods used. A further project goal is therefore to study the expectations and hopes, but also the reservations and fears, that should inform patient counselling and education and that reflect patients' differing attitudes towards the diagnostic procedures.
These new diagnostic procedures will predict therapeutic response far more accurately than current methods, spare patients the side effects of ineffective drugs, and thereby also contribute to cost reductions in the health-care system.
2012–2014
AUGUR: portable AR visualization of structure within structure using high precision detection

This project aims to develop portable measurement tools with in-situ visualization for the construction industry. A future measurement tool will provide a direct augmented reality view of measured properties over the real environment together with instructions as to where and how a certain task can be completed. For example, a metal detection tool should be able to provide direct visual feedback on the location of hidden metallic structures over a live video view of the inspected wall area. Furthermore it can guide a construction engineer to the optimal position for drilling a hole, avoiding any damage to existing structures.

Thus the tools should combine information from several sources to provide interactive and context-aware guidance: measurements from built-in sensors; location awareness through online tracking and registration; and spatial, semantic information retrieved from a building information model (BIM). At the same time, future tools need to be simple enough for non-expert users; the system therefore needs to be intuitive and guide users through the correct operation to fulfil their tasks. To accomplish this goal, the project addresses the following challenges:

  • Tracking for mobile devices in changing and unknown environments for correct visual overlays. We will investigate the combination of visual online reconstruction methods with range finders and coarse models for absolute registration.
  • X-Ray visualization of hidden and abstract information in unknown environments. Here we will investigate automatic approaches that take the environment’s appearance and the virtual information into account to select the best visualization method.
  • User guidance based on measured and plan information requires automatic analysis of the spatial arrangements and automatic visualization.
2012–2013
CONSTRUCT: Construction Site Monitoring and Change Detection using UAVs

The goal of the project is to develop methods for modeling and surveying large construction sites. The project will make use of unmanned aerial vehicles and existing stationary or pan-tilt-zoom cameras at the construction site to provide accurate 3D models of the whole site on a regular basis, generating a 4D data set (3D + time). This data can then be used for documentation, for visualization (a mobile augmented reality system will overlay, e.g., the plan or a model of the building), and for measurement (e.g., how much material has been transported). From a scientific point of view, the following tasks have to be solved:

  • Dense 3D reconstruction from highly overlapping data; we will use variational methods implemented on the GPU.
  • Accurate registration of subsequent models over time. Since the 3D reconstruction changes by definition, the method needs to handle this; it is an instance of the highly relevant 3D model updating problem.
  • Integration of multiple camera sources. Using the 3D model together with additional cameras poses the problem of localizing those cameras with respect to the model, which is again an instance of the registration problem.
  • Development of a handheld AR platform for visualization. In order to use AR technology, the pose of the platform with respect to the model and the reconstruction needs to be determined.
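The registration subtask above can be illustrated in isolation. Assuming point correspondences between two reconstructions are already known (in practice they must be found despite scene changes), the rigid transform aligning them has a closed-form least-squares solution, the Kabsch algorithm. This is only a sketch of that alignment step, not the project's method:

```python
import numpy as np

def rigid_align(src, dst):
    """Estimate rotation R and translation t minimizing ||R @ src + t - dst||
    over corresponding 3D points (Kabsch algorithm)."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    H = src_c.T @ dst_c
    U, _, Vt = np.linalg.svd(H)
    # Correct for a possible reflection so R is a proper rotation.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# Align a point cloud against a rotated and shifted copy of itself.
rng = np.random.default_rng(0)
pts = rng.normal(size=(100, 3))
angle = 0.3
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([1.0, -2.0, 0.5])
moved = pts @ R_true.T + t_true
R_est, t_est = rigid_align(pts, moved)
```

The real 3D model updating problem is harder precisely because the scene changes between surveys, so correspondences are partial and noisy.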
2011–2014
HOLISTIC: Holistic Aerial Scene Understanding Using Highly Redundant Data

The aim of this research project is holistic scene understanding in large aerial datasets consisting of thousands of massively redundant high-resolution images. Holistic scene understanding is one of the major problems in computer vision and photogrammetry and has recently received a lot of attention. It comprises two fundamental tasks: 3D scene reconstruction and semantic interpretation of the imaged content at the level of pixels. The tight interaction between semantic classification and 3D reconstruction is often ignored by state-of-the-art aerial image processing workflows, due to a lack of computational power, the absence of efficient algorithms, or the enormous effort of manual intervention. However, the tasks are mutually informative and should be solved jointly: a correct classification is a valuable source of information for reconstruction in regions where dense matching methods fail (e.g. sheets of water and reflecting windows or facades), while 3D information can serve as a prior to improve classification (e.g. building and road detection). The high resolution and the redundancy due to large overlaps of aerial images require massive processing power, which will be handled by graphics processing units; these have proved to give a significant speedup compared to single-core machines. In particular, we will focus on algorithms based on variational methods, which offer a high degree of parallelism. To reduce cost-intensive manual interaction, we will further exploit publicly available user data from the Internet to improve both interpretation and 3D reconstruction.

In the HOLISTIC project we will provide a flexible framework for scene classification and 3D reconstruction from aerial images that outperforms the current state of the art and delivers interpretable models at the highest possible accuracy. To achieve this goal, we will focus on two research subjects: (i) the joint optimization of geometry and semantic classification from aerial images in a unified framework, and (ii) the exploitation of existing geographic information systems and web data to support these two sub-tasks. In addition, we will use web-based standards to represent the obtained results efficiently for fast modeling and data parsing.

2011–2014
Caleydoplex - Information Exploration in Teams

Critical decisions involving a lot of data are rarely made by a single person; they are rather discussed and evaluated by a team of experts. Examples are doctors deciding on the treatment of a severe illness, emergency services having to react to ongoing crises, or engineers collaborating to make technical decisions concerning expensive products. These activities can be assisted by information visualization tools. However, traditional information visualization rarely considers the collaborative nature of data analysis tasks. The foundation of our research proposal is the extension of a multiple-view visualization system to a multi-display environment. Multiple-view visualization shows data in different representations and thereby accommodates different knowledge backgrounds and user preferences. Multi-display environments turn unused wall and table spaces into interactive surfaces using off-the-shelf projection technology and integrate private workstations smoothly into this shared interactive workspace. Our research aim is the design and creation of a co-located collaborative information visualization workspace dealing with two principal challenges: display space management and collaborative interaction techniques. Intelligent display space management adapts information visualizations and the placement of views automatically to the physical display properties and supports the users' interaction with the environment. Combined with visual linking of related data entities distributed across the environment, it will help to establish a common knowledge ground. Collaborative interaction techniques are required to organize such a rich, but potentially complex environment. We will investigate high-level activity support for typical tasks in shared information workspaces and how users can maintain awareness of each other's activities.
The proposed research benefits from two ongoing projects at Graz University of Technology: Deskotheque delivers the basic technology necessary for collaborative work in multi-display environments, while Caleydo, a visualization project from the biomedical domain, provides an excellent use case, including the necessary experts willing to collaborate in studies. Using these frameworks, we plan to conduct several usability studies, with prototypes of different levels of sophistication. This research is part of the project Caleydo.

2011–2014
Managed Volume Processing (MVP)
Volumetric data is very common in medicine, geology and engineering, but the high complexity of data and algorithms has prevented widespread use of volume graphics. Recently, however, 3D image processing and visualization algorithms have been parallelized and ported to graphics processing units (GPUs). This proposal is concerned with new ways of designing volume graphics algorithms for the GPU that can interactively cope with these huge problems through better utilization of GPU capacity. Unfortunately, only certain parts of common image or volume processing algorithms can be mapped to the standard GPU stream processing model. For most real-world problems, writing programs for this architecture is a tedious task. As a result, most algorithms use the available processing power only for small subtasks: the number crunching in inner loops. For example, direct volume rendering (DVR) methods send rays into a volumetric object, accumulate intensities, divide rays into sub-rays, scatter rays in materials and/or extract certain features. All GPU implementations of DVR use one processing unit per pixel, regardless of whether that pixel requires very complex calculations or not. This strategy frequently leads to strong load imbalances. A particular problem of interactive applications such as volume graphics is that they are not traditional number-crunching tasks, which only require optimal computational throughput and have relaxed or no latency constraints. On the contrary, interactive applications must meet real-time deadlines to ensure interactive response. This is a classical real-time resource scheduling problem. It can only be solved by adaptive algorithms that rely on complex flow control and memory management decisions during parallel execution. Both are currently available only on the CPU, which allows access to privileged mode through the operating system.
On the GPU, components for high-level scheduling involving latency hiding and memory management are missing or inaccessible. The desired full utilization of the GPU is very difficult to achieve for complex graphics algorithms with real-time demands. Building a toolset that allows harvesting the full GPU power for a general class of real-time volume graphics algorithms is the main goal of this proposal. We propose a managed volume processing system that incorporates the missing components. Its key modules are a task model, a workload scheduler with real-time capabilities, and a virtual memory management system executed in tandem on the GPU and CPU. We will rely on the most recent hardware developments and use OpenCL as the standardized interface to access them.
2011–2014
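The load-imbalance claim can be made concrete with a toy model of lockstep execution; the numbers below are illustrative, not measurements of any real renderer:

```python
import numpy as np

# GPU threads execute in groups ("warps") of 32 in lockstep: a warp occupies
# its processing unit until the slowest of its 32 rays finishes, so
# heterogeneous per-pixel workloads leave the other lanes idle.
rng = np.random.default_rng(1)
steps = rng.integers(10, 1000, size=32)   # ray-marching steps per pixel
useful = steps.sum()                      # work actually needed
occupied = 32 * steps.max()               # lane-cycles the warp holds
utilization = useful / occupied           # fraction of held cycles doing work

uniform = np.full(32, 500)                # equal workloads: no waste
uniform_util = uniform.sum() / (32 * uniform.max())
```

With equal workloads utilization is 1.0; with rays of widely varying length most lane-cycles are wasted, which is exactly what an adaptive task scheduler would reclaim.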
Smart Reality - Innovation Network for Smart Applications and Media

The market for mobile media services will expand significantly in the coming years. The explosion in smartphone usage and the growth of the application store model for selling individual services to smartphone users open a new and attractive market for developers of simple, useful applications. New revenue streams can be created by in-application one-click purchasing. A camera on an Internet-connected smartphone makes it possible to have a live video stream of the user's reality augmented by content and services from the Web. Location-based services and augmented reality are seen as potential killer applications of the mobile Internet, because users can access additional information related to where they are, what they are seeing, or what they are doing, as well as instantly purchase related services and content.

For example, instead of just seeing a street poster for a club night and passing by, this new paradigm opens up instant access via the Internet-enabled smartphone to the club's location, to purchasing an entrance ticket, or to listening to and buying the DJ's mixes. A new co-operation network - the Innovation Network for Smart Applications and Media - will bring key Austrian R&D and innovative SMEs together to realize this new paradigm for smart mobile and media applications, which we call Smart Reality, and to be the first to benefit commercially from it.

2010–2012
AR4DOC - Augmented Reality for Document Inspection

Smartphones have evolved considerably in processing power over the last years. They now feature multi-core CPUs as well as GPUs and consumer-quality cameras up to HD resolution. This makes them an interesting platform for graphics and vision and opens new opportunities for research.

The aim of AR4DOC is to facilitate the task of document inspection by a human operator. This requires the person to have detailed knowledge about the nature of a document, which may be outdated or even unavailable at the time of inspection.

We seek to provide this information in an interactive way using Mobile Augmented Reality (AR), so that a well-grounded decision on the validity of a document is possible. This involves several tasks such as document localization, recognition, tracking, presentation, and interaction.

2010–2013
PEGASUS: Autonomous Inspection of Overhead Power Lines using an Unmanned Aerial Vehicle

The aim of the PEGASUS project is to develop a mobile vision system for overhead power line inspection to be mounted on an unmanned aerial vehicle (UAV). The long-term goal is a fully autonomous aerial vehicle able to perform power line inspection in an automated manner. This goal requires innovative solutions to a number of problems, such as visual navigation, visual tracking and obstacle detection, and model-based inspection under harsh conditions. In addition, because we use a small-scale UAV (e.g. a quad-rotor helicopter), computational resources for algorithms that need to be executed on the UAV are restricted (especially for navigation and tracking). Within PEGASUS we want to make significant progress towards this long-term goal. In particular, PEGASUS will provide a set of tools for the inspector. The project is organized in four phases: First, an inspection system for a single power tower is developed. Used in ground-based inspection, the UAV provides close-up views of all points of interest from an optimal viewpoint. Second, we want to implement an automatic visual inspection system which highlights possibly faulty components. In a third step, the system is extended towards multiple towers (still within sight of the operator). Finally, the system will be used as a handheld system in manned helicopters by power line inspectors, where it should dramatically reduce the time needed for inspection. From a research perspective we will develop novel solutions for model-based recognition and pose estimation, visual navigation including obstacle avoidance, and automated model-based visual inspection. All of these problems are extremely challenging because of the uncontrolled conditions (illumination etc.) and the real-time requirements. If successful, the methods developed in PEGASUS will be usable beyond the task of power line inspection.

2010–2013
Mobi-Trick

The focus of the project is outdoor mobile computer vision with all of its challenges. Mobile systems need to be compact and energy efficient and frequently change location; they must therefore be autonomous and perform processing locally. A number of challenges arise from these requirements, for which the project aims to provide solutions. Being compact, there is not much space for a large number of sensors such as laser scanners, radar antennas and the like. The work in this project will therefore focus on stereo vision, but with two different types of cameras: often a second camera is already available, and stereo information increases detection accuracy. Each time the system moves it needs to adapt to the changing situation, which requires adaptive calibration and online learning. Mobile systems often run from batteries, and there is not much space for intricate cooling systems, so the system must be designed to be very energy efficient; new approaches to dynamic power management will be explored in the project. To put the work into context, several applications from the area of traffic surveillance and toll enforcement will be implemented and tested in an application-oriented setting.
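As a minimal illustration of the stereo component (a naive sketch, not the project's algorithm), per-pixel disparity can be estimated from a rectified pair by block matching:

```python
import numpy as np

def disparity_sad(left, right, max_disp, win=3):
    """Per-pixel disparity via sum-of-absolute-differences block matching.
    Assumes rectified images: a point at column x in `left` appears at
    column x - d in `right`."""
    h, w = left.shape
    pad = win // 2
    disp = np.zeros((h, w), dtype=int)
    for y in range(pad, h - pad):
        for x in range(pad + max_disp, w - pad):
            patch = left[y - pad:y + pad + 1, x - pad:x + pad + 1]
            costs = [np.abs(patch - right[y - pad:y + pad + 1,
                                          x - d - pad:x - d + pad + 1]).sum()
                     for d in range(max_disp + 1)]
            disp[y, x] = int(np.argmin(costs))
    return disp

# Synthetic rectified pair: the right view is the left view shifted by 4 px.
rng = np.random.default_rng(0)
left = rng.random((20, 50))
right = np.roll(left, -4, axis=1)
disp = disparity_sad(left, right, max_disp=8)
```

A real deployment with two different camera types would additionally need photometric normalization and the adaptive calibration mentioned above.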

Current traffic enforcement solutions are either very large and costly (section control, toll enforcement) or do not offer much in terms of image processing (radar speed control). The technological output of Mobi-Trick makes it possible to design mobile solutions for traffic monitoring, vehicle identification and classification, intelligent incident detection, and observation of driver behavior. Mobile devices are also more effective in enforcement: their transient nature makes them less predictable. Mobile systems can also react more flexibly to changing road situations such as construction sites.

2010–2013
HD-VIP: High Definition Video Processing

The growth of information is nowadays enormous and at a level never reached before: we currently produce nearly as much data in a single year as was produced in the entire previous history of mankind. In particular, the trend towards full digitization of audiovisual content is contributing to this explosion of available material. The exponential growth of online video, most notably on YouTube among the many prominent video portals, is just one example. Even if international studies do not arrive at exactly the same results, the figures are impressive: digital production in 2006 was approximately 160 exabytes and is predicted to rise to 990 exabytes in 2010.

Any video processing/editing software has to keep pace with these extraordinary data rates, which requires special efforts in both hardware and software. Fortunately, we also see an extraordinary increase in processing power, especially in recent developments of graphics cards (GPUs). These cards offer massive parallelism (ideally suited for video processing) at a rather modest price, making this hardware an ideal candidate for video processing. But in order to make full use of the hardware, the algorithms have to be highly parallel. Typical tasks encountered in video processing (which will also be tackled by the proposed project) are:

Superresolution: With the advent of HDTVs in many homes there is an increasing need to also produce HDTV content. In order to make use of existing (low-resolution) material one can use so-called superresolution algorithms. These methods generate a high-resolution image from a sequence of low-resolution frames by exploiting the high inter-frame redundancy.
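In the idealized noiseless case with known sub-pixel shifts (real methods must estimate the shifts and handle blur and noise), the principle reduces to shift-and-add: registered low-resolution frames jointly fill the fine grid. A minimal 1-D sketch:

```python
import numpy as np

# Two half-resolution frames of the same scene, offset against each other by
# one high-res sample, jointly contain every sample of the fine grid.
hr = np.sin(np.linspace(0.0, 6.0 * np.pi, 100))   # "true" high-res signal
frame_a = hr[0::2]                                # low-res frame, no shift
frame_b = hr[1::2]                                # low-res frame, half-pixel shift

# Shift-and-add: place the registered low-res samples onto the fine grid.
sr = np.empty_like(hr)
sr[0::2] = frame_a
sr[1::2] = frame_b
```

Here the high-resolution signal is recovered exactly; with noise and unknown shifts, the reconstruction becomes the kind of energy-minimization problem discussed below.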

Denoising: There are many sources of noise in a video: the material may be historic, or noise is added during production, compression, etc. A basic task is to remove the noise while still preserving all fine-scale details.

Interactive video editing: For post-production purposes one wants to mark objects in a video (the object should only be marked in a single frame and then be segmented automatically in all subsequent frames) and either remove them (which requires inpainting methods to fill the holes with meaningful content), place them somewhere else in the video, or replace them with different objects. Since these tasks are done interactively, this requires interactive frame rates.

Fortunately, all of these tasks can be addressed by so-called variational methods. The basic idea is to formulate the task as the minimization of a suitable energy functional. Among other desirable properties, these methods can be implemented in a highly parallel fashion, which makes them ideal candidates for implementation on modern GPUs.
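A minimal sketch of this recipe (illustrative only; practical pipelines use far more sophisticated primal-dual solvers on the GPU): denoising posed as minimization of a ROF-style energy, here with a smoothed total-variation term so that plain gradient descent applies:

```python
import numpy as np

def energy(u, f, lam, eps=1e-2):
    """Data fidelity plus smoothed total variation of u."""
    return 0.5 * np.sum((u - f) ** 2) + lam * np.sum(np.sqrt(np.diff(u) ** 2 + eps))

def tv_denoise(f, lam=0.5, steps=500, tau=0.05, eps=1e-2):
    """Minimize the energy above by plain gradient descent."""
    u = f.copy()
    for _ in range(steps):
        du = np.diff(u)
        w = du / np.sqrt(du ** 2 + eps)   # derivative of the smoothed |du|
        grad = u - f                      # data term
        grad[:-1] -= lam * w              # |u[i+1]-u[i]| contributes to u[i] ...
        grad[1:] += lam * w               # ... and to u[i+1]
        u = u - tau * grad
    return u

rng = np.random.default_rng(2)
clean = np.repeat([0.0, 1.0, 0.0], 50)    # piecewise-constant signal
noisy = clean + 0.2 * rng.normal(size=clean.size)
denoised = tv_denoise(noisy)
```

Every pixel's update depends only on its neighbors, which is why such schemes map so well onto GPU parallelism.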

2010–2012
Higher Order Variational Methods

This research project is devoted to the study of higher-order convex variational methods for problems in computer vision. First-order methods, i.e. methods which take into account first-order derivatives, have shown great success for a variety of inverse computer vision problems. This success is mostly due to the introduction of total variation methods by Rudin, Osher and Fatemi in 1992. Total variation methods have the important property of preserving sharp discontinuities in the solution while the associated optimization problem remains convex. This leads to robust solutions, independent of any initialization. Total variation methods also have disadvantages, however. First, they favor piecewise constant solutions, which leads to staircasing artifacts in image restoration problems and to a preference for fronto-parallel structures in stereo problems. Second, they introduce a shrinking bias in shape optimization problems. The aim of this project is therefore to study higher-order convex variational methods in order to overcome the shortcomings of first-order methods. We propose to investigate two approaches. The first is based on the generalized total variation method recently introduced by Bredies, Kunisch and Pock, which provides a framework to recover piecewise polynomial functions with a convex functional; we expect this method to lead to significant improvements in stereo and motion estimation. The second is based on the roto-translation space introduced by Citti and Sarti in 2006, which allows functionals incorporating curvature regularity to be rewritten as convex first-order functionals in higher dimensions; we expect this approach to significantly improve the performance of various shape optimization problems.
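The staircasing bias has a simple numerical explanation: the first-order total variation of a monotone signal depends only on its total rise, so TV cannot distinguish a smooth ramp from a staircase, whereas a second-order penalty (in the spirit of total generalized variation) can. A small illustration, not the project's actual formulation:

```python
import numpy as np

tv1 = lambda u: np.sum(np.abs(np.diff(u)))        # first-order total variation
tv2 = lambda u: np.sum(np.abs(np.diff(u, n=2)))   # second-order penalty

ramp = np.linspace(0.0, 1.0, 11)                  # smooth linear slope
stairs = np.array([0.0, 0.0, 0.25, 0.25, 0.5, 0.5,
                   0.75, 0.75, 1.0, 1.0, 1.0])    # staircase, same total rise

# tv1 assigns both signals the same cost (the total rise of 1.0), so a
# TV regularizer has no reason to prefer the ramp; tv2 is zero on the
# ramp but strictly positive on the staircase.
```

This is exactly why piecewise polynomial models such as generalized total variation can remove staircasing without giving up convexity.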

2010–2013
Highly accurate range computation in driver assistance systems

In this project we study variational methods for computing highly accurate range data in driver assistance systems.

2010–2011
Image Processing and Statistical Learning

The goal of this project is to study statistical learning methods, in particular boosting and random forests, for computer vision tasks. We are especially interested in on-line learning.
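As a minimal illustration of the on-line setting (using a perceptron for simplicity, not the boosting or random-forest methods studied in the project), the model is updated one sample at a time instead of being fit to a stored batch:

```python
import numpy as np

def perceptron_train(X, y, max_epochs=500):
    """Mistake-driven perceptron: weights are updated one sample at a time
    (the on-line setting) until a full error-free pass over the stream."""
    Xb = np.hstack([X, np.ones((X.shape[0], 1))])   # append bias feature
    w = np.zeros(Xb.shape[1])
    for _ in range(max_epochs):
        mistakes = 0
        for xb, yi in zip(Xb, y):
            if yi * (w @ xb) <= 0:     # misclassified: update immediately
                w += yi * xb
                mistakes += 1
        if mistakes == 0:              # converged on separable data
            break
    return w

# Linearly separable toy data with an enforced margin.
rng = np.random.default_rng(3)
X = rng.normal(size=(300, 2))
X = X[np.abs(X[:, 0] + X[:, 1]) > 0.3]
y = np.where(X[:, 0] + X[:, 1] > 0, 1.0, -1.0)
w = perceptron_train(X, y)
```

On-line boosting follows the same pattern, replacing the single linear update with incremental updates of an ensemble of weak learners.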

2009–2010
KIRAS - SECRET

Authorities such as the Ministry of the Interior often need to find certain event or behavior patterns in the recordings of large video archives. This "forensic" search is computationally extremely expensive and, due to restricted storage permissions, often not even possible. Thus, security-critical events can often neither be prevented nor prosecuted afterwards. To overcome these problems, the aim of the SECRET project is to investigate algorithms, methods, and processes that ease the work of security staff in searching for and pursuing events in video archives, and to make these tasks more efficient.

Based on the requirements of the Ministry of the Interior as well as the possibilities of an infrastructure operator, these issues will be examined and a research prototype created, in cooperation between AIT and ICG (Graz University of Technology) as research partners and ASE as an industrial partner. Essential research subjects are: (i) detection and segmentation of people, (ii) comparison and retrieval of events across different video streams, and (iii) analysis and learning of behavior patterns. In addition, social-scientific acceptance research will be carried out by the research institute of the Red Cross (FRK). Based on these results, recommendations will be compiled for optimizing the system's use and minimizing potential problems from a social-scientific point of view.

2009–2012
Narkissos - Virtual Dressing Room
The main goal of NARKISSOS is to develop the next-generation "magic mirror" to be installed in the dressing room of a fashion store. The magic mirror is a technical multimedia system in which consumers can watch themselves on a video wall dressed in clothes chosen via a touch board or registered via an RFID tag (embedded in the clothing) at an RFID reader stationed near the video wall of the virtual dressing room. Users can interactively change the shape and appearance of the clothing in the mirror image without actually having to change clothes. Customers can also observe themselves (i.e., their avatar) from every side instantaneously.
2009–2012
OUTLIER

The ever increasing number of cameras in surveillance systems requires automatic video analysis in order to spot critical situations and alert the monitoring personnel in a timely manner. While most current approaches in this area aim to detect a large number of specific events across a large set of complex application scenarios, the goal of this project is to go far beyond the state of the art by developing novel online learning methods that detect unusual situations in a camera-specific scenario. We will exploit the huge amount of data available for a specific camera to reliably learn usual and unusual situations.

In particular the OUTLIER project will carry out basic research in the following areas:

  • Improved unsupervised learning methods for huge amounts of data
  • Novel methods for semi-supervised learning in huge amounts of unlabeled data

These generic learning algorithms will be applied to the detection of unusual situations in public places and traffic scenarios. Examples are the detection of unusual crowd behavior (rising panic, barred emergency exits, or fallen persons), suspicious behavior of pedestrians (e.g. going from one car to another, loitering), vehicles or persons moving in unusual locations, the detection of unusual types of moving objects, and the detection of unusual situations such as accidents, clashes and collisions. Unlike other approaches, we do not want to model these situations explicitly and individually; instead we will learn to discriminate the usual situation from the unusual one.
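The learning-from-usual idea can be sketched with a deliberately simple statistical model (a per-feature Gaussian fitted to unlabeled data; the project targets far richer unsupervised and semi-supervised methods):

```python
import numpy as np

# Plenty of camera-specific "usual" observations, e.g. (object speed, object size).
rng = np.random.default_rng(4)
usual = rng.normal(loc=[1.0, 0.2], scale=[0.3, 0.05], size=(5000, 2))

mu = usual.mean(axis=0)       # model of the usual situation ...
sigma = usual.std(axis=0)     # ... learned without any labels

def is_unusual(x, k=4.0):
    """Flag an observation if any feature lies more than k standard
    deviations from the usual mean."""
    return bool(np.any(np.abs((x - mu) / sigma) > k))

normal_event = np.array([1.1, 0.22])   # within the usual range
odd_event = np.array([3.0, 0.20])      # far above usual speeds
```

The point is that no unusual event was ever modeled explicitly: anything sufficiently far from what the camera habitually sees is flagged.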

Research partners in the project are JRS, TUG for basic and applied research and Siemens for industrial exploitation of project results.

2009–2011
Multimedia Documentation Lab

For the first time within the scope of Austrian security research, the potential of integrating multimedia content into the analysis of security-relevant affairs is being researched. The project's goal is to harvest audio-visual information from specified open multimedia sources, such as TV broadcasts, and to allow integration into existing environments at user sites. The intended use of the system is to allow experts to efficiently generate more realistic and high-quality situation reports in the face of critical situations. Subsequently, these can be employed for communication with the population of Austria and to increase its security and sense of security, which are target goals of the KIRAS framework. An exemplary prototype will be installed at the Zentraldokumentation of the Austrian Armed Forces. In terms of audio processing the project builds upon existing technologies of the industrial partner, while the visual processing is investigated by ICG as academic partner and will mainly deal with person/face detection, tracking and recognition methods.

2009–2011
inGeneious - Holistic Visualization of Biomolecular and Clinical Data

The goal of the inGeneious project is to develop visualization methods and workflows that support biologists and physicians in analyzing biomolecular data in the context of clinical factors and biological processes. Taking these factors into account when analyzing, for example, gene expression data is crucial, as it allows conclusions to be drawn about the relationship between genetic predisposition and disease progression. The inGeneious project addresses two central research questions. First, a holistic view of the three data spaces is to be enabled through multiple-view techniques and efficient visual linking of information. Building on this, a comparative analysis of diverging groups is to be enabled by new comparative visualization methods. Experts thereby obtain a tool to use the ever-growing amount of biomolecular data efficiently. This research is carried out within the Caleydo project.

2009–2011