Tutorials

The role of the tutorials is to provide a platform for more intensive scientific exchange among researchers interested in a particular topic and to serve as a meeting point for the community. Tutorials complement the depth-oriented technical sessions by providing participants with broad overviews of emerging fields. A tutorial can be scheduled for 1.5 or 3 hours.

TUTORIALS LIST

Egocentric (First-Person) Vision  (VISIGRAPP)
Lecturer(s): Giovanni Maria Farinella

Bayesian and Quasi Monte Carlo Spherical Integration for Illumination Integrals  (VISIGRAPP)
Lecturer(s): Kadi Bouatouch, Ricardo Marques and Christian Bouville

Image Quality Assessment based on Machine Learning for the Special Case of Computer-generated Images  (VISIGRAPP)
Lecturer(s): Andre Bigand

Natural Human-Computer-Interaction in Virtual and Augmented Reality  (VISIGRAPP)
Lecturer(s): Manuela Chessa

Perception for Visualization: From Design to Evaluation  (VISIGRAPP)
Lecturer(s): Haim Levkowitz

Depth Video Enhancement  (VISIGRAPP)
Lecturer(s): Djamila Aouada



Egocentric (First-Person) Vision


Lecturer

Giovanni Maria Farinella
Università di Catania
Italy
 
Brief Bio
Giovanni Maria Farinella received the M.S. degree in Computer Science (egregia cum laude) from the University of Catania, Italy, in 2004, and the Ph.D. degree in Computer Science in 2008. He joined the Image Processing Laboratory (IPLAB) at the Department of Mathematics and Computer Science, University of Catania, in 2008. He is an Adjunct Professor of Computer Science at the University of Catania (since 2008) and a Contract Professor of Computer Vision at the Academy of Arts of Catania (since 2004). His research interests lie in the fields of computer vision, pattern recognition and machine learning. He has edited five volumes and coauthored more than 90 papers in international journals, conference proceedings and book chapters. He is a co-inventor of four international patents. He serves as a reviewer for major international journals and on the programme committees of international conferences. He founded (in 2006) and currently directs the International Computer Vision Summer School (ICVSS). More information: www.dmi.unict.it/farinella
Abstract

In the coming years, a huge number of images and videos related to our daily lives will be acquired with wearable cameras. The increasing use of egocentric (first-person) cameras poses new challenges for the computer vision community and creates opportunities to build new applications with commercial potential. This tutorial will give an overview of the advances in the field of Egocentric (First-Person) Vision. Challenges, applications and algorithms will be discussed with reference to both past and recent literature.

Secretariat Contacts
e-mail: visigrapp.secretariat@insticc.org

Bayesian and Quasi Monte Carlo Spherical Integration for Illumination Integrals


Lecturers

Kadi Bouatouch
IRISA
France
 
Brief Bio
Professor Kadi Bouatouch is an electronics and automatic systems engineer (ENSEM 1974). He was awarded a PhD in 1977 (University of Nancy 1) and a higher doctorate in computer science, in the field of computer graphics, in 1989 (University of Rennes 1). He works on global illumination, lighting simulation for complex environments, GPU-based rendering and computer vision. He is currently a Professor at the University of Rennes 1 (France) and a researcher at IRISA Rennes (Institut de Recherche en Informatique et Systèmes Aléatoires), where he heads the FRVSense team. He is a member of Eurographics, ACM and IEEE. He has served on the program committees of several conferences and workshops, and as a referee for several computer graphics journals, such as The Visual Computer, ACM Transactions on Graphics, IEEE Computer Graphics and Applications, IEEE Transactions on Visualization and Computer Graphics, and IEEE Transactions on Image Processing. He has also acted as a referee for many conferences and workshops, and has served as an external reviewer for several PhD theses and higher doctorates in France and abroad (USA, UK, Belgium, Cyprus, The Netherlands, Spain, etc.). He is an associate editor of The Visual Computer journal.
Ricardo Marques
Universitat Pompeu Fabra, Barcelona
Spain
 
Brief Bio
Ricardo Marques received his MSc degree in Computer Graphics and Distributed Parallel Computation from Universidade do Minho, Portugal, in the fall of 2009, after which he worked as a researcher at the same university. He joined INRIA (Institut National de Recherche en Informatique et Automatique) and the FRVSense team as a PhD student in the fall of 2010 under the supervision of Kadi Bouatouch. His thesis work focused on spherical integration methods applied to light transport simulation. He defended his PhD thesis in the fall of 2013 and joined the Mimetic INRIA research team as a research engineer in 2014, where he worked in the field of crowd simulation. In the fall of 2015 he joined the Interactive Technologies Group (GTI) of Universitat Pompeu Fabra (UPF) in Barcelona as a postdoc. In August 2016 he received a Marie Curie Fellowship.
Christian Bouville
IRISA
France
 
Abstract

The tutorial addresses two quadrature methods: Quasi-Monte Carlo (QMC) and Bayesian Monte Carlo (BMC). These two approaches are applied to computing the shading integral in global illumination. First, we will show that Bayesian Monte Carlo can significantly outperform importance-sampling Monte Carlo through a more effective use of the information produced by sampling. As for QMC, we will show that QMC methods exhibit a faster convergence rate than classic Monte Carlo methods. This feature has made QMC prevalent in image synthesis, where it is frequently used for approximating the value of spherical integrals (e.g. the shading integral). In this tutorial we present a strategy for producing high-quality QMC sampling patterns for spherical integration by resorting to spherical Fibonacci point sets.
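
To give a concrete flavour of the QMC part, here is a minimal sketch in Python (an illustration added to this page, not the lecturers' code) that builds a spherical Fibonacci point set on the hemisphere and uses it to estimate a toy shading integral for an assumed cosine-lobe light whose exact value is known:

import numpy as np

def fibonacci_hemisphere(n):
    # n quasi-uniformly distributed directions on the upper hemisphere (z >= 0)
    golden = (1.0 + np.sqrt(5.0)) / 2.0
    i = np.arange(n)
    z = 1.0 - (i + 0.5) / n            # stratified in z over (0, 1)
    phi = 2.0 * np.pi * i / golden     # golden-ratio spacing in azimuth
    r = np.sqrt(1.0 - z * z)
    return np.stack([r * np.cos(phi), r * np.sin(phi), z], axis=1)

def incoming_radiance(w):
    # toy environment light: a smooth lobe around the surface normal (0, 0, 1)
    return w[:, 2] ** 4

n = 1024
w = fibonacci_hemisphere(n)
# QMC estimate of the shading integral: (hemisphere area / n) * sum of integrand
estimate = (2.0 * np.pi / n) * np.sum(incoming_radiance(w) * w[:, 2])
print(estimate)  # integral of cos^5(theta) over the hemisphere = 2*pi/6 ~ 1.0472

For a smooth integrand like this one, the estimate lands very close to the analytic value already at modest sample counts, illustrating the fast convergence referred to above.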

Secretariat Contacts
e-mail: visigrapp.secretariat@insticc.org

Image Quality Assessment based on Machine Learning for the Special Case of Computer-generated Images


Lecturer

Andre Bigand
ULCO
France
 
Brief Bio
André Bigand (IEEE Member) received the Ph.D. degree in 1993 from the University Paris 6 and the HDR degree in 2001 from the Université du Littoral in Calais (ULCO, France). He has been a senior associate professor at ULCO since 1993. His current research interests include uncertainty modeling and machine learning, with applications to image processing and synthesis (particularly noise modeling and filtering). He is currently with the LISIC Laboratory (ULCO). He is author or co-author of 120 scientific papers in international journals and books, and of communications to conferences with reviewing committees. He has 33 years of teaching and lecturing experience. He is a visiting professor at UL (Lebanese University), where he teaches machine learning and pattern recognition in the STIP research master.
Abstract

Unbiased global illumination methods based on stochastic techniques provide photorealistic images. They are, however, prone to noise that can only be reduced by increasing the number of computed samples. The problem of finding the number of samples required to ensure that most observers cannot perceive any noise is still open, since the ideal image is unknown. Image quality assessment is well understood for natural scene images, and this is summed up in the tutorial introduction. Image quality (or noise) evaluation of computer-generated images is slightly different, since image acquisition is different. In this tutorial we address this problem focusing on the visual perception of noise. Rather than using known perceptual models, we investigate machine learning approaches classically used in the Artificial Intelligence area as full-reference and reduced-reference metrics. We propose to use such approaches to build a model based on learning machines (SVM, RVM, etc.) that can predict which images exhibit perceptual noise. We also investigate soft computing approaches based on fuzzy sets as no-reference metrics. Learning is performed using an example database built from experiments on noise perception with human users. These models can then be used in any progressive stochastic global illumination method to find the visual convergence threshold of different parts of any image.
This tutorial is structured as a half-day presentation (3 hours). The goal of this course is to familiarize students with the underlying techniques that make this possible (machine learning, soft computing).

Keywords: Computer-generated Images; Quality Metrics; Machine Learning; Soft Computing.
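
As a rough illustration of the learning-based approach (an assumed setup with synthetic stand-in data, not the lecturer's actual model, features or example database), the following Python sketch trains an SVM to predict whether noise is perceptible from simple per-image statistics:

import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def noise_features(img):
    # crude per-image noise descriptors; a real metric would use
    # perceptually motivated features
    g = np.abs(np.diff(img, axis=0))[:, :-1] + np.abs(np.diff(img, axis=1))[:-1, :]
    return [img.std(), g.mean(), g.std(), np.percentile(g, 95)]

# Hypothetical example database: images rendered with varying noise levels,
# labelled 1 if observers reported visible noise, 0 otherwise. Random data
# and a threshold rule stand in for real renderings and human judgments.
rng = np.random.default_rng(0)
images = [np.clip(rng.normal(0.5, s, (64, 64)), 0, 1)
          for s in rng.uniform(0.01, 0.2, 200)]
labels = [int(img.std() > 0.1) for img in images]

X = np.array([noise_features(img) for img in images])
model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
model.fit(X, labels)
# At render time, one would stop adding samples once the model predicts
# "no visible noise" for the current image.
print(model.predict(X[:5]))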

Secretariat Contacts
e-mail: visigrapp.secretariat@insticc.org

Natural Human-Computer-Interaction in Virtual and Augmented Reality


Lecturer

Manuela Chessa
University of Genoa
Italy
 
Brief Bio
Manuela Chessa is a Postdoctoral Research Scientist at the Dept. of Informatics, Bioengineering, Robotics, and Systems Engineering of the University of Genoa. She received her MSc from the University of Genoa, Italy, in 2005, and her Ph.D. in Bioengineering from the University of Genoa in 2009, under the supervision of Prof. S. P. Sabatini. She has been working in the PSPC Lab since 2005, and since 2015 she has been with the SLIPGURU Research Group (www.slipguru.unige.it). Her research interests are focused on the study of biological and artificial vision systems, on the development of bioinspired models, and on natural human-machine interfaces based on virtual, augmented and mixed reality. She studies the use of novel sensing technologies (e.g. Microsoft Kinect, Leap Motion, Intel RealSense) and visualization devices (e.g. 3D monitors, head-mounted displays, tablets) to develop natural interaction systems, always keeping human perception in mind. In particular, she is active in studying the misperception issues, visual stress and fatigue that arise when using such systems. She has been involved in several national and international research projects. She is author or co-author of 39 peer-reviewed scientific papers, in ISI journals and international conferences, of 5 book chapters, and of 2 edited books.
Abstract

One of the goals of Human-Computer Interaction (HCI) is to obtain systems where people can act in a natural and intuitive way. In particular, the aim of Natural Human-Computer Interaction (NHCI) is to create new interactive frameworks that mimic real-life experience as closely as possible. Nevertheless, the gap between computer vision, computer graphics, cognitive science, and behavioral and psychophysical studies still prevents the achievement of true NHCI. In this tutorial, I will review the past and recent literature on human-computer interaction systems, focusing on recent developments in the field. In particular, I will address the topics of misperception, visual fatigue and cybersickness in virtual and augmented reality scenarios, and I will discuss the open issues and possible ways to improve such systems.

Secretariat Contacts
e-mail: visigrapp.secretariat@insticc.org

Perception for Visualization: From Design to Evaluation


Lecturer

Haim Levkowitz
University of Massachusetts, Lowell
United States
 
Brief Bio
Haim Levkowitz is the Chair of the Computer Science Department at the University of Massachusetts Lowell, in Lowell, MA, USA, where he has been a faculty member since 1989. He is a two-time recipient of a US Fulbright Scholar Award to Brazil (August–December 2012 and August 2004–January 2005). He was a Visiting Professor at ICMC (Instituto de Ciencias Matematicas e de Computacao, the Institute of Mathematics and Computer Sciences) at the University of Sao Paulo, Sao Carlos, SP, Brazil (August 2004–August 2005; August 2012–August 2013). He co-founded and was Co-Director of the Institute for Visualization and Perception Research (through 2012), and is now Director of the Human-Information Interaction Research Group. He is a world-renowned authority on visualization, perception, color, and their application in data mining and information retrieval. He is the author of "Color Theory and Modeling for Computer Graphics, Visualization, and Multimedia Applications" (Springer 1997) and co-editor of "Perceptual Issues in Visualization" (Springer 1995), as well as many papers on these subjects. He is also co-author/co-editor of "Writing Scientific Papers in English Successfully: Your Complete Roadmap" (E. Schuster, H. Levkowitz, and O. N. Oliveira Jr., eds.; paperback ISBN: 978-8588533974; Kindle ISBN: 8588533979; available on Amazon.com: http://www.amazon.com/Writing-Scientific-Papers-English-Successfully/dp/8588533979). He has more than 44 years of teaching and lecturing experience, and has taught many tutorials and short courses in addition to regular academic courses. Beyond his academic career, Professor Levkowitz has had an active entrepreneurial career as founder or co-founder, chief technology officer, scientific and strategic advisor, director, and venture investor at a number of high-tech startups.
Abstract

What is the smallest sample I can show that will be perceived? What is the smallest sample I can show that will be perceived in color? Can I afford to use image compression? If yes, how much and what kind? Should I use a grayscale or another color scale to present data? How many gray levels do I absolutely need? What color scale should I use? How many bits of color do I need? Should I use 3D, stereo, texture, motion? If so, what kinds? And has my visualization been successful in meeting its goals and needs?

If you have ever designed a visualization, you probably have asked yourself (perhaps others) some of these questions; at least you should have.

Since visualization “consumers” are humans, the answers to these questions can only come from a thorough analysis and understanding of human perceptual capabilities and limitations, combined with the visualization's goals and needs.

This tutorial will teach you the basics of human perception and how to utilize them in the complete process of visualization: from design to evaluation.
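
To make one of these questions concrete, here is a minimal Python sketch (an illustration added to this page, not part of the tutorial material) of a grayscale ramp whose steps are equal in perceived lightness (CIE L*) rather than in physical luminance:

import numpy as np

def lstar_to_linear(L):
    # CIE L* (0..100) -> linear luminance Y (0..1)
    L = np.asarray(L, dtype=float)
    return np.where(L > 8.0, ((L + 16.0) / 116.0) ** 3, L / 903.3)

def linear_to_srgb(Y):
    # linear luminance -> gamma-encoded sRGB value (0..1)
    return np.where(Y > 0.0031308, 1.055 * Y ** (1.0 / 2.4) - 0.055, 12.92 * Y)

n_levels = 16  # e.g. probing "how many gray levels do I absolutely need?"
L = np.linspace(0.0, 100.0, n_levels)      # equal steps in perceived lightness
gray = linear_to_srgb(lstar_to_linear(L))  # display values for the ramp
print(np.round(gray * 255).astype(int))

Equal steps in L* keep adjacent gray levels roughly equally discriminable, which a ramp that is linear in physical luminance does not.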

Secretariat Contacts
e-mail: visigrapp.secretariat@insticc.org

Depth Video Enhancement


Lecturer

Djamila Aouada
University of Luxembourg
Luxembourg
 
Brief Bio
Djamila Aouada received the State Engineering degree in electronics in 2005 from the École Nationale Polytechnique (ENP), Algiers, Algeria, and the Ph.D. degree in electrical engineering in 2009 from North Carolina State University (NCSU), Raleigh, NC. She is a Research Scientist at the Interdisciplinary Centre for Security, Reliability, and Trust (SnT) at the University of Luxembourg. Dr. Aouada has been leading the computer vision activities at SnT since 2009. She has worked as a consultant for multiple renowned laboratories (Los Alamos National Laboratory, Alcatel-Lucent Bell Labs, and Mitsubishi Electric Research Labs). Her research interests span the areas of signal and image processing, computer vision, pattern recognition and data modelling. She is a co-recipient of two IEEE Best Paper Awards, and a member of IEEE, IEEE SPS, and IEEE WIE.
Abstract

3D sensing technologies have witnessed a revolution in recent years, making depth sensors cost-effective and part of accessible consumer electronics. Their ability to directly capture depth videos in real time has opened tremendous possibilities for multiple applications in computer vision. These sensors, however, have some shortcomings: high noise contamination, including missing and jagged measurements, and low spatial resolution. In order to extract detailed 3D features from this type of data, dedicated data enhancement is required. This tutorial reviews the different approaches proposed in the literature, and focuses especially on strategies targeting dynamic depth scenes with non-rigid deformations.
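
As a baseline for what such enhancement involves, the following Python sketch (an assumed toy pipeline, not the methods covered in the tutorial) fills missing depth measurements from their nearest valid neighbours and median-filters the result; the dynamic, non-rigid strategies discussed in the tutorial go well beyond this:

import numpy as np
from scipy import ndimage

def fill_holes(depth, invalid=0.0):
    # replace invalid pixels with the value of the nearest valid pixel
    mask = depth == invalid
    idx = ndimage.distance_transform_edt(mask, return_distances=False,
                                         return_indices=True)
    return depth[tuple(idx)]

def denoise(depth, size=5):
    # median filtering suppresses jagged measurements while roughly
    # preserving depth discontinuities
    return ndimage.median_filter(depth, size=size)

# toy noisy depth frame with dropouts, standing in for real sensor data
rng = np.random.default_rng(1)
depth = 2.0 + 0.05 * rng.standard_normal((240, 320))
depth[rng.random((240, 320)) < 0.1] = 0.0   # ~10% missing measurements

enhanced = denoise(fill_holes(depth))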

Secretariat Contacts
e-mail: visigrapp.secretariat@insticc.org
