The following special sessions are planned. If you intend to submit your paper to a special session, you must select its title as the first research topic (EDICS) on the paper submission page. The special sessions are listed in section 9 of the EDICS list.
We recommend that you also select a second topic as EDICS 2 that describes the topic of your paper, to aid in the reviewing process. Please note that the submission deadline, paper format, and review process for special session papers are the same as those of regular papers. See the ICIP2015 Paper Submission and Paper Kit pages for further information.
Tong ZHANG, HP
Khaled EL-MALEH, Qualcomm
Haohong WANG, TCL Research America
Branislav KISACANIN, Interphase
ICIP2015 is proud to organize an industry-oriented special session on the theme Industry Innovations in Image and Video Processing. See details on the Industry Session page.
Wearable visual sensors can give a picture of a person’s daily activities from their own perspective, recording how they perform everyday tasks (such as washing dishes or making a phone call), what objects they interact with, and whom they interact with. They can thus provide an in-depth understanding of a person’s lifestyle, their ability to carry out everyday activities, their level of sociability, and the environment and context in which they live. In healthcare applications, wearable visual sensors allow the monitoring of the progress or deterioration of a person’s condition, as they clearly show how the individual carries out daily activities and how these abilities change over time. Through long-term profiling of activity levels, daily routines, and places frequented, they can also reveal how lifestyle and sociability levels change over time, as well as detect emergencies in real time.
The automated analysis of data from ambient visual sensors can provide a picture of a person’s daily activities and routine from an observer’s standpoint. Ambient visual sensors allow the detection and recognition of activities of daily living and, because they capture the entire person, provide valuable information about the way these activities are carried out. As with wearable visual sensing, emergencies can also be detected in real time. Color and depth information complement each other for the recognition of daily activities when several people are in a room. An added benefit of these sensors is their unobtrusiveness, which allows individuals to go about their daily life as usual. This Special Session aims to present the latest advances in image and video processing for both ambient and wearable visual sensors, with a particular focus on healthcare applications, such as the recognition of activities of daily living of elderly people in a hospital lab environment or in their homes. Its goal is to support and promote advances in visual sensing for healthcare, as this modality can provide a comprehensive picture of individuals’ lifestyles and condition, and of how these change over time. Safety applications, such as the online detection of emergencies, are also a useful by-product of such sensing.
Finally, medical image analysis has received significant attention in recent years and is expected to continue to play a central role in modern healthcare. The proliferation of imagery in medical devices, together with continuous improvements in its quality and availability, makes automated analysis a necessity and an active area of research.
The goal of this special session is to bring together researchers (including mathematicians, physicians, physiologists, psychologists, cognitive scientists, computer scientists, …) and practitioners working in the area of Color Imaging and its applications. We solicit original contributions addressing a wide range of theoretical and practical issues related to the early stages of the color image-processing pipeline, including, but not limited to:
There is continuing interest in a global understanding of the processes governing the Earth, involving a broad variety of remote sensing imaging sensors. The challenge is the exploration of these images and the timely delivery of focused information and knowledge in a simple, understandable format. Remote sensing image analysis and information extraction pose additional challenges emerging from their very particular nature:
New-generation video coding systems define new tools to efficiently compress video content at high frame rates, high resolutions (4K and 8K), high bit depths, and wide colour gamuts. The latest video coding standard, High Efficiency Video Coding (HEVC), was finalized in January 2013, and its scalable (SHVC) and multi-view (MV-HEVC) extensions followed in July 2014. The HEVC standard and its extensions are expected to be adopted in the coming years as a solution for emerging video services, including 4K live video streaming, 4K and 3D broadcast, IPTV, …
The privacy and security of such content are hot research topics, and solutions must be carefully designed to meet both security and complexity requirements. The complexity requirement poses significant power and performance challenges for battery-operated devices such as smartphones and tablets, as well as for real-time and low-delay applications such as video conferencing.
This special session will focus on efficient encryption and watermarking algorithms for video content coded with the HEVC standard and its extensions. It includes, but is not limited to, topics related to joint compression-encryption, joint compression-watermarking, and selective video encryption in new-generation video coding systems (HEVC, SHVC, MV-HEVC, and 3D-HEVC). In addition, the session gathers researchers from academia and industry with interdisciplinary backgrounds spanning encryption, watermarking, video coding, optimized algorithms, and multimedia security and privacy.
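As a toy illustration of the selective-encryption idea (all names and parameters here are hypothetical; real HEVC schemes operate on selected syntax elements inside the entropy coder, e.g. CABAC bin strings), one can scramble only the sign information of quantized coefficients, leaving the rest of the data parseable:

```python
import numpy as np

def selective_encrypt(coeffs, key_seed, fraction=0.1):
    """Toy selective encryption: flip the signs of the first `fraction`
    of quantized coefficients wherever a key-derived keystream bit is 1.
    Illustrative only; not a real HEVC encryption scheme."""
    rng = np.random.default_rng(key_seed)          # keystream generator from the key
    out = coeffs.copy()
    n = max(1, int(len(coeffs) * fraction))
    mask = rng.integers(0, 2, size=n).astype(bool)  # keystream bits
    out[:n][mask] = -out[:n][mask]                  # flip signs where keystream = 1
    return out

# Demo: encrypt, then decrypt with the same key
coeffs = np.array([3, -2, 5, 1, -4, 2, 6, -1])
enc = selective_encrypt(coeffs, key_seed=42, fraction=1.0)
dec = selective_encrypt(enc, key_seed=42, fraction=1.0)  # sign flip is self-inverse
```

Because the sign flip is its own inverse, decryption is simply the same operation with the same key seed, which mirrors the low-complexity requirement the session description emphasizes.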
Handheld mobile devices, such as smart camera phones, have great potential for emerging mobile visual search and augmented reality applications, such as location recognition, scene retrieval, product search, and CD/book cover search. In real-world mobile visual search applications, the reference database typically contains millions of images and can only be stored on remote servers. Therefore, online querying involves transferring the query image from the mobile device to the remote server, often over a relatively slow wireless link. As a result, the quality of the user experience depends heavily on how much information has to be transferred. This issue becomes even more crucial as we move towards streaming augmented reality applications. Indeed, this time-consuming query delivery is often unnecessary, since the server only performs similarity search rather than query image reconstruction. With the ever-growing computational power of mobile devices, recent works have proposed to extract compact visual descriptors directly from the query image on the mobile end and then send these descriptors over the wireless link at a low bit rate. Such a descriptor is expected to be compact, discriminative, and efficient to extract, in order to reduce overall query delivery latency. In particular, the research, development, and standardization of compact descriptors for visual search involve major industry efforts from Nokia, Qualcomm, Aptina, NEC, etc.
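The size advantage described above can be sketched with a toy global descriptor, a quantized gradient-orientation histogram standing in for real compact descriptors (such as those standardized as MPEG CDVS); everything in this sketch is illustrative, not an actual standardized descriptor:

```python
import numpy as np

def compact_descriptor(img, bins=32):
    """Toy global descriptor: a quantized histogram of gradient
    orientations, weighted by gradient magnitude. One byte per bin."""
    gy, gx = np.gradient(img.astype(float))         # image gradients
    ang = np.arctan2(gy, gx)                        # orientation in [-pi, pi]
    hist, _ = np.histogram(ang, bins=bins, range=(-np.pi, np.pi),
                           weights=np.hypot(gx, gy))
    hist = hist / (hist.sum() + 1e-9)               # normalize
    return (hist * 255).astype(np.uint8)            # quantize to 1 byte/bin

# A synthetic 480x640 "query image" (assume 8-bit grayscale, 1 byte/pixel)
img = np.random.default_rng(0).integers(0, 256, size=(480, 640))
desc = compact_descriptor(img)
print(img.size, "bytes raw vs", desc.nbytes, "bytes descriptor")
```

Even this crude 32-byte summary is orders of magnitude smaller than the raw pixels, which is the core motivation for descriptor-side transmission over a slow wireless link.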
W. Clem KARL
Charles A. BOUMAN
Imaging, a field historically dominated by sensing, is becoming increasingly computational. Modern imaging systems are tightly integrated, combining sensing, optics, algorithms, and computation to extract imaging information from diverse, multimodal, noisy, and convoluted measurements. Emerging areas such as computational photography, computational microscopy, and mobile and distributed imaging combine new algorithmic techniques, such as compressed sensing and Bayesian inversion, with novel sensing methods and modalities, such as coded apertures and dynamic acquisition, to dramatically extend imaging beyond classical limits. They thus open new frontiers for future imaging system design. Computational imaging is more than the mere processing of formed images; it spans both the formation of images and the design and analysis of integrated systems of sensing and computation. This integrated field of Computational Imaging is driving a wide range of technology, from consumer products such as cell-phone and light-field cameras to basic scientific sensors such as electron microscopes and space-borne sensors. Computational Imaging lies at the intersection of a number of application domains and brings together communities in signal processing, applied mathematics, and the physical sciences. This special session is focused on this emerging interdisciplinary topic.
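A minimal sketch of the "sensing plus computation" idea: measurements are modeled as y = Ax + n for a known forward operator A (here a 1-D blur), and the image is recovered by regularized inversion, x̂ = argmin ‖Ax − y‖² + λ‖x‖². All sizes and parameters below are illustrative:

```python
import numpy as np

n = 64
x = np.zeros(n)
x[20:28] = 1.0                                     # ground-truth "scene"
kernel = np.ones(5) / 5                            # known 5-tap blur (forward model)

# Build the forward operator A: column i is the blurred unit impulse e_i
A = np.array([np.convolve(np.eye(n)[i], kernel, mode="same")
              for i in range(n)]).T

rng = np.random.default_rng(1)
y = A @ x + 0.01 * rng.standard_normal(n)          # noisy, blurred measurement

# Tikhonov-regularized least-squares reconstruction
lam = 1e-2
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)
print("reconstruction error:", np.linalg.norm(x_hat - x))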
Visual media data are growing ever larger and stand as one of the most important and valuable sources of information. They come in many heterogeneous forms, such as images, 2D video, stereoscopic and multi-view video, 3D point clouds, and meshes, along with associated metadata (tags, automatically extracted metadata). The volume of these data raises specific issues, such as big visual media storage, search and retrieval, representation and summarization, intelligent processing and analysis, and coding and transmission. All of these are hot research topics that pose significant challenges, mainly regarding processing resources, execution speed, accuracy, and scalability. At the same time, these data are usually accompanied by rich metadata that, in many cases, are themselves big. The growth of big social media networks imposes additional requirements, drawing on ideas from large-graph analysis and multimodal data processing and analysis. Traditional treatment with single-core processors is no longer adequate, and research usually points toward fast, incremental, approximate, distributed, parallel, and multithreaded solutions. The session will collect papers presenting state-of-the-art approaches in these topics, focusing on the big media data aspects.
Topics of interest include but are not limited to big data methods for:
Patrick LE CALLET
This special session brings together key figures in perceptually motivated video compression to present the latest advances in the field. Our contributors represent a good mix of renowned academic and industrial institutions from Europe, America, and Asia. The selected contributions aim to advance HEVC, the latest iteration of hybrid block-based video coding standards, towards a new generation of perceptually optimized video coding standards.
The papers selected for this special session cover three main topics in perceptually motivated video compression: first, texture perception, analysis, and synthesis; second, perceptually optimized bit allocation and quantization; and third, perceptual quality metrics and artifact detection for rate-quality optimization. In addition, two papers present advances in perceptually motivated compression in the related fields of stereoscopic video and image set compression, emphasizing the importance and interdisciplinary character of perceptual video compression.
We are confident that the selected contributions provide a strong line-up representing the latest scientific advances in perceptual video compression worldwide. In addition, we are more than happy to work with the technical committee on possible adjustments to the outline.