Abstract: Information and Communication Technologies, although currently representing about 2% to 4% of the worldwide carbon footprint, are expected to enable a 15% reduction of the global carbon footprint by 2020. But they also generate ever-increasing volumes of data traffic and require increasingly powerful and energy-hungry fixed and mobile broadband networks. Decreasing the energy consumption of these networks while steeply increasing their overall capacity is a real challenge, one that has been addressed for several years. In 2009, for example, the worldwide volume of mobile data traffic exceeded that of voice traffic. As wireless technologies consume more power than wireline technologies for given access rates and traffic volumes, improving the energy efficiency of mobile networks is a key issue.
At the same time, making fixed and mobile networks converge is a desirable though very complex target for network operators. Better integration of fixed and mobile networks would provide an optimal and seamless quality of experience for the end user, together with an optimized network infrastructure ensuring increased performance, reduced cost, and reduced energy consumption. While the energy efficiency of equipment is of course essential to overcoming the energy challenge of broadband networks, this presentation will instead focus on the potential green benefits of converged fixed and mobile network architectures. Various architectural aspects of converged fixed and mobile networks will be addressed and their potential benefits will be discussed.
Title: Photonic Network Vision 2020 in Big Data Era
Biography: Ken-ichi Kitayama received the M.E. and Dr. Eng. degrees from Osaka University, Osaka, Japan, in 1976 and 1981, respectively. In 1976, he joined the NTT Laboratories. In 1995, he joined the NICT, Japan. Since 1999, he has been a Professor at Osaka University, Japan. His research interests are in photonic label switching, optical signal processing, next-generation access technologies such as OCDMA and OFDMA, and radio-over-fiber (RoF) systems.
He has published over 270 papers in refereed journals and holds more than 30 patents. He is a Fellow of the IEEE and a Fellow of the IEICE of Japan.
Abstract: Data on this globe have been exploding, and the analysis of large datasets, so-called big data, creates value in various ways and will become a key basis of competition, underpinning new waves of productivity growth, innovation, and precise market research. Trillions of bytes of information are captured from networked sensors such as mobile phones, smart meters, automobiles, and industrial machines, and they are analyzed by the power of cloud computing on a global scale. The capture and post-processing of big data will fuel exponential growth in data volume through 2020.
The “one-globe photonic cloud,” a photonic L2 network, serves as the platform for cloud computing, connecting mega data centers around the world. The one-globe photonic cloud features abundant and elastic bandwidth and minimal latency, distinguishing it from legacy cloud computing. The photonic L2 network will leverage software-defined networking principles of photonic network virtualization as well as elastic optical circuit and packet switching technologies. One of the key enabling engines will be a photonic network processor, based upon powerful DSP and optical interconnects.
Title: Panorama of Optical Network Survivability
Biography: Biswanath (Bis) Mukherjee is Distinguished Professor at the University of California, Davis, where he was Chairman of Computer Science during 1997-2000. He received the BTech degree from the Indian Institute of Technology, Kharagpur (1980) and PhD from the University of Washington, Seattle (1987). He was General Co-Chair of the IEEE/OSA Optical Fiber Communications (OFC) Conference 2011, Technical Program Co-Chair of OFC’2009, and Technical Program Chair of the IEEE INFOCOM’96 conference. He is Editor of Springer’s Optical Networks Book Series. He has served on eight journal editorial boards, most notably IEEE/ACM Transactions on Networking and IEEE Network. He has supervised over 50 PhDs to completion and currently mentors 15 advisees, mainly PhD students. He is co-winner of the Optical Networking Symposium Best Paper Awards at IEEE Globecom 2007 and 2008. He is author of the graduate-level textbook Optical WDM Networks (Springer, January 2006). He served a 5-year term on the Board of Directors of IPLocks, a Silicon Valley startup company. He has served on the Technical Advisory Boards of several startup companies, including Teknovus (acquired by Broadcom). He is Founder and served as the first CEO of Ennetix, Inc., a startup company incubated at UC Davis specializing in energy-efficiency software products for networks and IT equipment. He is an IEEE Fellow.
Abstract: This talk will mainly focus on the emerging topic of an optical telecom backbone network’s resilience to disaster disruptions and cascading failures. For the sake of completeness, however, the first few minutes of the talk will be devoted to reviewing the traditional topics in optical network survivability (such as mesh protection and restoration, differentiated protection, availability-aware protection, holding-time-aware protection, reprovisioning, grooming and protection, partial protection, etc.). We will spend a few more minutes on newer topics such as survivability of Virtual Private Networks (VPNs), exploiting the excess capacity in an operational network for improved survivability, etc. Then, the majority of our time will be used to discuss disaster survivability, where we will cover topics such as: (1) Normal Disaster Preparedness (by accounting for the risk of disasters in different parts of the infrastructure); (2) Enhanced Disaster Preparedness (under more-accurate intelligence on potential disasters); and (3) Post-Disaster Service Survivability (by employing concepts such as partial protection and degraded services). Note that, while traditional approaches focus on protecting links and nodes (routers, switches, etc.) to provide “network connectivity”, the shifting paradigm towards cloud computing/storage requires that we protect the data (or content), so we have developed the concept of “content connectivity” and methods to achieve it.