Call for Participation

[PDF version is here].


Keynote Presentations

Open Source Hardware Development Model and Old CPU

Shumpei Kawasaki (Open Core Foundation, USA)

Abstract:    In 1965 Gordon E. Moore observed a trend that the number of transistors on semiconductor integrated circuits doubles roughly every 18 months. Semiconductor experts predicted this trend would end in 2013, so from 2014 onward the semiconductor industry needs innovation more than ever. While most disruptive innovation comes from a new entrant combining prior inventions in his or her own peculiar way, the chip hardware industry rarely shares design information with outsiders. To make things worse, innovation from within semiconductor companies is not likely either: most corporations no longer assign resources to improving their own designs and instead license designs from IP companies. The chain of innovation has been lost. To restore it, people from all walks of life are exploring the possibilities of open source hardware. A pinnacle of open source hardware today is the open CPU. These CPUs are coded in a Hardware Description Language (HDL) and uploaded to the web for distribution under various licenses. Through logic synthesis, place and route, and downloading, they can be instantiated on FPGAs and later on ASICs. The presenter outlines perspectives on these grass-roots activities. In the late 1980s the presenter worked on the settlement of an intellectual property dispute over instruction set architectures (ISAs) between two companies, one in the US and one in Japan. Through the 1990s he held positions defining CPU instruction sets and system architectures. By juxtaposing his corporate R&D experience with observations of the emerging open source hardware community, the presenter gives his perspectives on what should be in place for intellectual property management, design infrastructure, technological governance, licensing models, software compatibility, and other things, if open source CPUs are to become a major element of future semiconductors. One important question is where we should seek open source CPU architectures: should we create new ISAs, or reuse existing ones?

Shumpei Kawasaki has experience in hardware and software design, project management, and business development. He was a technical lead on SH CPU and chip development at Hitachi Ltd. between 1988 and 2001. Before the SH project started, Shumpei visited Richard Stallman at MIT at his own expense. At the time Stallman was conceptualizing his Free Software Foundation, and he later gave generous consultations on the technical requirements for SH as a GNU target. In 2001 Shumpei changed his field to embedded software and managed the development of open and proprietary operating systems on Hitachi CPUs and Renesas ARM CPUs until 2013. Shumpei is now the representative of the Open Core Foundation (OCF), a California Nonprofit Public Benefit Corporation, and CEO of Software Hardware & Consulting LLC, a California company. OCF promotes a universal freedom to use, improve, and redistribute hardware. Shumpei received an MS in Computer Science from the University of Illinois at Urbana-Champaign and a BA in Mathematics from Knox College, Galesburg, Illinois. He was a recipient of the Grew-Bancroft Scholarship and of NSF, NIH, and Control Data Corporation grants. He holds 45 active US patents, many in CPU ISAs and CPU system architecture.


Future Trend and the Chance of Reconfigurable Computing

Jay Kim (Samsung, Korea)

Abstract:    Mobile devices have become major computing resources these days. The required computing has changed dramatically, corresponding to the increase in computing capability brought by technology improvements. The mobile market has been forced to increase computing capacity while managing battery life efficiently. The technical challenges of mobile computing become even harder when the devices are wearable, because of their requirements: better user experience, miniaturization, lower skin temperature, and so on. From the computing perspective, it is necessary to attack these requirements in two respects. First, wearable computing must still give users enough computing power and the ability to run a variety of applications. It needs to handle different levels of computing complexity and a wide range of workloads with different characteristics under limited battery capacity. To manage this situation efficiently, an adaptive computing environment will be needed to keep up with this rapid change. At the same time, superior PPA (power, performance, and area) relative to competitors will be essential for commercial success. Second, since a wearable device is integrated into everyday objects that we wear constantly, its power consumption and heat dissipation matter even more than those of hand-held devices. It is necessary to find extremely energy-efficient computing. To handle these two technical challenges and penetrate the related markets successfully, we need a solution that approaches the PPA efficiency of dedicated hardware while keeping scalability and flexibility. One of those candidates is reconfigurable computing.

Jay (Jeongwook) Kim is Corporate Vice President at the Samsung Advanced Institute of Technology (SAIT), Samsung Electronics. He is the director of the Processor Architecture Lab at SAIT, Samsung’s corporate research center, and is in charge of developing advanced processor architectures for Samsung’s future products. He worked on Samsung’s dedicated DSP development project from 2004 to 2013 as the leader of the Reconfigurable Processor Group. Before joining Samsung, he was a senior engineer working on optimizing compilers for reconfigurable processors at Proceler, a venture company developing advanced compiler technology. He received a Ph.D. from the Georgia Institute of Technology. While studying, he also worked on system-level simulation and optimization at another venture company, VP Technologies.


SyNAPSE: Foundation of Future Neurosynaptic Computing System

Jun Sawada (IBM, USA)

Abstract:    SyNAPSE (Systems of Neuromorphic Adaptive Plastic Scalable Electronics) is a project aiming to build a brain-like computing system at the scale of mammalian brains. Our project team has been combining the principles of nanoscience, neuroscience and supercomputing to simulate and emulate the brain’s abilities for sensation, perception, action, interaction and cognition, while rivaling its low power consumption and compact size. One key component of creating scalable neurosynaptic computing is energy efficiency. In phase one of this project, IBM demonstrated an ASIC neurosynaptic chip implemented with asynchronous circuits. This chip achieved an extremely low active-power consumption of 45 pJ per spike. We have also created a neuron model that is simple enough to be implemented by a tiny circuit, yet versatile enough to create various types of neurons. This neuron model can emulate logical and arithmetical functions, as well as biologically interesting neuron behaviors. Using this neuron model, we simulated a system of 5.3 x 10^11 neurons and 1.37 x 10^14 synapses on a BlueGene/Q supercomputer. We envision that these technologies will be the foundation for a future scalable and flexible neurosynaptic computation system.
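To give a concrete flavor of what a simple, circuit-friendly spiking neuron update can look like, here is a minimal sketch of a generic leaky integrate-and-fire neuron in C. It is an illustrative textbook-style model only, not the actual SyNAPSE neuron model; all names and constants are assumptions.

#include <stdio.h>

/* Generic leaky integrate-and-fire neuron: illustrative only,
   not the SyNAPSE neuron model. All parameters are assumed. */
typedef struct {
    float v;          /* membrane potential */
    float leak;       /* amount leaked each time step */
    float threshold;  /* firing threshold */
    float reset;      /* potential after a spike */
} neuron_t;

/* Advance one time step; returns 1 if the neuron spikes. */
static int neuron_step(neuron_t *n, float synaptic_input)
{
    n->v += synaptic_input;          /* integrate weighted input spikes */
    n->v -= n->leak;                 /* constant leak */
    if (n->v < 0.0f) n->v = 0.0f;
    if (n->v >= n->threshold) {      /* fire and reset */
        n->v = n->reset;
        return 1;
    }
    return 0;
}

int main(void)
{
    neuron_t n = { 0.0f, 0.1f, 1.0f, 0.0f };
    for (int t = 0; t < 20; t++)
        if (neuron_step(&n, 0.3f))
            printf("spike at t=%d\n", t);
    return 0;
}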

Jun Sawada graduated from Kyoto University in Japan with BS and MS degrees in Mathematics. He received a Ph.D. in Computer Sciences from the University of Texas at Austin for his study of formal verification of hardware, VLSI microarchitecture, theorem proving and automated deduction. In 2000 he joined the IBM Austin Research Laboratory, and he has been an IBM research staff member since then. At IBM, he has been involved in many projects developing semiconductor chips, including BlueGene/Q and POWER processors. Recently he has been working on the SyNAPSE project, leading the design effort for neurosynaptic hardware.


Low Power High Performance Processors for Quantum Computing

Colin Williams (D-wave Systems, Canada)

Abstract:    The media likes to portray quantum computing as being in a head-to-head race with high performance computing. In reality, we foresee ways for quantum computing to complement and enhance high performance computing, and vice versa. In this talk I will describe D-Wave’s approach to quantum computing, including its operating principles, system architecture, and evidence of quantumness, and report on our latest performance data, including the power efficiency of our processors. In particular, I will describe the latest version of D-Wave’s quantum processor and give several examples of computational problems we have mapped to our architecture. I will then describe strategies for integrating our low power high performance quantum processors into mainstream high performance computing systems. As our quantum processors are naturally well suited to solving discrete combinatorial optimization, sampling, machine learning and artificial intelligence problems, the talk should be of broad interest to computer scientists, physicists, and engineers with interests in a wide range of application areas.

Dr. Colin P. Williams is Director of Business Development & Strategic Partnerships at D-Wave Systems Inc. – the world’s first quantum computer company – where he works with corporate and government clients to infuse D-Wave quantum computing technology into their products and services. Colin’s academic interests have spanned many areas connecting physics with computer science. In his doctoral work Colin developed an artificial intelligence system for reasoning about the physical world via qualitative and quantitative differential equations. At Xerox PARC he became interested in the links between statistical physics and computer science, invented the theory of computational phase transitions, and applied it to understanding the deep structure of NP-hard problems. Later, Colin became interested in the connections between quantum physics and computer science. He wrote the first book on quantum computing, “Explorations in Quantum Computing”, and followed it up with two others, started the Quantum Computing Group at the NASA Jet Propulsion Laboratory, Caltech, and quickly broadened its scope to include research on quantum communications, quantum key distribution, quantum sensors, and quantum metrology. As NASA JPL’s Program Manager for Advanced Computing Paradigms, Colin seeded and managed a broad portfolio of advanced computing projects in cognitive computing, neuromorphic computing, spintronics, machine learning, virtual environments, computational nanomaterial design, computational cameras, intelligent energy applications, and the detection of concealed nuclear weapons. In addition, he personally invented quantum algorithms for solving NP-hard problems, computing quantum wavelet transforms, accelerating average-case quantum search, and gravitational tomography. He also showed how to perform arbitrary non-unitary quantum computation probabilistically, wrote a CAD tool for automatically designing quantum circuits that implement arbitrary desired quantum computations, and specialized it to use gate sets optimized for linear optical quantum computing, spintronic quantum computing and superconducting quantum computing. In 2012 he published a greatly expanded and updated edition of “Explorations in Quantum Computing”. Colin holds a Ph.D. in artificial intelligence from the University of Edinburgh, an M.Sc. and D.I.C. in atmospheric physics and dynamics from Imperial College, University of London, and a B.Sc. (with Hons.) in mathematical physics from the University of Nottingham. He was formerly a research assistant in general relativity & quantum cosmology to Prof. Stephen W. Hawking at the University of Cambridge, a research scientist at Xerox PARC, and an acting Associate Professor of Computer Science at Stanford University.


Programming the Internet of Things – Combining Internet and Embedded Programming Models

Michael McCool (Intel, USA)

Abstract:    The Internet of Things (IoT) can be defined most broadly as internet-enabled embedded computing. Things, able to sense and interact with the real world, must now also be able to communicate over the internet with a potentially global reach. This broad definition of IoT covers a wide range of possible use cases, including but not limited to automotive computing; industrial, municipal, and environmental monitoring; home automation and security; smart appliances; wearables; and robotics. Communication may be machine-to-machine (M2M) or human-to-machine (H2M). Tiny devices need to be able to communicate and coordinate with data centers and interact with users, often without displays of their own and under severe power and form-factor constraints. IoT is also emerging in the context of existing web, tablet, and phone ecosystems. In this talk, I will discuss Intel’s efforts to address the emerging IoT market with a range of suitable SoCs (both Atom and Quark based) scaling down to very low power for endpoints and gateways, novel dual operating system architectures combining real-time and Linux capabilities, and new programming models and tools for rapid development of applications. IoT “applications” may in fact span multiple devices, including sensor nodes, gateways, interface devices (such as smartphones, tablets, and laptops), and cloud services. Programming models developed for embedded devices need to be coordinated with programming models developed for web services and for developing rich user interfaces on smartphones. These programming models need to be accessible, but at the same time they need to deal with the realities of the limited resources available on IoT devices. This is a challenge, but the potential of IoT will be best unlocked by providing a wide range of developers with the means to rapidly develop and bring their ideas to market.
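As a rough illustration of the machine-to-machine reporting mentioned above, the sketch below shows an endpoint sending one sensor reading to a gateway over TCP using plain POSIX C. It is a generic example for discussion only: the gateway address, port, and JSON-style message format are made-up placeholders, and it does not use any Intel-specific IoT API.

/* Minimal sketch of an IoT endpoint reporting a sensor reading to a
   gateway over TCP. Generic POSIX C only; address, port and message
   format are hypothetical. */
#include <stdio.h>
#include <unistd.h>
#include <arpa/inet.h>
#include <netinet/in.h>
#include <sys/socket.h>

int main(void)
{
    struct sockaddr_in gw = {0};
    gw.sin_family = AF_INET;
    gw.sin_port   = htons(5000);                        /* hypothetical gateway port */
    inet_pton(AF_INET, "192.168.1.10", &gw.sin_addr);   /* hypothetical gateway address */

    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0 || connect(fd, (struct sockaddr *)&gw, sizeof gw) < 0) {
        perror("gateway");
        return 1;
    }

    /* Pretend sensor value; a real endpoint would read an ADC or I2C device. */
    float temperature = 23.5f;
    char msg[64];
    int n = snprintf(msg, sizeof msg, "{\"temp_c\": %.1f}\n", temperature);
    write(fd, msg, (size_t)n);                           /* machine-to-machine report */

    close(fd);
    return 0;
}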

Michael McCool (Intel Principal Engineer) has degrees in Computer Engineering (University of Waterloo, BASc) and Computer Science (University of Toronto, MSc and PhD) with specializations in mathematics (BASc), biomedical engineering (MSc), and computer graphics and parallel computing (MSc, PhD). He has research and application experience in the areas of data mining, computer graphics (specifically sampling, rasterization, path rendering, texture hardware, antialiasing, shading, illumination, function approximation, compression, and visualization), medical imaging, signal and image processing, financial analysis, and parallel languages and programming platforms. To commercialize research on many-core computing platforms done while he was an Associate Professor at the University of Waterloo, he co-founded RapidMind in 2004; the company was acquired by Intel in 2009. Currently he is a software architect with Intel working on programming models for both parallel computing on the one hand and embedded systems (including internet-enabled embedded systems) on the other. In addition to his university teaching, he has presented numerous tutorials at Eurographics, SIGGRAPH, and SC on graphics and/or parallel computing, and has co-authored three books. The most recent book, Structured Parallel Programming, was co-authored with James Reinders and Arch Robison. It presents a pattern-based approach to parallel programming using a large number of examples in Intel Cilk Plus and Intel Threading Building Blocks. Most recently, he has been collaborating with the Intel Edison team on the development of a suitable programming model that combines low-level, high-performance device control with sophisticated internet capabilities.


Panel Discussion

Topic: “Toward the Wearable Computing Era, How Will COOL Chip Architectures and Tools Evolve?”

Organizer and Moderator:
Fumio Arakawa (Nagoya University, Japan)
Panelists:
Michael McCool (Intel, USA)
Soojung Ryu (Samsung, Korea)
Shumpei Kawasaki (Open Core Foundation, USA)
Hiroaki Tobita (Sony, Japan)

Abstract:    We are now enjoying the mobile computing era, mainly through smartphones. This is a fruit of the continuous downsizing of computing devices, and further downsizing will bring about a wearable computing era. Some wearable devices are already available or have been announced, and the shift to the new era is ongoing. We can rely on network infrastructure in many situations, although its speed, response time, or dependability is not always sufficient, and cloud computing is now a popular approach to obtaining extra computing power for client devices. So we must consider how, and from where, we should get the computing power to realize good wearable computing devices that make new applications enjoyable. The embedded processor of such a device should have features tuned for the device, different from those of a mobile device, and low power is one of the key features. Against this background, we will discuss how COOL chip architectures and tools will evolve toward the wearable computing era.


Special Sessions (invited lectures)

Vivado HLS – High Level Synthesis for FPGA

Igor A. Kostarnov (Xilinx Japan, Japan)

Abstract:    Compiling from C, C++ or SystemC into RTL is quickly becoming a common technique for implementing complex algorithms in silicon. This talk presents Vivado HLS, a high level synthesis tool from Xilinx that targets FPGAs. Targeting FPGAs changes how high level synthesis works in a few ways. First, rather than generating a netlist of logic gates, an HLS tool for FPGAs should make optimal use of the existing coarser-grain resources. It should also use the adopted way of joining the generated blocks in order to build a system on a chip, and such a tool should have a good understanding of the trade-offs available in an FPGA. The presentation describes the basics of generating datapath, control and interfaces from C code, paying special attention to the optimizations available for synthesizing high speed and efficient designs. Optimal use of different levels of parallelism and the elimination of memory bottlenecks are explained. The presentation also covers the methodology for software/hardware co-design in Vivado.
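As a rough illustration of the kind of C code such a flow starts from, the sketch below is a minimal dot-product function annotated with typical Vivado HLS directives for loop pipelining and array partitioning. The function name, sizes and partitioning factors are made up for this example; the directives an actual design would use depend on the target device and performance goals.

/* Illustrative HLS-style C: a simple dot product.
   The pragmas are typical Vivado HLS directives; names, sizes and
   factors are assumptions for this sketch. */
#define N 64

int dot_product(const int a[N], const int b[N])
{
#pragma HLS ARRAY_PARTITION variable=a cyclic factor=4
#pragma HLS ARRAY_PARTITION variable=b cyclic factor=4
    int acc = 0;
mac_loop:
    for (int i = 0; i < N; i++) {
#pragma HLS PIPELINE II=1
        acc += a[i] * b[i];   /* one multiply-accumulate per loop iteration */
    }
    return acc;
}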

Igor A. Kostarnov is currently a staff DSP specialist at Xilinx Japan. He received an MSc degree from the Moscow Institute of Physics and Technology, Department of EE and CS, Russia, in 1987. At Xilinx Japan his responsibility and interest is the introduction of new FPGA programming technologies, such as high level synthesis and C/C++ programmability, to Xilinx customers. Before joining Xilinx in 2008, he worked at the Laboratory of Bioacoustics, Russia, on ultrasound signal processing and, from 1994, on FPGA and FPGA-tool research at the Hewlett-Packard Research Laboratory in Bristol, UK, and in the USA; at Altera Corporation, UK; and at ST Microelectronics, USA. He also worked on VLIW processor architecture and the implementation of VLIW processors in FPGAs.


A New Generation of Parallel Processing: Altera FPGAs as Accelerators for an OpenCL Platform

Dirk Seynhaeve (Altera, USA)

Abstract:    Instruction-set processors in search of more compute power have bumped into the barrier of power consumption. The solution for more performance is parallelism; however, traditional programming languages are not very efficient for programming parallel platforms. OpenCL is a standard C-based solution that elegantly solves the efficient programming of an entire heterogeneous platform. FPGAs are programmable devices in which application-specific functions are implemented as custom circuits. This yields higher performance and lower power solutions compared to instruction-set-based processors. FPGAs are traditionally programmed with a class of tools labeled as hardware design tools, geared towards the physical implementation of the custom circuits. This tutorial will show the basics of OpenCL, give an introduction to FPGAs, and explain how Altera has abstracted the hardware design environment away under a programmer-friendly OpenCL infrastructure.
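For readers new to the model, below is a minimal, textbook-style OpenCL C kernel of the kind such a flow compiles into a custom FPGA circuit. It is a generic example, not taken from Altera’s SDK materials; the kernel and argument names are arbitrary.

/* Minimal OpenCL C kernel: element-wise vector addition.
   Each work-item computes one output element; an FPGA OpenCL
   compiler turns this into a pipelined custom datapath. */
__kernel void vector_add(__global const float *a,
                         __global const float *b,
                         __global float *c)
{
    int i = get_global_id(0);   /* global index of this work-item */
    c[i] = a[i] + b[i];
}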

Dirk Seynhaeve received his MSEE from the University of Louvain, Belgium. He was involved in groundbreaking work on High Level Synthesis methodologies with the teams of Prof. H. De Man and Prof. J. Rabaey. He carried that expertise into a commercial environment, designing ASICs and FPGAs. He combined hands-on design activities with application engineering work at EDA companies and startups, architecting and driving product solutions all the way from High Level Design, through emulation and verification, synthesis, floor planning, place and route, and layout verification, to DFM solutions for lithography, chemical mechanical polishing and power optimization. In the world of High Level Design, Dirk was the architect of Synplify DSP at Synplicity and the technical/product marketing lead for AutoESL at Xilinx. He is currently involved with product planning for High Level Design solutions at Altera.


Application-Specific Processors – Addressing High-Performance Computing Challenges

Masaaki Ideno (Synopsys, Japan)

Abstract:    Embedded processors constantly increase their performance/power efficiency. At the same time, ever-increasing algorithm complexity and low-power requirements make acceleration and processor offload ubiquitous in almost any SoC design. While hardwired logic is the traditional way accelerators have been implemented, multicore and application-specific processor (ASIP) solutions are gaining momentum because of their advantages in programmability and time to market. Using the example of a pedestrian detection system as applied in modern automotive applications, we will illustrate how specialized ASIP architectures allow combining flexibility (e.g. faster time to market) with high efficiency (performance per watt). We will cover ASIP design methodology requirements, considering both architecture design and the need for embedded software tools such as a C compiler, simulator, assembler, linker and debugger.


Masaaki Ideno is the Japan and Asia-Pacific regional manager for the Processor Designer product at Synopsys. He gained experience at ARC International (configurable processors) and IPFlex (dynamically reconfigurable processors) before joining CoWare in 2006. In 2010 CoWare was merged into Synopsys, and he has continued the Processor Designer business there since. He graduated from Tokyo Kogakuin College of Computer Technology in 1987.
