Call for Participation

A PDF version is available (as of 2021-Mar-3).

Keynote Presentations

“虎穴に入らずんば虎子を得ず <High Risk, High Return / No Risk, No Return>: Domain-Specific Processors Make for Cool Solutions”

Avi Baum (Hailo)

Abstract: In recent years, domain-specific architectures have been thriving. One main reason fueling this trend is the prolific domain of machine learning. In this talk I will briefly survey some of the main approaches and offer a glimpse into the theoretical aspects that underlie their suggested benefit. I will share some observations on present and future developments in the field, along with my subjective view of the possible implications for compute architectures.

Avi Baum is CTO and Co-Founder of Hailo. Prior to this, he served as Texas Instruments’ CTO for Wireless Connectivity, working with the company for over a decade. In this role, he established the connected-MCU product line for IoT and IIoT markets and defined the technological roadmap for products in the IoT ecosystem. He also served as a Technical Team Leader in the Israel Defense Forces’ elite technology unit. Avi holds a B.Sc. in Electrical Engineering from the Technion, Israel Institute of Technology.

 

 

“Why Did Preferred Networks Make MN-Core?”

Yusuke Doi  (Preferred Networks)

Abstract: At Preferred Networks, we use deep learning as the core of our technology to contribute to various customers, including those in the manufacturing, biotechnology, and healthcare industries. As efficient computation is a critical differentiator in this field, we are also working on high-efficiency computation using MN-Core, an ASIC we developed ourselves. Preferred Networks, which started with software and algorithm technology at its core, decided to create MN-Core both to harness the power of its software in hardware and for the economics of computational optimization. In this talk, I will introduce the background and targets of MN-Core, as well as the industrial impact achieved by vertical integration from software to hardware.

Yusuke Doi is Corporate Officer and VP of Computing Infrastructure at Preferred Networks, Inc. He joined Preferred Networks in 2016 and has worked on the design, management, and operation of the company's computing infrastructure.

 

 


“Codesign and System of the Supercomputer “Fugaku””

Mitsuhisa Sato (RIKEN)

Abstract: We have been carrying out the FLAGSHIP 2020 Project to develop the Japanese next-generation flagship supercomputer, “Fugaku”. We have designed an original manycore processor, the A64FX, based on the Armv8 instruction set with the Scalable Vector Extension (SVE), as well as a system including the interconnect and a storage subsystem, together with our industry partner, Fujitsu. The “co-design” of the system and applications is key to making the system power-efficient and high-performance. We determined many architectural parameters by reflecting an analysis of a set of target applications provided by application teams. As a result, the system has proven to be very power-efficient, and the performance of some target applications using the whole system has been confirmed to be more than 100 times that of the K computer. In this talk, the pragmatic practice of our co-design effort for “Fugaku” and its performance will be presented, together with an overview of the system software.

Mitsuhisa Sato received the M.S. degree and the Ph.D. degree in information science from the University of Tokyo in 1984 and 1990. From 2001, he was a professor in the Graduate School of Systems and Information Engineering, University of Tsukuba, and he served as director of the Center for Computational Sciences, University of Tsukuba, from 2007 to 2013. Since October 2010, he has led the Programming Environment Research Team at the RIKEN Advanced Institute for Computational Science (AICS), since renamed R-CCS. Since 2014, he has also led the Architecture Development Team in the FLAGSHIP 2020 project at RIKEN to develop the Japanese flagship supercomputer “Fugaku”. Since 2018, he has been a Deputy Director of the RIKEN Center for Computational Science. He is a Professor (Cooperative Graduate School Program) and Professor Emeritus of the University of Tsukuba.

 

 

“High-Efficiency Inferencing for Scalable Machine Learning”

Art Swift (Esperanto Technologies)

Abstract: The extraordinary market demand for large-scale machine learning solutions requires more than GPUs, FPGAs, or large multiplier arrays. These approaches deliver high performance, but at high costs: high power consumption, prohibitively complicated programming models, and unacceptable inflexibility. Esperanto Technologies CEO Art Swift will describe the architectural approach and design methodology for the company’s first supercomputer-on-chip solution for ML inferencing acceleration. The ET-SoC-1 combines the traditional flexibility and programmability of CPU cores with the high efficiency of autonomous tensor processing to deliver unmatched system-level efficiency and all-layer ML acceleration. Every element of Esperanto’s integrated solution represents best-in-class technology: the simplicity of the RISC-V instruction set, proprietary instruction-set extensions for machine learning, an on-chip mesh interconnect, a uniquely optimized memory hierarchy, state-of-the-art process technology, and custom low-voltage circuits. In this way, Esperanto delivers more performance per watt than existing products without compromising flexibility.

 

Art Swift has more than 30 years of executive-level experience in the tech and microprocessor industry, including as CEO of low-power processor chip-maker Transmeta; President of MIPS, a leading provider of microprocessor IP; CEO of Wave Computing, a pioneer in dataflow computing architectures; and CEO of nanotech innovator Unidym. Previously, Art served in executive-level positions at Cirrus Logic; in Digital Equipment’s Alpha processor group; and at Sun Microsystems, one of the pioneering companies in networked computing and RISC processing.

 

 

Invited Presentations

“Architectural Challenges in the Era of New Technologies and Extreme Heterogeneity”

Anastasiia Butko   (Lawrence Berkeley Nat’l Lab.)

Abstract: As the end of Moore’s Law approaches, we are entering the era of new technologies and extreme heterogeneity. Novel architectures bring new challenges in their adoption and integration into larger systems. For example, adopting quantum accelerators hinges on building a classical control hardware pipeline that is scalable, extensible, and provides a real-time response. The physical nature of quantum devices creates non-trivial architectural challenges for control hardware that cannot be solved with existing approaches. In this talk, we discuss the architectural challenges related to the adoption of novel accelerators and how they can be met by open-source hardware trends.

Anastasiia Butko, Ph.D., is a Research Scientist in the Computational Research Division at Lawrence Berkeley National Laboratory (LBNL), CA. Her research interests lie in the general area of computer architecture, with particular emphasis on high-performance computing, emerging and heterogeneous technologies, associated programming models, and architectural simulation techniques. Her primary research projects address architectural challenges in adopting novel technologies to provide continued performance scaling in the approaching post-Moore’s Law era. Dr. Butko is the chief architect of the custom control hardware stack for the Advanced Quantum Testbed at LBNL.

 

 

“Advances in Key CMOS Image Sensor Technology and an Introduction to Next-Generation Image Sensors”

Akito Kuwabara  (Sony Semiconductor Solutions)

Abstract: The CMOS image sensor is widely used not only in video cameras, digital still cameras, smartphones, and security cameras, but also in automotive and medical applications, because its productivity and performance have been improved through the development of basic semiconductor technology and stacked-structure technology. In particular, the stacked CMOS image sensor has made it possible to mount various processing circuits at the sensor edge and has expanded the possibilities of the CMOS image sensor. For example, the Intelligent Vision Sensor, equipped with CNN (Convolutional Neural Network) processing at the sensor edge, enables high-speed edge AI processing and extraction of only the necessary data (metadata), which, when combined with cloud AI processing, reduces data transmission latency, power consumption, and communication costs, and protects privacy and confidential information. In this presentation, we will explain advances in key CMOS image sensor technology and the features and architecture of the Intelligent Vision Sensor built with stacked-structure technology.

Akito Kuwabara received a bachelor’s degree and a master’s degree in engineering science from Osaka University, Osaka, Japan, in 2017 and 2019, respectively. He joined Sony Semiconductor Solutions Corporation in 2019 and has been engaged in research on CMOS image sensors equipped with AI processing functionality.

 

 


Panel Discussion

Topic: “Hot” Techs for “Cool” AI Computing: Do We Have Enough Tricks?

Organizer and Moderator:
Masato Motomura (Tokyo Tech)
Panelists:
Yusuke Doi (Preferred Networks, Japan)
Avi Baum (Hailo, Israel)
Art Swift (Esperanto Technologies, USA)
Mitsuhisa Sato (RIKEN, Japan)
 

Abstract: It is often said that data is the new oil of the 21st century. Importantly, oil was able to drive the industrial revolution only after the advent of the combustion engine. By analogy, data can drive the AI revolution only with the right silicon engines, i.e., cool chips. The panel will discuss hot topics regarding this important role that cool chips should fulfill for AI computing, from various aspects.

Masato Motomura received his Ph.D. from Kyoto University. He was a researcher at NEC’s central research laboratories and then a professor at Hokkaido University. He is now at Tokyo Institute of Technology, where he leads an AI computing research unit. He is actively working on reconfigurable and parallel architectures for deep neural networks, machine learning, annealing machines, and intelligent computing in general. He is a recipient of the IEEE JSSC Annual Best Paper Award, the IPSJ Annual Best Paper Award, and the IEICE Achievement Award.

 

 

Special Sessions (Invited Lectures)

“Reducing Errors in Quantum Computation via Program Transformation”

Moinuddin Qureshi   (Georgia Institute of Technology)

Abstract: Quantum computing promises exponential speedups for an important class of problems. While quantum computers with a few dozen qubits have been demonstrated, these machines suffer from a high rate of gate errors. Such machines operate in the Noisy Intermediate-Scale Quantum (NISQ) mode of computing, where the output of the machine can be erroneous. In this talk, I will discuss some of our recent work that aims to improve the reliability of NISQ computers by developing software techniques to mitigate hardware errors. Our first work exploits the variability in the error rates of qubits to steer more operations towards qubits with lower error rates and away from qubits that are error-prone. Our second work looks at executing different versions of a program, each crafted to cause diverse mistakes, so that the machine becomes less vulnerable to correlated errors. Our third work exploits the state-dependent bias in measurement errors (state 1 is more error-prone than state 0) and dynamically flips the state of the qubit to perform the measurement in the stronger state. We perform our evaluations on real quantum machines from IBM and demonstrate significant improvement in overall system reliability. Finally, I will briefly discuss the hardware aspects of designing large-scale quantum computers, including cryogenic processors and cryogenic memory systems.
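The third technique lends itself to a small illustration. The following is a minimal, self-contained Python sketch of the invert-before-measure idea, not the speaker’s code, and with purely illustrative error rates: when readout of state 1 is weaker than readout of state 0, flip the qubit before measurement and undo the flip in software.

```python
# Toy model (illustrative only): readout of state 1 is assumed to be more
# error-prone than readout of state 0, as described in the abstract above.
import random

P_ERR_READ_0 = 0.02   # assumed probability of misreading a qubit that is really 0
P_ERR_READ_1 = 0.08   # assumed (higher) probability of misreading a qubit that is really 1

def noisy_readout(true_state: int) -> int:
    """Simulate a single biased readout of a computational-basis qubit state."""
    p_err = P_ERR_READ_1 if true_state == 1 else P_ERR_READ_0
    return true_state ^ (random.random() < p_err)

def measure_plain(true_state: int) -> int:
    """Measure directly in whatever state the qubit is in."""
    return noisy_readout(true_state)

def measure_inverted(true_state: int) -> int:
    """Flip the qubit (X gate), measure in the stronger state, then flip the result back."""
    return noisy_readout(true_state ^ 1) ^ 1

def error_rate(measure, true_state: int, trials: int = 100_000) -> float:
    wrong = sum(measure(true_state) != true_state for _ in range(trials))
    return wrong / trials

if __name__ == "__main__":
    # A qubit expected to be in state 1 benefits from being measured in state 0.
    print("plain readout of state 1:   ", error_rate(measure_plain, 1))
    print("inverted readout of state 1:", error_rate(measure_inverted, 1))
```

Under these assumed error rates, the inverted measurement of a qubit expected to be in state 1 shows roughly the state-0 error rate rather than the worse state-1 rate, which is the intuition behind measuring in the stronger state.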

 

Moinuddin Qureshi is a Professor of Computer Science at the Georgia Institute of Technology. His research interests include computer architecture, memory systems, hardware security, and quantum computing. He is a member of the Hall of Fame of ISCA, Hall of Fame of MICRO, and Hall of Fame of HPCA. His research has been recognized with the best paper awards at MICRO 2018, CF 2019, and two selections (and three honorable mentions) at IEEE MICRO Top Picks. His ISCA 2009 paper on Phase Change Memory was awarded the 2019 Persistent Impact Prize in recognition of “exceptional impact on the fields of study related to non-volatile memories”. He received the “Outstanding Researcher Award” from Intel (2020) and an “Outstanding Technical Achievement” award from IBM (2011). He was the Program Chair of MICRO 2015 and Selection Committee Co-Chair of Top Picks 2017. He received his PhD (2007) and MS (2003) from the University of Texas at Austin.

 

“Processor Hardware Security”

Jakub Szefer   (Yale University)

Abstract: As the amount of sensitive information processed by computers constantly increases, there is a need to continue hardening processors and whole computer systems. Among the possible threats, remote attacks are of particular importance since they do not require the attacker to be physically near the target system; they only require that the attacker and victim execute on the same system, such as by being co-located on the same server in a public cloud computing data center. At the same time, there is ever-expanding use of machine learning and other algorithms that process sensitive information in cloud data centers. Both the data and the algorithms, e.g., the specific machine learning architectures or models, can be targets of attacks. This exposes the various algorithms to a variety of hardware-rooted side- and covert-channel attacks, which continue to pose a threat to our privacy and security. Meanwhile, considering only performance or security is not enough, and processor designers need to be mindful of the power consumption and energy usage of their systems. In this talk, we will first cover various remote timing- and power-related information leaks to give background on the existing threats. We will then cover a variety of transient execution attacks, which work together with covert channels and can further undermine system security. As examples of specific threats, attacks on machine learning algorithms from the literature will be reviewed. The talk will next cover various defenses, such as secure caches and secure TLBs, that aim to protect against these threats. In addition, the talk will touch upon power and energy issues, and especially the need to better understand the performance-power-security trade-off in the design of processors. How to design high-performance, low-power, and secure systems is a research challenge that this talk will hopefully motivate academics and researchers to explore further.
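As background for the remote timing leaks mentioned above, here is a deliberately simplified Python sketch, not an attack from the talk or the literature, of a timing covert channel: one bit per slot is encoded in how long a computation runs and is decoded purely by timing it. The threshold and iteration counts are arbitrary assumptions.

```python
# Toy timing covert channel (conceptual illustration only).
import time

SLOT_THRESHOLD_S = 0.005  # assumed boundary between a "fast" and a "slow" slot

def sender_slot(bit: int) -> None:
    """Burn a data-dependent amount of CPU time: long for 1, short for 0."""
    iterations = 200_000 if bit else 20_000
    acc = 0
    for i in range(iterations):
        acc += i * i  # meaningless work whose duration encodes the bit

def receiver_decode(bit_to_send: int) -> int:
    """Time the sender's slot and decide which bit was transmitted."""
    start = time.perf_counter()
    sender_slot(bit_to_send)   # in a real channel this would run in another process
    elapsed = time.perf_counter() - start
    return 1 if elapsed > SLOT_THRESHOLD_S else 0

if __name__ == "__main__":
    message = [1, 0, 1, 1, 0]
    decoded = [receiver_decode(b) for b in message]
    print("sent:   ", message)
    print("decoded:", decoded)
```

Real covert channels of the kind surveyed in the talk exploit shared microarchitectural state (caches, TLBs, power) rather than raw execution time in a single process, but the encode-in-timing, decode-by-measurement structure is the same.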
 
Jakub Szefer’s research focuses on computer architecture and hardware security. His research encompasses secure processor architectures, cloud security, FPGA attacks and defenses, and hardware FPGA implementations of cryptographic algorithms. His research is supported through US National Science Foundation and industry grants and donations. He is currently an Associate Professor of Electrical Engineering at Yale University, where he leads the Computer Architecture and Security Laboratory (CASLAB). Prior to joining Yale, he received Ph.D. and M.A. degrees in Electrical Engineering from Princeton University, and a B.S. degree with highest honors in Electrical and Computer Engineering from the University of Illinois at Urbana-Champaign. He received the NSF CAREER award in 2017. Jakub is the author of the first book focusing on processor architecture security, “Principles of Secure Processor Architecture Design”, published in 2018. He was promoted to IEEE Senior Member in 2019. Details of Jakub’s research can be found at: https://caslab.csl.yale.edu/~jakub