Corporate President, Samsung Research
Sebastian Seung, a world-renowned expert in neuroscience-based AI, is the Head of Samsung Research, the advanced research and development hub of Samsung's Consumer Electronics (CE) and IT & Mobile Communications (IM) divisions. Under his leadership, Samsung Research develops innovative future technologies and identifies new growth opportunities for Samsung Electronics through a global effort spanning 15 R&D centers and 7 AI centers worldwide. Samsung Research's main focus areas are artificial intelligence, data intelligence, next-generation communications and media, robotics, security, and Tizen, but it is constantly expanding into new research fields to realize new lifestyles based on AI technologies.
Corporate EVP, Samsung Research
EVP Daniel Lee is the head of the Global AI Center at Samsung Research. Under his leadership, Samsung researchers at AI centers worldwide, including Cambridge (England), Moscow, Toronto, Montreal, New York, and Seoul, are investigating next-generation AI technologies in voice, language, vision, recommendation systems, and other domains to enhance future Samsung products and services.
Resolution-robust Large Mask Inpainting with Fourier Convolutions
DeepLandscape: Adversarial Modeling of Landscape Video
High-Resolution Daytime Translation without Domain Labels
NNStreamer: Efficient and Agile Development of On-Device AI Systems
LightSys: Lightweight and Efficient CI System for Improving Integration Speed of Software
Bunched LPCNet: Vocoder for Low-cost Neural Text-To-Speech Systems
For user-centric R&D, Samsung Research launches its first software (SW) service, SR Translate.
Bark Sound Detection for Robot Vacuum Cleaner
21’04, Bespoke Jetbot AI
Harvard Univ. Professor, Computer Science and Applied Mathematics
Computational Learning Theory, Symbolic Computation
Turing Award (2010)
How to Augment Supervised Learning with Reasoning
Supervised learning is a cognitive phenomenon that has proved amenable both to theoretical analysis and exploitation as a technology. However, not all of cognition can be accounted for directly by supervised learning. The question we ask here is whether one can build on the success of machine learning to address the broader goals of artificial intelligence. We regard reasoning as the major component of cognition that needs to be added. We suggest that the central challenge therefore is to unify the formulation of these two phenomena, learning and reasoning, into a single framework with a common semantics. In such a framework one would aim to learn rules with the same success that predicates can be learned by means of machine learning, and, at the same time, to reason with the rules with guarantees analogous to those of standard logic. We discuss how Robust Logic fulfils the role of such a theoretical framework. We also discuss the challenges of testing this experimentally on a significant scale, for tasks where one hopes to exceed the performance offered by learning alone.
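The abstract's central idea, learning predicates from data and then reasoning with rules over them, can be illustrated with a deliberately tiny sketch. The domain (birds, wingspan, weight), the threshold learner, and the hand-written rule below are all illustrative assumptions of ours; this is a toy showing a learned predicate feeding a logical rule, not Valiant's Robust Logic itself.

```python
import random

# Toy of "learning + reasoning" in two steps. First, a predicate
# (bird) is *learned* from labeled examples by fitting a 1-D
# threshold, one of the simplest PAC-learnable classes. Then a
# hand-written logical rule chains the learned predicate with
# another condition to answer a compound query.
# Illustrative sketch only; not Robust Logic.

random.seed(0)

# hidden ground truth the learner never sees directly:
#   bird(x)  iff wingspan(x) > 0.5
#   flies(x) iff bird(x) and weight(x) < 0.7
def make_example():
    wingspan, weight = random.random(), random.random()
    bird = wingspan > 0.5
    flies = bird and weight < 0.7
    return (wingspan, weight), bird, flies

data = [make_example() for _ in range(500)]

def learn_threshold(values, labels):
    """Pick the cut minimizing empirical error (a PAC-style learner)."""
    best_t, best_err = 0.0, len(values) + 1
    for t in sorted(values):
        err = sum((v > t) != y for v, y in zip(values, labels))
        if err < best_err:
            best_t, best_err = t, err
    return best_t

# learning: fit the bird predicate from (wingspan, bird-label) pairs
t_bird = learn_threshold([x[0] for x, b, f in data],
                         [b for x, b, f in data])

# reasoning: apply the rule  flies(x) <- bird(x) AND weight(x) < 0.7
# with the *learned* bird predicate plugged in
def flies(x):
    return x[0] > t_bird and x[1] < 0.7

accuracy = sum(flies(x) == f for x, b, f in data) / len(data)
```

The point of the toy is the division of labor: the threshold is acquired statistically from examples, while the rule composes it symbolically; unifying guarantees for both halves is exactly the challenge the abstract describes.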
Leslie Valiant was educated at King's College, Cambridge; Imperial College, London; and at Warwick University where he received his Ph.D. in computer science in 1974. He is currently T. Jefferson Coolidge Professor of Computer Science and Applied Mathematics in the School of Engineering and Applied Sciences at Harvard University, where he has taught since 1982. Before coming to Harvard he had taught at Carnegie Mellon University, Leeds University, and the University of Edinburgh.
His work has ranged over several areas of theoretical computer science, particularly complexity theory, learning, and parallel computation. He also has interests in computational neuroscience, evolution and artificial intelligence and is the author of two books, Circuits of the Mind, and Probably Approximately Correct.
He received the Nevanlinna Prize at the International Congress of Mathematicians in 1986, the Knuth Prize in 1997, the European Association for Theoretical Computer Science (EATCS) Award in 2008, and the 2010 A. M. Turing Award. He is a Fellow of the Royal Society (London) and a member of the National Academy of Sciences (USA).
Professor, Princeton University
The Differentiable Camera
Although today's cameras fuel diverse applications, from personal photography to self-driving vehicles, they are designed in a compartmentalized fashion: the optics, sensor, image processing pipeline, and vision models are often devised in isolation, and a camera design is decided by intermediate metrics describing optical performance, signal-to-noise ratio, and image quality, even though only object-detection scores may matter for the camera's application. In this talk, I will present a differentiable camera architecture, including compound optics, sensing and exposure control, image processing, and downstream vision models. This architecture allows us to learn cameras akin to neural networks, guided entirely by downstream loss functions. Learned cameras move computation into the optics, with entirely different optical stacks for different vision tasks (beating existing stacks such as Tesla's Autopilot). The approach allows us to learn entirely new cameras that are ultra-small, a few hundred microns in size, while matching the quality achieved with cm-size compound lenses, and to learn active illumination together with the image pipeline, achieving accurate dense depth and vision tasks in heavy fog, snow, and rain (beating scanning lidar methods). Finally, I will describe an approach that makes the scene itself differentiable, allowing us to backpropagate gradients through the entire capture and processing chain in an inverse-rendering fashion. As such, this novel breed of learned cameras brings unprecedented capabilities in optical design, imaging, and vision.
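As a rough intuition for what "learning a camera like a neural network" means, here is a deliberately tiny sketch: a one-parameter "optics" stage (a 1-D blur) and a one-parameter ISP (a gain) are optimized jointly against a downstream task loss rather than per-stage image-quality metrics. The 1-D scene, the stages, and the target are all our illustrative assumptions, not the speaker's actual architecture.

```python
# A toy "differentiable camera": one optics parameter (blur mix) and
# one ISP parameter (gain) are optimized jointly against a downstream
# task loss, instead of tuning each stage against intermediate
# image-quality metrics. Everything here is an illustrative assumption.

def capture(scene, blur, gain):
    """Optics (neighbor blur on a 1-D scene) followed by an ISP gain."""
    n = len(scene)
    blurred = [
        (1 - blur) * scene[i]
        + blur * 0.5 * (scene[max(i - 1, 0)] + scene[min(i + 1, n - 1)])
        for i in range(n)
    ]
    return [gain * b for b in blurred]

def task_loss(image, target):
    """Downstream loss: mean squared error against the task's target."""
    return sum((a - b) ** 2 for a, b in zip(image, target)) / len(image)

def num_grad(f, params, eps=1e-5):
    """Central-difference gradient; stands in for a real autodiff stack."""
    g = []
    for i in range(len(params)):
        hi = list(params); hi[i] += eps
        lo = list(params); lo[i] -= eps
        g.append((f(hi) - f(lo)) / (2 * eps))
    return g

scene = [0.0, 1.0, 0.0, 1.0, 0.0]
target = [0.0, 2.0, 0.0, 2.0, 0.0]   # the task wants sharp, amplified edges

loss = lambda p: task_loss(capture(scene, p[0], p[1]), target)
params = [0.5, 1.0]                  # [blur, gain], a poor initial camera
for _ in range(500):
    g = num_grad(loss, params)
    params = [p - 0.1 * gi for p, gi in zip(params, g)]
# for this toy target the global optimum is blur = 0, gain = 2
```

The design choice the abstract argues for is visible even here: no intermediate "sharpness" metric is ever computed; the blur parameter is driven toward zero only because that is what the downstream loss rewards.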
Felix Heide is a professor at Princeton University, where he has led the Princeton Computational Imaging Lab since 2020. He co-founded Algolux, where he led research and development of the full autonomous driving stack. His group at Princeton explores imaging and computer vision approaches that allow computers to see and understand what seems invisible today, enabling super-human capabilities for the cameras in our vehicles, personal devices, microscopes, telescopes, and the instrumentation we use for fundamental physics. This includes today's capture and vision challenges: harsh environmental conditions, e.g. imaging under ultra-low or high illumination, or computer vision through dense fog, rain, and snow; imaging at ultra-fast or slow time scales, freezing light in motion; imaging at extreme scene scales, from super-resolution microscopy to kilometer-scale depth sensing; and imaging via proxies, using nearby object surfaces as sensors. Researching vision systems end-to-end, his work lies at the intersection of optics, machine learning, optimization, computer graphics, and computer vision. He received his Ph.D. from the University of British Columbia and was a postdoc at Stanford University. His doctoral dissertation won the Alain Fournier Ph.D. Dissertation Award and the SIGGRAPH Outstanding Doctoral Dissertation Award.
Google Brain, Research Scientist
Building Interpretability in Machine Learning
Interpretability for skeptical minds
Interpretable machine learning has been a popular topic of study in the era of machine learning. But what is interpretability? And are we heading in the right direction? In this talk, I start with a skeptically minded journey through this field and our past selves, before moving on to recent developments in more user-focused methods. The talk will finish with where we might be heading and a number of open questions we should think about.
Been Kim is a staff research scientist at Google Brain. Her research focuses on improving interpretability in machine learning: not only building interpretability methods but also challenging their validity. She gave a talk at the G20 meeting in Argentina in 2019. Her work on TCAV received the UNESCO Netexplo award, was featured at Google I/O '19, and appears in Brian Christian's book "The Alignment Problem". She gave a keynote at ECML 2020 and tutorials on interpretability at ICML, the University of Toronto, CVPR, and Lawrence Berkeley National Laboratory. She was a workshop co-chair of ICLR 2019 and has been a (senior) area chair at conferences including NeurIPS, ICML, ICLR, and AISTATS. She received her Ph.D. from MIT.
Microsoft Research, Head of Amsterdam Lab
Amsterdam Univ. Professor, Machine Learning Research
Qualcomm AI Research, Ex-Vice President
Physics, Quantum Computing, Bayesian Deep Learning
Understanding Matter with Deep learning
Everything tangible in the universe is made of molecules. Yet our ability to digitally simulate even small molecules is rather poor due to the complexities of quantum mechanics. However, a number of advances are converging to dramatically improve our ability to understand the behavior of molecules. First, deep learning, and in particular equivariant graph neural networks, has become an important tool for modeling molecules; such networks are, for instance, the core technology in DeepMind's AlphaFold, which predicts the 3D shape of a protein from its amino acid sequence. Second, despite claims to the contrary, Moore's law is still alive, and in particular the design of ASIC architectures for special-purpose computation will continue to accelerate our ability to break new computational barriers. Finally, there is the rapid advance of quantum computation. While fault-tolerant quantum computation might still be a decade away, its first useful application, simulating (quantum) nature itself, is expected to be much closer.
In this talk I will give my perspective on why I am excited about the opportunities that will come from new breakthroughs in molecular simulation. It may facilitate the search for new sustainable technologies to capture carbon from the air, develop biodegradable plastics, reduce the cost of electrolysis through better catalysts, develop cleaner and cheaper fertilizers, design new drugs to treat disease and so on. Our understanding of matter will be key to unlocking these new materials for the benefit of humanity.
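The equivariance property the abstract mentions can be made concrete with a minimal sketch: a single message-passing layer whose messages depend only on rotation- and translation-invariant quantities, in the spirit of equivariant graph networks (2-D here for brevity). The scalar functions below are fixed illustrative stand-ins for learned MLPs, not any particular published architecture.

```python
import math

# A minimal E(2)-equivariant message-passing layer. Edge messages use
# only invariants (features and squared distances), and coordinates are
# updated along relative-position vectors, so rotating or translating
# the input transforms the output identically. phi_e / phi_x / phi_h
# are illustrative stand-ins for learned networks.

def phi_e(hi, hj, d2):
    """Edge message: depends only on rotation/translation invariants."""
    return math.tanh(hi + hj - 0.1 * d2)

def phi_x(m):
    """Scalar weight applied to the relative-position vector."""
    return 0.1 * math.tanh(m)

def phi_h(h, agg):
    """Node feature update from aggregated messages."""
    return h + math.tanh(agg)

def egnn_layer(coords, feats):
    """One layer: features stay invariant, coordinates move equivariantly."""
    new_coords, new_feats = [], []
    for i in range(len(coords)):
        dx = dy = agg = 0.0
        for j in range(len(coords)):
            if i == j:
                continue
            rx = coords[i][0] - coords[j][0]
            ry = coords[i][1] - coords[j][1]
            m = phi_e(feats[i], feats[j], rx * rx + ry * ry)
            w = phi_x(m)
            dx += w * rx
            dy += w * ry
            agg += m
        new_coords.append((coords[i][0] + dx, coords[i][1] + dy))
        new_feats.append(phi_h(feats[i], agg))
    return new_coords, new_feats

# a toy 3-atom "molecule": 2-D positions plus one scalar feature per atom
coords = [(0.0, 0.0), (1.0, 0.0), (0.0, 2.0)]
feats = [0.5, -0.3, 1.0]
new_coords, new_feats = egnn_layer(coords, feats)
```

Rotating or translating the input molecule rotates or translates the output coordinates identically while leaving the features unchanged; building this symmetry in, rather than hoping a network learns it, is what makes such models effective for molecular data.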
Prof. Dr. Max Welling is a research chair in Machine Learning at the University of Amsterdam and a Distinguished Scientist at MSR. He is a fellow at the Canadian Institute for Advanced Research (CIFAR) and the European Lab for Learning and Intelligent Systems (ELLIS), where he also serves on the founding board. His previous appointments include VP at Qualcomm Technologies, professor at UC Irvine, postdoc at U. Toronto and UCL under the supervision of Prof. Geoffrey Hinton, and postdoc at Caltech under the supervision of Prof. Pietro Perona. He finished his Ph.D. in theoretical high energy physics under the supervision of Nobel laureate Prof. Gerard 't Hooft.
Max Welling served as associate editor-in-chief of IEEE TPAMI from 2011 to 2015, has served on the advisory board of the NeurIPS Foundation since 2015, and was program chair and general chair of NeurIPS in 2013 and 2014, respectively. He was also program chair of AISTATS in 2009 and ECCV in 2016, and general chair of MIDL 2018. He is the recipient of the ECCV Koenderink Prize in 2010 and the ICML Test of Time Award in 2021. He directs the Amsterdam Machine Learning Lab (AMLAB) and co-directs the Qualcomm-UvA deep learning lab (QUVA) and the Bosch-UvA Deep Learning lab (DELTA).