
In the future, AI technology will become much more prevalent and we will interact with smart devices on a daily basis.
These devices will understand our needs via spoken commands, visual gestures, and other cues and be able to help and guide us appropriately.
New AI technology in the home and in the workplace will enable us to carry out our routines more efficiently; however, we need to ensure that these technologies operate in a safe and secure manner.
We need new AI developed at Samsung Research to provide a beneficial customer experience for the billions of Samsung users and devices. AI needs to be better aware of what is happening when our customers use our devices, and to anticipate and respond to their needs. To do this, we are researching and developing an “Among-Device AI” that allows for a seamless, shared experience across the myriad of Samsung devices.
We develop fundamental AI methods and integrate various applied AI methods.
• We are exploring the frontier of machine learning and AI. One fundamental challenge we are interested in is integrating learning and perception with high-level knowledge, to learn more effectively from less data and to reason more robustly, reflecting the user’s context and domain knowledge.
• We are interested in AI that helps users intuitively interact with Samsung’s devices. Such AI can better understand the user’s goal and autonomously execute a sequence of actions, helping users cope with the complexity of modern devices. We also apply various AI technologies to digital humans by integrating speech recognition, language understanding, and sensor and environmental understanding.
• We investigate simulation technologies for AI and robots to rapidly test new technologies in an environment close to the real world.
We develop technologies that allow devices to communicate with people on a human level by interpreting their intent as they speak. To achieve this, we are working on key technologies such as speech recognition and synthesis, language understanding and conversational response, machine translation, and Q&A. These technologies operate efficiently across all devices, at the edge, and on the cloud. Additionally, we are conducting preemptive research for future devices, such as lightweight speech and language engines and large-scale language understanding. As a result of this research, the devices around you should make daily life easier and improve your quality of life. Besides "understanding complex and diverse speech", we also aim to develop user-centered technologies that can "understand even if you say less."
With an increasing number of Samsung devices having cameras, such as smartphones, TVs, and even ovens, it is time to take a leap from smart devices to AI devices with the help of vision AI. To do so, we provide a holistic vision pipeline that delivers vision AI services for Samsung devices, from low-level camera and sensor processing to high-level visual recognition and visual reasoning. On the low-level side, we focus on neural processing for visual quality enhancement; on the high-level side, on visual understanding of various kinds of visual context, such as object status and people and pet activities, especially in the home environment. In addition, we provide vision AI services that put multiple devices together in a collaborative way.
As one of the world’s leading technology companies, Samsung Electronics devotes its human resources and technology to develop superior products and services while contributing to society.
One of our goals is to develop and connect AI services across our diverse product portfolio and to distribute such services equally and broadly, to benefit all of humanity.
In this respect, we are committed to developing user-based AI products and services based on our AI Visions of 'User Centric', 'Always There', 'Always Safe', 'Always Helpful', and 'Always Learning'.
AI technology has limitless potential to bring a whole new dimension of experience, but at the same time it may have negative social and ethical implications. For AI and all its applications to be implemented in a sustainable and ethical way, we have announced the principles of 'Fairness', 'Transparency' and 'Accountability' for AI ethics. These principles are established not only to comply with applicable laws, but also to fulfill our social and ethical responsibilities.
These principles for AI ethics will be incorporated into our internal guidelines and training to educate and guide our employees for ethical development and use of AI.
1. Fairness
The company will strive to apply the values of equality and diversity to AI systems throughout their entire lifecycle.
The company will strive not to reinforce or propagate negative or unfair bias.
The company will strive to provide easy access for all users.
2. Transparency
Users will be aware that they are interacting with AI.
AI will be explainable for users to understand its decision or recommendation to the extent technologically feasible.
The process of collecting or utilizing personal data will be transparent.
3. Accountability
The company will strive to apply the principles of social and ethical responsibility to AI systems.
AI systems will be adequately protected and have security measures to prevent data breaches and cyber attacks.
The company will strive to benefit society and promote corporate citizenship through AI systems.
[Blog] Hierarchical Timbre-Cadence Speaker Encoder for Zero-shot Speech Synthesis
In recent years, text-to-speech (TTS) has seen remarkable improvement with the emergence of various end-to-end TTS models [1, 2, 3]. Through these advanced models, TTS has expanded its field from models built with professional voice actors to personalized TTS.
On September 19, 2023
[Blog] A More Accurate Internal Language Model Score Estimation for the Hybrid Autoregressive Transducer
Language model (LM) adaptation in hybrid autoregressive transducer (HAT) is justified only when the transducer logits and the sum of speech and text logits in the label estimation sub-networks are approximately the same.
On September 19, 2023
[Blog] Website Clustering via Transformer Context Models
Individual characteristics such as age and gender are often relevant features in tasks such as lifetime value or uplift modelling. These variables are sufficiently good at describing some high-level heuristics about a population.
On September 8, 2023
[Blog] Multilingual Open Custom Keyword Spotting Testset
In the ever-growing landscape of technology, voice intelligence-based solutions have evolved and reshaped how we interact with devices, applications, and services.
On August 30, 2023
[Blog] RandMasking Augment: A Simple and Randomized Data Augmentation for Acoustic Scene Classification
Sound recognition aims to enable intelligent systems to understand acoustic characteristics of the target sound or the surrounding environment based on acoustic features. Samsung also provides many sound recognition services through various devices.
On August 1, 2023
[Blog] Short-Term Memory Convolutions
The successful deployment of a signal processing system necessitates the consideration of various factors. Among these factors, system latency plays a significant role, as humans are highly sensitive to delays in perceived signals.
On July 28, 2023
[Blog] Prompt Based and Cross-Modal Retrieval Enhanced Visual Word Sense Disambiguation
The Visual Word Sense Disambiguation (VWSD) shared task aims at selecting the image among candidates that best interprets the semantics of a target word with a short-length phrase for English, Italian, and Farsi.
On July 25, 2023
[Blog] An AL-R Model for Multilingual Complex Named Entity Recognition
This paper describes our system for SemEval-2023 Task 2 Multilingual Complex Named Entity Recognition (MultiCoNER II).
On July 24, 2023
[Blog] Pretrained Bidirectional Distillation for Machine Translation
Initializing parameters by a pretrained masked language model (LM) [1] is a knowledge transfer method widely applied to natural language processing tasks. Following its success, pretrained neural machine translation (NMT) models have attracted more and more research interest [2,3,4,5].
On July 19, 2023
[Blog] Self-Supervised Accent Learning for Under-Resourced Accents Using Native Language Data
The last decade has seen a rise in the number and quality of neural-network-based ASR models, for example the ability to process sequences [1] through recurrent neural network (RNN) based encoder-decoder models, often using an attention [2] mechanism to preserve context while training in an end-to-end (E2E) [3] manner.
On July 12, 2023
[Blog] TrickVOS: A Bag of Tricks for Video Object Segmentation
Tracking objects in a video is a foundational task in computer vision and has many practical applications, such as robotics, extended reality and content creation.
On July 5, 2023
[Blog] [CVPR 2023 Series #7] Zero-Shot Everything Sketch-Based Image Retrieval, and in Explainable Style
Zero-shot sketch-based image retrieval (ZS-SBIR) is a central problem to sketch understanding [6]. This paper aims to tackle all problems associated with the current status quo for ZS-SBIR, including category-level (standard) [4], fine-grained [1], and cross-dataset [3].
On June 21, 2023
[Blog] [CVPR 2023 Series #6] LASP: Text-to-Text Optimization for Language-Aware Soft Prompting of Vision & Language Models
Welcome to our research blog post, where we delve into the fascinating world of vision and language models. In recent years, large-scale pre-training of neural networks has paved the way for ground-breaking advancements in Vision & Language (V&L) understanding.
On June 14, 2023
[Blog] [CVPR 2023 Series #5] MobileVOS: Real-Time Video Object Segmentation Contrastive Learning Meets Knowledge Distillation
Video Object Segmentation (VOS) is an important problem in computer vision with many interesting applications, such as video editing, surveillance, autonomous driving, and augmented reality. In essence, VOS is the task of finding and following objects across multiple frames in a video.
On June 9, 2023
[Blog] [CVPR 2023 Series #4] A Unified Pyramid Recurrent Network for Video Frame Interpolation
Video frame interpolation (VFI) is a classic low-level vision task that synthesizes non-existent intermediate frames between original consecutive frames. Before and after frame interpolation, if the time interval of original frames is fixed, VFI can generate high-rate smoother videos; if the frame rate is fixed, VFI can produce slow-motion videos.
On June 7, 2023
[Blog] [CVPR 2023 Series #3] GENIE: Show Me the Data for Quantization
As state-of-the-art AI models have grown ever larger and deeper, model compression has been attracting more attention as a way to deploy models on edge devices without accessing cloud servers.
On May 31, 2023
[Blog] Dynamic VFI (Video Frame Interpolation) with Integrated Difficulty Pre-Assessment
Video frame interpolation (VFI) aims to generate intermediate frames between consecutive frames. VFI is widely applied in industrial products, including slow-motion video generation, video editing, intelligent display devices, etc. Despite recent advances in deep learning bringing performance improvements
On May 26, 2023
[Blog] [CVPR 2023 Series #2] StepFormer: Self-supervised Step Discovery and Localization in Instructional Videos
Observing someone perform a task (e.g., cooking, assembling furniture or fixing an electronic device) is a common approach for humans to acquire new skills. Instructional videos provide an excellent resource to learn such procedural activities for both humans and AI agents.
On May 24, 2023
[Blog] [CVPR 2023 Series #1] SPIn-NeRF: Multiview Segmentation and Perceptual Inpainting with Neural Radiance Fields
Smartphones have enabled unprecedented accessibility to cameras and images; in conjunction with social media, content creation (e.g., images and videos) has never been more popular.
On May 17, 2023
[Blog] GOHSP: A Unified Framework of Graph and Optimization-based Heterogeneous Structured Pruning for Vision Transformer
Recently proposed vision transformers (ViTs) have shown very impressive empirical performance in various computer vision tasks, and they are viewed as an important type of foundation model.
On April 27, 2023
[Blog] LP-IOANet: Efficient High Resolution Document Shadow Removal
Imagine you came back from a work trip and need to submit an expense claim with the receipts you’ve kept, or your meeting has just finished and you need to transfer to your PC the meeting minutes you have on a piece of paper.
On April 4, 2023
[Blog] Multi-Stage Progressive Audio Bandwidth Extension
Audio bandwidth extension can enhance subjective sound quality by increasing the bandwidth of the audio signal. We present a novel multi-stage progressive method for time-domain causal bandwidth extension. Each stage of the progressive model contains a lightweight scale-up module to generate the high-frequency signal and a supervised attention module to guide feature propagation between stages.
On February 8, 2023
Samsung Electronics Showcases Award-Winning Machine Translation at WMT
At the Workshop on Machine Translation (WMT), one of the biggest events for machine translation research, Samsung Electronics joined the ranks of researchers from all over the world to discuss new and innovative ways to understand the human language using machines and computer programs.
On January 30, 2023
[Blog] Enabling Accurate Positioning in NLOS Scenarios by Hybrid Machine Learning with Denoising and Inpainting
Interest in accurate positioning has been growing, driven by thriving applications and services including cellular network operation. Classical positioning methods mostly rely on information extracted from channel measurements, e.g., time of arrival and angle of arrival (or departure).
On January 25, 2023
[Blog] FlowFormer: A Transformer Architecture for Optical Flow
Optical flow aims to estimate per-pixel correspondences between a source image and a target image, in the form of a 2D displacement field.
On January 12, 2023
[Blog] FedMargin - Federated Learning via Attentive Margin of Semantic Feature Representations
Let's play a simple game. Open the photo gallery on your phone and briefly scroll your images, do you see some patterns and recognize the objects you like on the images?
On January 5, 2023
[Blog] Task Generalizable Spatial and Texture Aware Image Downsizing Network
Convolutional neural networks (CNNs) are widely used today in various vision tasks such as classification, detection and segmentation.
On December 14, 2022
[Blog] Resolving Privacy-Personalization Paradox
Breakthroughs in communication technology and the Internet, followed by increasing digitisation and the proliferation of personal devices such as mobiles and smart appliances, have made it possible to connect and digitize many human activities. This has empowered industries to collect, with consent, large amounts of user data, which is used to offer personalised services and products that enhance the customer experience.
On November 9, 2022
Samsung Unveils Vision for the Future of AI at Samsung AI Forum 2022
A host of world-renowned academics, researchers from Samsung Electronics and industry experts will come together to share their insights on the future of artificial intelligence at Samsung AI Forum 2022.
On November 8, 2022
[Blog] pMCT - Patched Multi-Condition Training
The clashing of pans and pots as you cook and ask your voice assistant what you can use to replace eggs in the recipe. The excited, overlapping conversations as you ask which of Henry VIII's wives survived, trying to settle a bet.
On November 7, 2022
[Blog] [INTERSPEECH 2022 Series #6] Multi-stage Progressive Compression of Conformer Transducer for On-device Speech Recognition
Automatic Speech Recognition (ASR) systems on smart devices have traditionally relied on server-based models. This involves sending audio data to the server and receiving a text hypothesis once the server model completes decoding.
On October 27, 2022
[Blog] [INTERSPEECH 2022 Series #5] Prototypical Speaker-Interference Loss for Target Voice Separation Using Non-Parallel Audio Samples
Deep-learning-based audio signal processing is becoming increasingly popular for solving the cocktail party problem [1-5]. A noisy signal, obtained by mixing a clean signal with noise, is paired with the clean signal for training a speech enhancement or source separation model [6-7].
On October 20, 2022
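The (noisy, clean) pairing described in this excerpt can be sketched generically by mixing a clean signal with noise scaled to a target signal-to-noise ratio. This is an illustrative sketch of the common practice, not the paper's exact data pipeline, and the function name `mix_at_snr` is our own:

```python
import math

def mix_at_snr(clean, noise, snr_db):
    """Mix noise into a clean signal so the result has the target SNR (dB).

    clean, noise: equal-length lists of samples. Returns the noisy signal;
    the (noisy, clean) pair can then serve as a training example.
    """
    p_clean = sum(x * x for x in clean) / len(clean)
    p_noise = sum(x * x for x in noise) / len(noise)
    # Scale noise so that 10*log10(p_clean / p_scaled_noise) == snr_db.
    scale = math.sqrt(p_clean / (p_noise * 10 ** (snr_db / 10)))
    return [c + scale * n for c, n in zip(clean, noise)]

clean = [1.0, -1.0, 1.0, -1.0]
noise = [1.0, 1.0, -1.0, -1.0]
noisy = mix_at_snr(clean, noise, 0.0)  # equal signal and noise power
```

At 0 dB the signals are mixed at equal power; raising `snr_db` shrinks the noise contribution accordingly.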
[Blog] Enhanced Bi-directional Motion Estimation for Video Frame Interpolation
Video frame interpolation aims to increase the frame rate of videos, by synthesizing non-existent intermediate frames between original successive frames. With recent advances in optical flow, motion-based interpolation has developed into a promising framework.
On October 18, 2022
Under the themes "Shaping the Future with AI and Semiconductor" and "Scaling AI for the Real World," renowned experts will share the latest AI research achievements
Samsung Electronics today announced that it will host the Samsung AI Forum 2022 from November 8 to 9.
On October 18, 2022
[Blog] [INTERSPEECH 2022 Series #4] Cross-Modal Decision Regularization for Simultaneous Speech Translation
In today’s world of virtual meetings, conferences, and multi-media, automatic speech translation offers a wide variety of applications. Traditional offline speech translation models used a cascade of speech recognition and text translation. In our prior works [1], we developed efficient techniques for end-to-end speech translation which outperforms traditional cascaded approaches.
On October 13, 2022
[Blog] [INTERSPEECH 2022 Series #3] Bunched LPCNet2: Efficient Neural Vocoders Covering Devices from Cloud to Edge
Nowadays, as many types of edge devices emerge, people use text-to-speech (TTS) services in their daily lives without device constraints. Although most TTS systems are now launched on cloud servers, running on edge devices resolves significant concerns such as latency, privacy, and internet connectivity issues.
On October 6, 2022
[Blog] [INTERSPEECH 2022 Series #2] FedNST: Federated Noisy Student Training for Automatic Speech Recognition
Voice assistants have seen an ever-increasing uptake by consumers worldwide, leading to the availability of vast amounts of real-world speech data. Research efforts around the world focus on effectively leveraging this data to improve the accuracy and robustness of state-of-the-art automatic speech recognition (ASR) models.
On September 29, 2022
[Blog] [INTERSPEECH 2022 Series #1] Human Sound Classification based on Feature Fusion Method with Air and Bone Conducted Signal
Human sound classification is similar to acoustic scene classification (ASC) and aims to distinguish the kinds of sounds that the human body makes.
On September 22, 2022
[Blog] Detecting Depression, Anxiety and Mental Stress in One Sequential Model with Multi-task Learning
Depression, anxiety and excessive mental stress are three common symptoms of mental disorders in modern life, which threaten people’s health and heavily affect their work and quality of life.
On August 18, 2022
[Blog] Task-Driven and Experience-Based Question Answering Corpus for In-Home Robot Application in the House3D Virtual Environment
At present, more and more work has begun to focus on long-term housekeeping robot scenarios. Naturally, we wonder whether a robot can answer questions raised by its owner according to the actual situation at home.
On August 8, 2022
[Blog] Dynamic Multi-scale Network for Dual-pixel Images Defocus Deblurring with Transformer
Recent works achieve excellent results in the defocus deblurring task on dual-pixel data using convolutional neural networks (CNNs), while the scarcity of data limits the exploration of vision transformers in this task.
On August 5, 2022
[Blog] [CVPR 2022 Series #6] Gaussian Process Modeling of Approximate Inference Errors for Variational Autoencoders
Variational Autoencoder (VAE) [5,6] is one of the most popular deep latent variable models in modern machine learning. In VAE, we have a rich representational capacity to model a complex generative process of synthesizing images (x) from the latent variables (z).
On July 20, 2022
[Blog] [CVPR 2022 Series #5] P>M>F: The Pre-Training, Meta-Training and Fine-Tuning Pipeline for Few-Shot Learning
How different are few-shot learning (FSL) and classical supervised learning? They are indeed very different in the sense of classical generalization theory, but we would like to argue that they are not that different in practice.
On July 13, 2022
[Blog] [CVPR 2022 Series #2] Day-to-Night Image Synthesis for Training Nighttime Neural ISPs
The Computer Vision and Pattern Recognition Conference (CVPR) is a world-renowned international Artificial Intelligence (AI) conference co-hosted by the Institute of Electrical and Electronics Engineers (IEEE) and the Computer Vision Foundation (CVF) which has been running since 1983.
On June 22, 2022
[Blog] TFPSNet: Time-Frequency Domain Path Scanning Network for Speech Separation
Deep learning techniques have taken a big step forward on the speech separation task. The current leading methods are based on the time-domain audio separation network (TasNet) [1]. TasNet uses a learnable encoder and decoder to replace the fixed T-F domain transformation.
On June 16, 2022
[Blog] [CVPR 2022 Series #1] Probabilistic Procedure Planning in Instructional Videos
The IEEE / CVF Computer Vision and Pattern Recognition Conference (CVPR) is a world-renowned international Artificial Intelligence (AI) conference co-hosted by the Institute of Electrical and Electronics Engineers (IEEE) and the Computer Vision Foundation (CVF) which has been running since 1983.
On June 15, 2022
[Blog] Extending NNStreamer: Pipeline Framework and Among-Device AI
This blog introduces a paper “Toward Among-Device AI from On-Device AI with Stream Pipelines” [1], and briefly explains new functions to be released on Tizen 7.0 Machine Learning feature. NNStreamer is a Linux Foundation (LF AI & Data) open source project [2] accepting contributions of the general public: https://github.com/nnstreamer/nnstreamer
On June 7, 2022
[Blog] MetaCC: A Channel Coding Benchmark for Meta-Learning
TL;DR: We propose channel coding as a novel benchmark to study several aspects of meta-learning, including the impact of task distribution breadth and shift on meta-learner performance, which can be controlled in the coding problem.
On June 2, 2022
[Blog] Drop-DTW: A Differentiable Method for Sequence Alignment that can Handle Outliers
The problem of sequence-to-sequence alignment is central to many computational applications. Aligning two sequences (e.g., temporal signals) entails computing the optimal pairwise correspondence between the sequence elements while preserving their match orderings.
On May 18, 2022
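For readers unfamiliar with sequence alignment, the order-preserving correspondence the Drop-DTW excerpt mentions can be sketched with classic dynamic time warping, the baseline the paper builds on. This is an illustration only, not the paper's differentiable, outlier-aware method, and `dtw_cost` is our own name:

```python
# Classic DTW via dynamic programming: find the minimum-cost alignment
# between two sequences while preserving the order of matches.
def dtw_cost(a, b):
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])  # pairwise match cost
            # Order-preserving moves: diagonal match, or advance one
            # sequence while re-matching the other's current element.
            D[i][j] = cost + min(D[i - 1][j - 1], D[i - 1][j], D[i][j - 1])
    return D[n][m]

print(dtw_cost([1, 2, 3], [1, 2, 2, 3]))  # identical up to repetition -> 0.0
```

Drop-DTW's contribution, per the excerpt, is allowing elements (outliers) to be skipped entirely and making the alignment differentiable, neither of which this vanilla recursion does.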
[Blog] Empowering the Telecommunication System with Reinforcement Learning
Telecommunications refers to the transmission of information remotely, and helps connect humans over distances greater than those feasible with the unaided human voice or vision. Telecommunications can be implemented through various types of technologies. Depending on how the signals are transmitted, it can be broadly divided into two categories:
On April 8, 2022
Samsung Ranks 1st in the Top Semantic Evaluation International Competition
Samsung R&D Institute China–Beijing (SRC-B) took part in the SemEval (International Workshop on Semantic Evaluation) 2022 Task 5 MAMI (Multimedia Automatic Misogyny Identification) Reading Comprehension of Abstract Meaning and achieved 1st place in both subtasks. SRC-B’s related paper was also accepted by the said conference.
On April 8, 2022
Dual-Pixel Image Defocus Deblurring Technique at the ICME 2022
The paper published by Samsung R&D Institute China–Beijing (SRC-B) has been recently accepted by the Institute of Electrical and Electronics Engineers (IEEE) International Conference on Multimedia and Expo (ICME) 2022. This paper explores the vision transformer’s potential in the dual-pixel image defocus deblurring field and how it can achieve the best performance.
On April 8, 2022
Samsung's New Predictable Sparse Attention Technique
Language dominates and shapes our lives in its written and spoken forms. Computational linguistics is the scientific study of language from a computational perspective. The Annual Meeting of the Association for Computational Linguistics (ACL) is organized by the Association for Computational Linguistics and maintains rigorous acceptance standards. A paper published by the multimodal MRC division of Samsung R&D Institute China–Beijing (SRC-B) has been recently accepted by ACL 2022.
On April 8, 2022
[Blog] Feature Kernel Distillation
We study the significance of Feature Learning in Neural Networks (NNs) for Knowledge Distillation (KD), a popular technique to improve an NN model’s generalisation performance using a teacher NN model. We propose a principled framework Feature Kernel Distillation (FKD), which performs distillation directly in the feature space of NNs and is therefore able to transfer knowledge across different datasets.
On April 7, 2022
[Blog] Mobile Twin Recognition
Almost all of us have one or more smartphones, where we keep personal data such as credit card information, photos, and business contacts. It is therefore important to protect this valuable information from unauthorized access.
On March 30, 2022
[Blog] Using sound to add scale to computer vision
Imagine seeing an image of a bright disk on a dark background. Is it a picture of a planet or just a ping pong ball? This example demonstrates that even though we might infer the shape of an object from an image, we cannot obtain the exact scale from a single image.
On March 22, 2022
Samsung AI Center – Montreal Researchers Win the Best Paper Award at IEEE GLOBECOM 2021
Researchers from Samsung AI Center – Montreal (SAIC-Montreal) received the Best Paper Award at the annual IEEE (Institute of Electrical and Electronics Engineers) Global Communications Conference (GLOBECOM) 2021 Mobile and Wireless Networks Symposium. GLOBECOM is the IEEE Communications
On March 7, 2022
[Blog] FjORD: Fair and Accurate Federated Learning under heterogeneous targets with Ordered Dropout
Despite the soaring use of convolutional neural networks (CNNs) in mobile applications, uniformly sustaining high-performance inference on mobile has been elusive due to the excessive computational demands of modern CNNs and the increasing diversity of deployed devices. A popular alternative comprises offloading CNN processing to powerful cloud-based servers.
On March 7, 2022
[Blog] Smart at what cost? Characterising Mobile DNNs in the wild
Smartphones are all around us today, available in various tiers and form factors. A large part of what makes them 'smart' is (i) their integration of sensors, so as to 'sense' their environment, and (ii) the ability to run ML models, so as to 'understand' their environment.
On March 7, 2022
[CES 2022] Live Video from Las Vegas
Want to see what's going on at Samsung Research's CES booth, "Future Home Experience"? In the video below, you can see more about our new tech trio, an AI Avatar, the Samsung Bot i, and the Samsung Bot Handy.
On January 7, 2022
[CES 2022] Video : Virtual Booth Tour
At CES 2022, Samsung Research is showcasing AI and robotics technologies that cater to various lifestyles and make smart home experiences truly seamless. We created this virtual walkthrough of our booth, “Future Home Experience”.
On January 6, 2022
[CES 2022] Samsung Research's New Tech Trio ① AI Avatar – Who is he/she?
Samsung Electronics’ CES 2022 booth is full of eye-catching innovations, but which products and technologies will viewers and visitors absolutely not want to miss? The answer is Samsung Research’s latest AI and robotics technology! The theme for Samsung Research’s showcase at CES 2022 is “Digital, Meet Physical”. We show how the personalized experiences that millennials and zoomers value can be seamlessly connected to the digital world.
On January 6, 2022
[CES 2022] Video : Digital, meet Physical
Welcome to Samsung Research's Future Home Experience at CES 2022, where we share how our AI Avatar, Samsung Bot i and Bot Handy are paving the way for optimized, intelligent, and personalized experiences.
On January 6, 2022
[CES 2022] Experts Behind Samsung's Newest Products and Technologies Discuss Innovating for the Future ②
CES 2022 offers visitors and online viewers an opportunity to see for themselves how Samsung has been innovating for the future. From mobile devices, displays and home appliances, to products and services that raise the bar for innovation, the company’s showcase at the world’s largest technology show offers a peek at what our daily lives will look like very soon.
On January 6, 2022
[CES 2022] Samsung Introduces a Futuristic Lifestyle with Innovative Technology, Connecting the Customer Experience at CES 2022
CES 2022, the world’s largest electronics exhibition, will open from January 5 to 7 in Las Vegas, United States (local time). Samsung Electronics will showcase user-personalized solutions based on innovative technology such as AI, IoT, and 5G, and will also invite users to a futuristic lifestyle connecting the customer experience.
On January 5, 2022
[Samsung AI Forum 2021] Advancing AI Technologies That Can Help Humankind
From November 1–2, Samsung Electronics held its fifth Samsung AI Forum (SAIF) entirely online.
On November 8, 2021
[Blog] ASAM: Adaptive Sharpness-Aware Minimization for Scale-Invariant Learning of Deep Neural Networks
Recently we proposed a new optimization algorithm called Adaptive Sharpness-Aware Minimization (ASAM), which pushes the limit of deep learning via PAC-Bayesian theory. ASAM has been improving generalization performance across various tasks by leveraging the geometry of the loss landscape, which is highly correlated with generalization. In recognition of its theoretical contribution and practicality, our paper was published at the 38th International Conference on Machine Learning (ICML 2021).
On November 4, 2021
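For intuition about what a sharpness-aware update does, here is a toy one-dimensional sketch of the kind of step ASAM refines: evaluate the gradient at an adversarially perturbed point, then descend. It uses a fixed perturbation radius and omits ASAM's adaptive, scale-invariant rescaling; all names are illustrative:

```python
# Toy 1-D sharpness-aware step: take the gradient at the worst-case
# nearby point (radius rho), then apply it at the current point.
def sam_step(w, grad_fn, lr=0.1, rho=0.05):
    g = grad_fn(w)
    eps = rho * (1.0 if g >= 0 else -1.0)  # rho * g/|g| in one dimension
    g_perturbed = grad_fn(w + eps)         # gradient at the perturbed point
    return w - lr * g_perturbed

# Minimize f(w) = w^2 (gradient 2w): iterates settle near the flat minimum.
w = 1.0
for _ in range(50):
    w = sam_step(w, lambda x: 2.0 * x)
```

In higher dimensions the perturbation is `rho * g / ||g||`; ASAM's insight is to normalize this perturbation per parameter so the notion of sharpness is invariant to weight rescaling.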
[Samsung AI Forum 2021] Day 2: Harnessing AI To Improve People's Lives
A host of world-renowned academics and researchers from Samsung Electronics came together to share their insights on the future of artificial intelligence at Samsung AI Forum.
On November 2, 2021
Changing the Daily Life of the Future: SDC21 Experts Discuss Next-Generation Technologies
At the Samsung Developer Conference 2021 (SDC21), Samsung Electronics is presenting its consumer-centric approach to innovation in partnership with its developer communities. To learn about the innovative collaborative technologies coming in the near future that are set to transform users’ lives by making them richer and more convenient, Samsung Newsroom sat down with SDC21 session speakers.
On October 27, 2021
Samsung AI Forum 2021 Explores Future of AI Research
Leading academics, industry experts to discuss "AI Research for Tomorrow" and "AI in a Human World"
On October 6, 2021
[Blog] Meta-Learning in Neural Networks
AI methods are advancing across a range of applications from computer vision and natural language processing to autonomous control. There are many facets to AI’s capabilities that determine how useful it is in our lives. Besides the obvious metrics of peak accuracy or efficacy of an AI system at its task, other facets include: How effectively can it learn a new task from a small amount of data or experience?
On September 2, 2021
[Blog] Zero-Cost Neural Architecture Search
Neural architecture search (NAS) is quickly becoming the standard methodology for designing deep neural networks (DNNs). NAS replaces human-designed DNNs by automatically searching for a DNN topology and size that maximize performance.
On August 16, 2021
[Blog] Leveraging the Availability of Two Cameras for Illuminant Estimation
White balance is an essential step in the camera imaging process; it ensures that the colors of the captured images are correct; see Figure 1 for an example. White balance involves estimating the scene illumination, which is then used to correct the image colors. Recent state-of-the-art methods rely on a single input image to estimate the scene illumination and typically involve training models with large numbers of parameters on large datasets.
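The white-balance pipeline described above can be sketched in a few lines: estimate the scene illuminant as an RGB vector, then apply a per-channel (diagonal) gain to correct the image. The gray-world estimator used below is a classic single-image baseline included only to make the sketch runnable; it is an assumption for illustration, not the two-camera method discussed in the blog post.

```python
import numpy as np

def gray_world_illuminant(img):
    # Gray-world assumption: the average scene color approximates
    # the illuminant color.
    return img.reshape(-1, 3).mean(axis=0)

def white_balance(img, illuminant):
    # Diagonal correction: divide each channel by the estimated
    # illuminant, normalized so the green-channel gain is 1.
    gains = illuminant[1] / np.maximum(illuminant, 1e-8)
    return np.clip(img * gains, 0.0, 1.0)

# Toy image with a warm (reddish) color cast applied channel-wise.
rng = np.random.default_rng(0)
img = rng.random((8, 8, 3)) * np.array([1.0, 0.8, 0.6])

corrected = white_balance(img, gray_world_illuminant(img))
```

After correction the per-channel means are equalized, which is exactly what the gray-world prior enforces; learned estimators replace that prior with a model of plausible illuminants.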
On July 13, 2021
Open-Source Project for MPEG-5 EVC (Essential Video Coding)
The COVID-19 outbreak has changed our lives and relationships over the past year. We now have no choice but to communicate more often through the Internet, and one of the key technologies enabling that communication is video compression (coding).
On July 8, 2021
[Blog] Learning to Align Temporal Sequences
Temporal sequences (e.g., videos) are an appealing data source, as they provide rich information and additional constraints to leverage in learning. By far the main focus of temporal sequence analysis in computer vision has been on learning representations (i.e., compact abstractions of the input data) targeting high-level distinctions between signals (e.g., action classification, “What action is present in the video?”).
On June 21, 2021
SRC-B Won the 1st Place in CVPR-NAS 2021 Competition
The Neural Architecture Search competition (CVPR21-NAS), hosted at the 2021 Conference on Computer Vision and Pattern Recognition, is highly influential worldwide.
On June 18, 2021
This Year SRPOL Takes WAT 2021
The Machine Translation team of Samsung R&D Institute Poland (SRPOL) has just written another chapter in its long history of development and improvement. This year, the SRPOL MT team competed with the best in a task whose goal was to translate between 10 Indian languages and English, in both directions.
On June 2, 2021
[Blog] Video ScaleNet (VSN) – Towards the Next Generation Video Streaming Service
There is no doubt that the pandemic has changed the way we communicate, dramatically and quickly, pushing us ever deeper into a virtual world. Some might say these changes were already on their way and nothing special, but no one can deny how radically the phenomenon has unfolded.
On May 24, 2021
Samsung R&D Institute China-Beijing Takes Top Places in Prominent AI Challenges
Samsung R&D Institute China-Beijing (SRC-B) emerged as one of the leading teams at two recent, highly prestigious global AI challenges: the New Trends in Image Restoration and Enhancement (NTIRE) workshop at the Conference on Computer Vision and Pattern Recognition (CVPR), and the International Workshop on Semantic Evaluation (SemEval) at the Association for Computational Linguistics (ACL).
On April 27, 2021
[Blog] Towards Explainable Representations for Natural Language Understanding
Recent successes in machine learning have led to numerous Artificial Intelligence applications, such as automatic translation and chatbots. However, the effectiveness of these systems is limited by their opaqueness: predictions made by machine learning models cannot be easily understood by humans, and hence it is hard to discern what the model learns well and what it does not, a fundamental step toward building more robust AI systems.
On April 20, 2021
Leveraging Human Intervention for Continuously Personalized AI Robot: UX Innovation Lab at Samsung Research Wins HCI Korea 2021 Best Paper Award
Researchers from the UX Innovation Lab at Samsung Research won the Best Paper Award at Human-Computer Interaction Korea 2021 for their work “Human-Centered Design for Robot-as-a-Service (RaaS): Supporting Caregivers at Home.” The HCI Society of Korea is a group of researchers who study the theory and application of human-computer interaction (HCI).
On March 15, 2021
"When deep learning meets logic": a three days virtual workshop on neural-symbolic integration sponsored by Samsung Research.
The effort to integrate logic with deep learning has intensified in recent years and has the potential to give rise to a new computational paradigm in which symbolic knowledge is used to assist deep learning systems or extend their capabilities, while offering, at the same time, a path towards the grounding of symbols and the induction of knowledge from low-level sensory data.
On January 15, 2021
Samsung's head researcher wants human–AI interactions to be a multisensory experience
Sebastian Seung’s new role at Samsung’s R&D Campus in the South Korean capital of Seoul is a world away from his previous post at Princeton University in New Jersey. Appointed as the head of Samsung Research in June 2020, Seung is leading thousands of people in 15 research centres, who are investigating technologies such as computer vision, augmented reality, robotics and 5G communications networks.
On December 9, 2020
Samsung AI Forum 2020: Humanity Takes Center Stage in Discussing the Future of AI
Each year, Samsung Electronics’ AI Forum brings together experts from all over the world to discuss the latest advancements in artificial intelligence (AI) and share ideas on the next directions for the development of these technologies.
On November 10, 2020
[Samsung AI Forum 2020] Day 2: Putting People at the Center of AI Development
The Samsung AI Forum is an annual event that brings together globally renowned experts in the industry as well as across academia to serve as a platform with which to disseminate the very latest in AI trends, technologies, and research.
On November 3, 2020
'Samsung AI Forum 2020' Explores the Future of Artificial Intelligence
Samsung Electronics announced today that it will hold the Samsung AI Forum 2020 online via its YouTube channel for two days, from November 2nd to 3rd. Marking its fourth anniversary this year, the forum gathers world-renowned academics and industry experts on artificial intelligence (AI) and serves as a platform for exchanging ideas, insights and the latest research findings, as well as for discussing the future of AI.
On October 6, 2020
LF AI Foundation Announces NNStreamer as Its Newest Incubation Project
The Linux Foundation AI Project (LF AI), an organization created to establish an ecosystem for open-source innovation in the fields of artificial intelligence (AI), machine learning and deep learning, recently announced NNStreamer as its latest Incubation Project.
On May 11, 2020
Experts Discuss Taking AI to the Next Level at Samsung AI Forum 2019
Samsung Electronics is committed to leading advancements in the field of artificial intelligence (AI), with the hopes of ushering in a brighter future. To discuss what the future may hold for AI technology, and to address and overcome the technological challenges that researchers are currently facing, the company recently hosted its third annual Samsung AI Forum.
On November 8, 2019
[Hearing from an AI Expert – 6] AI and 5G: A Two-Pronged Revolution
One of the most exciting things about the times we live in is the fact that we stand on the precipice of several major technological shifts. What’s more, the individual innovations that make up these seismic changes are not happening independently, but rather are interweaving to inform and empower one another.
On October 24, 2019
[Hearing from an AI Expert – 5] At the Intersection of Robotics and Innovation
There is much anticipation these days around the field of robotics with its immense potential and promising future applications.
On October 18, 2019
[Hearing from an AI Expert – 4] On-device AI Breathes Life into IoT
As technology has evolved, it has changed our lives dramatically. It’s truly startling to think just how different life was before the invention of innovations like smartphones, the internet and PCs.
On October 11, 2019
[Hearing from an AI Expert – 3] Vision is About Understanding the World
Can you imagine a world where the personal AI assistant on your smartphone is able to understand as much about the world as you do? What about a scenario where communicating with that AI assistant is as natural and easy as interacting with another human?
On October 4, 2019
[Hearing from an AI Expert – 2] How AI Will Change the World
There’s no denying that the age of AI is upon us and that the ways we engage and interact are set to change in big ways. In anticipation of this, Samsung Electronics has opened AI centers across the world to ensure that the company leads the charge on AI. 2019 marks the 50th anniversary of Samsung Electronics ...
On September 27, 2019
[Hearing from an AI Expert – 1] The Age of AI is Coming
Nowadays, artificial intelligence (AI) has emerged as a leading global future technology trend. AI is so much at the center of the current technological revolution that it is expected to fundamentally alter not only the IT industry ...
On September 20, 2019
Samsung Named Among Winners at DCASE 2019 Challenge
Samsung researchers from Poland and China emerged as one of the leading teams at a recent contest to identify sounds using Artificial Intelligence (AI).
On July 26, 2019
Samsung Electronics Sweeps Coveted Global AI Awards
Samsung Electronics’ artificial intelligence (AI) capabilities are being recognized globally in a competitive field with top researchers all seeking to dominate. Samsung Research, the advanced R&D arm of Samsung Electronics’ device business, has won recent competitions, which will be vital in ultimately rolling out AI in more real-world situations than ever.
On November 23, 2018
Samsung Electronics Joins ‘Partnership on AI’ for the Future of AI Safety
Samsung to participate in AI safety research and advancement along with more than 70 global organizations across academia, civil society and industry
On November 9, 2018
Samsung Electronics Opens another AI Center in Montreal and Expands AI Research Presence in North America
Montreal, Quebec, Canada – October 18, 2018 – Samsung Electronics Co., Ltd. announced that it is establishing another artificial intelligence (AI) centre in Montreal, the second biggest city in Canada and home to one of the world’s fastest growing AI communities.
On October 18, 2018
Samsung AI Forum Offers a Roadmap for the Future of AI
It wasn’t that long ago that the idea of building technologies with ‘brains’ that learn and are even structured just like ours seemed like science fiction.
On September 18, 2018
Samsung Artificial Intelligence (AI) Forum 2018
Experts, professors, students and world-renowned scholars gathered to give lectures, debate and share research achievements on the future of AI
On September 14, 2018
Samsung Electronics Opens a New AI Center in New York City
In addition to five facilities in Korea, the U.S., the U.K., Canada and Russia, Samsung Research is establishing another AI Center in New York, focusing on robotics
On September 9, 2018
[Interview] “With Samsung’s Unique Strengths, We Are Developing a User-Oriented AI Algorithm”
The question of how AI technologies understand human dialog and queries to suggest an optimum answer is one of the hot topics in the AI industry. Jihie Kim, Head of the Language Understanding Lab at Samsung Research AI Center, is also ...
On July 9, 2018
Samsung Electronics Wins at Two Top Global AI Machine Reading Comprehension Challenges
Samsung Research, the advanced R&D hub of Samsung Electronics’ SET (end-products) business, has ranked first in two of the world’s top global artificial intelligence (AI) machine reading comprehension competitions ...
On July 9, 2018
World-Renowned AI Scientists, Dr. Sebastian Seung and Dr. Daniel Lee Join Samsung Research
The new recruitment of AI experts to significantly strengthen Samsung’s AI R&D capabilities
On June 4, 2018
Toronto Lab to Help Lead Global AI Research & Development; Joins UK and Russia as Part of a Network of Global AI Centres
Samsung Research America (SRA), announced that it is establishing a state-of-the-art artificial intelligence (AI) centre in Toronto, as part of a new venture to tap into and contribute to the flourishing AI industry growing in Canada’s largest city.
On May 24, 2018
Samsung Opens Global AI Centers in the U.K., Canada and Russia
In addition to Centers in Korea and the U.S., Samsung Research operates five AI Centers around the world dedicated to exploring the full potential of AI technology
On May 22, 2018
Samsung Advocates for Collaborative AI Research at 2018 Artificial Intelligence Summit
Two-Day Event in Silicon Valley Brought Together Diverse Stakeholders, Including Academic and Technical Leaders, to Explore Areas for Collaboration
On January 19, 2018
Samsung Discusses the Future of AI with Leading Academics, Industry Leaders
An audience of academics, thought leaders and industry experts gathered to discuss the future of artificial intelligence (AI) at an exclusive Samsung event recently.
On September 28, 2017