27.09.25
Moscow and online
A hardcore conference on the practical application of machine learning. Real-world cases, technical talks, and the experience of leading engineers will help you learn how to derive real value from ML in products and business, and see the scale of the technologies from the inside.

Conference Rules

Data Science NLP RecSys CV Speech MLOps

Live Stream

What Awaits You?
…and a powerful expo
Reports
Presentations with deep technical content in two halls — «Data» and «Code»: we will talk about how we trained VLMs and integrated them into the content processes of Yandex Market, about efficient data preparation for training modern text-to-image and text-to-video models, and much more.
Workshops
A practical format for offline participants: we will analyze common mistakes in designing a recommendation system, learn how to optimize the training and inference of models for video generation on multiple GPUs, and find out how to replace complex annotations with LLMs.
Discussions
The audience and experts will exchange opinions on key industry challenges, approaches, and trends.
A new format
For the first time at Practical ML Conf, there will be an online «Network» hall — live talks and discussions, the opportunity to join from anywhere in the world and become part of the main ML event, even remotely.

Schedule and Speakers

Hosts:
Vladislav Ofitserov Head of Neural Technologies Development Team, Yandex International Search
Nikita Ryzhikov Head of Voice Input Technology Service, Yandex Search
11:00–12:00 Welcome, Guest Gathering
12:00 Conference Opening
12:30 Keynote Offline* Mathematics and Language Mathematicians speak their own language, which somewhat resembles natural languages but is still very different from them. Potential difficulties and confusion are only compounded in mathematical physics, where words like «energy» or «force» have not only a well-established everyday meaning but also their own distinct meaning in various physical contexts and theories.

Reflections on the language of mathematics have direct practical significance in the era of large language models, and I will try to talk both about what mathematicians would like from these models and about the difficulties we may encounter on the path to realizing this dream.

Andrei Okounkov Russian and American mathematician, Fields Medal laureate
His main works are dedicated to representation theory and its applications to algebraic geometry, mathematical physics, probability theory, and the theory of special functions. Member of the US National Academy of Sciences and the American Academy of Arts and Sciences.

Graduated from the Mechanics and Mathematics Faculty of Moscow State University in 1993 and completed postgraduate studies at the Independent University of Moscow. Defended his Candidate of Sciences dissertation at Moscow State University in 1995. In 1996, he moved to the USA, where he worked as an associate professor at the University of California, Berkeley, and from 2002 to 2010 as a professor at Princeton University. In 2006, he received the Fields Medal for achievements connecting probability theory, representation theory, and algebraic geometry. Since 2010, he has been a professor at Columbia University, and in February 2014, he became one of the scientific directors of the international laboratory of representation theory and mathematical physics at the Faculty of Mathematics of the Higher School of Economics.

13:10 Report A Smart Tutor in Every Home: How We Created an AI Assistant for Learning Math in Yandex Textbook We will share how Yandex Textbook created a personal AI assistant that helps schoolchildren with their math homework.

We will discuss why we chose mathematics, how we trained language models to find and explain solutions to problems in a methodologically correct way, and what unexpected challenges we encountered. We will discuss our experience using reinforcement learning (RL) to improve answer quality and explain how we built a dialog system with a cascade of models.

Participants will learn how we made product decisions based on experiments (SBS, A/B tests), what data insights we gained, what mistakes we made during the launch, and what we had to change after the first feedback from real schoolchildren and their parents.

NLP Airat Azbukhanov Yandex Textbook ML Team Lead
Started his career as a database specialist. After becoming interested in machine learning, he began building ML infrastructure for oil companies. Since 2024, he has been implementing AI in Yandex Education.

On weekends, he goes hiking in the Southern Urals with his family.

NLP Tagir Kazimagamedov Yandex Textbook ML Engineer
Graduated from the Computational Mathematics and Cybernetics faculty of Moscow State University. Has been actively exploring the field of NLP for the last couple of years. Worked on AI in one of the teams behind Marusya, the voice assistant from VK. Currently implements and develops AI in education.
14:20 Keynote Memory and Online-RL: The YandexGPT 5.1 Experience Modern LLMs are versatile models that can plan a wedding, compose a haiku, and count the number of letters in a word. Due to the diversity of tasks, working on a new release always consists of numerous projects: big and small, routine and innovative.

More than fifty ML specialists, analysts, backend developers, and managers worked on the YandexGPT 5.1 release. It’s impossible to cover everything they did in a reasonable time, so the talk will focus on two interesting tasks. First, I will tell you how we taught the model to better remember facts and apply knowledge about them. Second, I will explain how we finally achieved stable online-RL training.

NLP Alexey Kolesov Yandex R&D CTO
Has been working at Yandex for over 10 years: started as an intern, grew to head the speech recognition service, managed the NLP department for the last two years, and became CTO of R&D a month ago. Taught algorithms and machine learning at the Yandex School of Data Analysis. Lives and works in Minsk.
14:55 Report Creating Memory for LLMs Using GigaChat as an Example LLMs have been at the top of researchers’ and developers’ interests for several years now. But despite all the available computing power, large language models still have underdeveloped memory. When communicating with a user, popular virtual assistants forget even basic information about the person the next day: their name, age, interests. This significantly degrades the user experience.

In this report, I will discuss research on adding a memory module to a chatbot, the main ways to extract information for memory, and how to store and use it in a dialogue.

We will examine how SberAI implemented adding memory to GigaChat, what problems they encountered, and how they solved them.

NLP Pavel Gulyaev Sber AI Head of Department
An NLP developer with over nine years of experience. Has worked in corporations, labs, and startups. Today, he leads an R&D team for B2C at Sber AI, developing LLMs. In his free time, he goes yachting, jogs, and travels.
15:35 Report Synthetic Data vs. Real Data Shortage: How We Enhance LLMs at T-Bank We are actively developing solutions based on large language models for internal tasks. However, training and fine-tuning such models requires a huge amount of high-quality data, which is not always possible to collect through labeling.

In this report, I will share how we use synthetic data. On one hand, it’s the generation of instructional examples for the general domain. On the other hand, it’s the creation of domain-specific datasets for internal tasks. This approach helps compensate for the lack of real data and improve model quality.

We will discuss the types of synthetic data we create, how we build generation and filtering pipelines, what metrics we use to assess their usefulness, and when synthetic data truly helps. We will separately discuss case studies from T-Bank’s practice — adapting models for specific internal scenarios.

NLP Olga Tsymboy T-Bank Senior Research Developer
Completed her undergraduate studies at MIPT (Moscow Institute of Physics and Technology) and then her master’s degree at MIPT and Skoltech. Senior Research Developer in the fundamental model development team at T-Bank. Works on synthetic data generation pipelines.
16:45 Report How We Trained and Implemented a VLM in Yandex Market’s Content Processes At Yandex Market, we actively use VLMs for content-related tasks. Firstly, vision-language models help us match the texts, images, and parameters of identical products — in other words, perform product matching. Our internal SmartMatcher pipeline handles this task. To achieve better results, we combined it with a fine-tuned VLM and saw an improvement in key metrics.

Secondly, VLMs help analyze product listings. For example, they help eliminate data inconsistencies that degrade matching quality. We developed a pipeline with task-specific fine-tuned VLM and GPT models. It identifies and filters products with inconsistent content.

CV Ignat Polezhaev Yandex Market ML Developer
ML Developer at Market. In his free time, he tries to travel and relax as much as possible. Has visited almost 20 countries (Japan, England, Iran, and others). In winter, he snowboards and goes freeriding in powder. Last year he started jogging and recently participated in a couple of Moscow races. And, of course, he works on various IT projects: created a smart keyboard for iOS, worked on a passport recognition project, analyzed marketplace reviews, and processed phone conversations.
CV Egor Gorbunov Yandex Market ML Developer
While studying at MIPT, he worked on research ML projects, including in the field of CV. Joined Market after graduation. Works on content matching. In his free time, he is into sports, watches TV series, and buys various trinkets.
17:30 Report Efficient Data Preparation for Training Modern Text-to-Image and Text-to-Video Models Text-to-image and text-to-video models have shown incredible progress in recent years. The development of methods for creating datasets plays a key role in this. In this report, I will explain how to collect high-quality data for training a modern generative model and how to efficiently organize the continuous processing of petabytes of raw information.
CV Ivan Kirillov Sber AI Head of Data Research
Graduate of the Mechanics and Mathematics Faculty of Moscow State University. Worked in international companies in the field of neural network research and development. Solved tasks related to improving image and video quality, neural video compression, and generating virtual avatars for remote learning. Currently involved in the development of Kandinsky — a Russian generative text-to-image and text-to-video model. In his free time, he is engaged in photography and music.
18:10 Report From Classifier-Free Guidance to Dialogue: Where is Image Generation Heading? In recent years, diffusion models have been the main driver of progress in generative image modeling, while the field of Image Understanding has advanced significantly with the emergence and scaling of Visual Language Models.

Now we are seeing many works related to combining discriminative and generative modeling within a single architecture. We will discuss how practically justified this is and whether such models will become the new dominant paradigm.

CV Sergey Ovcharenko Yandex R&D Head of Multimodal Analysis and Generation Department
Graduated from MIPT, worked as a researcher at the Interdepartmental Supercomputer Center of the Russian Academy of Sciences, developed face recognition technologies as a lead researcher and developer at NtechLab. His interests include deep learning, computer vision, and information retrieval. Planned to research computer networks, but ended up working with neural networks. Enjoys fishing and fashion videos.
19:00–23:00 Networking and Afterparty
*The recording of the report will be available to participants after the conference
Hosts:
Vasily Ershov Head of Machine Learning, Yandex Cloud
Darya Vylegzhanina Manager in the Computer Vision Service, Yandex R&D
11:00–12:00 Welcome, Guest Gathering
12:00 Conference Opening
13:00 Report How ML Helps Reduce Accident Rates in Yandex Go In Yandex Urban Services, we have been using algorithms to reduce accident rates for several years. Thanks to them, complex routes are assigned to experienced drivers, while simple ones go to newcomers. Behind this idea lies a long development path: from the first versions with errors and «teething troubles» to a stable system. In this talk, I will share how we improved the technology and what insights we gained.
Data Science Filipp Ulyankin Yandex Urban Services Technology Platform Head of Ride Safety Technology Group
Works on reducing accident rates using ML in Yandex Go. Teaches ML and statistics at HSE — strives to explain complex topics to students in simple terms. Can do handstands. Loves to travel: last week, he spent three days hiking in the tundra in the rain.
13:45 Report Transformers for Vehicle Control
Transformers already write code and paint pictures, but can they control a car? In this talk, I will discuss the unique model architectures used to solve this task.

We will talk about pretraining and reinforcement learning (RL), and also discuss how to proceed when there is no open SOTA in your field. Participants will learn the difference between open loop and closed loop, and how a prediction task differs from a motion planning task.

The story will cover the journey of Yandex Autonomous Transport team: from the first ML experiments with generative neural networks to regular trials of the autopilot in real cars.

Data Science Maxim Sporyshev Yandex Autonomous Transport Head of Behavior and Prediction Service
Graduate of the Yandex School of Data Analysis and Far Eastern Federal University. Won prizes at international student competitions for autonomous underwater vehicles, the Singapore AUV Challenge and RoboSub.

Joined Yandex Autonomous Transport in 2021 to develop the motion planning engine.

Interested in deep learning (DL) and foundation models in robotics. Enjoys outdoor activities and badminton, playing computer games, and cooking delicious food.

15:05 Report Predicting Search Ad CTR Using Neural Networks: Experience and Experiments from AvitoTech
In this report, I will discuss how Avito explores neural network models for predicting CTR in search. I will share the results of research and experiments that led to a consistent improvement in ML metrics compared to current solutions.

I will separately discuss one of the implementation approaches that allowed us to realize the solution with minimal resources and without a major infrastructure overhaul.

Data Science Anton Semenisty AvitoTech Senior DS Engineer in the Monetization Department
Has been working in Data Science for over 5 years. In his free time, he reads articles on arXiv, takes various machine learning courses, and plays volleyball.
15:50 Report Freshness in ASR, or How to Keep Up with Fashionable Trends
«Covid», «pedro-pedro», «tralalero tralala» — terms that rapidly gained popularity on the internet. For example, queries about Pedro to Alice increased 20-fold in just a couple of weeks.

For a voice assistant to be able to engage in dialogue on a trending topic, it needs to learn about new terms at least a few weeks before their peak popularity. In other words, trends and memes have to be predicted. We will explain how to automatically extract them from the Alice query stream and manage to prepare the product for relevant topics in time.

Speech Arseniy Nestyuk Yandex R&D Head of Voice Input Analytics Team
Graduated from MIPT and School of Data Analysis. Worked on developing voice robots at T-Bank. Currently works at Yandex, where he is responsible for voice input analytics in Alice. In his spare time, he runs Dungeons & Dragons games, enjoys dancing, and spends a lot of time walking around Moscow.
17:00 Report Generative Recommender Technologies: What Works at Yandex Generative recommender models are gaining increasing popularity and are being actively adopted in the industry. Some time ago, we introduced Argus — our generative transformer model for personalization.

In my talk, I will share our experience adapting Argus to various products at Yandex. I will discuss how the architecture and training process have evolved, where we managed to significantly improve quality, and where we simplified the model. Furthermore, I will present the latest results of adapting Argus for a single-stage usage scenario.

RecSys Nikolai Savushkin Yandex R&D Head of Recommender Technologies Service
Graduated from the Computational Mathematics and Cybernetics faculty of Moscow State University, worked at SberDevices where he was responsible for search quality. Currently develops Yandex’s recommender technologies. Enjoys track and field athletics, Formula 1, and game theory. Graduated from music school with a degree in violin.
17:45 Report Heteroseqs: A Framework for Transformer-Based Personalization Using Cross-Domain Customer Action Sequences T-Bank has long been more than just a bank; it’s an ecosystem of services built around customers’ lifestyles. By using cross-domain data about users and their behavior, we expand the preference profile and improve the quality of ML models. We also personalize products and services.

In this report, I will explain how we managed to combine data from different services to obtain sequences of customer actions. I will share how using transformers on these sequences helped improve the customer experience. We managed to increase not only the quality metrics of ML models in classification, regression, candidate generation, and ranking tasks but also business metrics in the services.

RecSys Andrey Babkin T-Bank Lead Research Developer
Built personal search result ranking from scratch at SberMarket (Cooper), as well as partner cashback and product recommendations in T-Bank.

Currently works on making recommendations real-time and implementing transformer-based personalization for various tasks. In his free time, he enjoys swimming and takes long walks.

19:00–23:00 Networking and Afterparty
Hosts:
Konstantin Lakhman Department Head, Yandex
12:30 ML News Discussion Our experts will break down key news in the field of machine learning and artificial intelligence from the last few weeks. They will talk about what caught their attention and share their personal opinions.
CV Andrey Kuznetsov AIRI Ph.D., Director of the FusionBrain Laboratory
NLP Valentin Malykh ITMO Associate Professor at the Higher School of Digital Culture
Deniz Kuznedelov Yandex Deep Learning Researcher
Konstantin Lakhman Yandex Department Head
13:05 Report How to Create a Large Dataset for Russian TTS with Minimal Resources Progress in Russian speech synthesis is hindered by the lack of large-scale public datasets: available collections are too small and lack diversity. To fill this gap, a group of enthusiasts has published the largest corpus of clean Russian speech to date, containing 4700 hours of audio from open sources.

In this report, we will detail the dataset pipeline: audio normalization, speech and noise separation, diarization, segmentation, automatic quality filtering, and transcription. We will separately show how we solved problems encountered during data collection.

We will demonstrate the practical value of the corpus through experiments with the F5-TTS model for Russian, which is also available for download.

Speech Denis Petrov Audio2Midi Senior Audio ML Engineer
Machine Learning Engineer at Audio2Midi. Specializes in audio ML and speech technologies. Develops the open-source stress marker RUAccent and solutions for Russian speech synthesis. Assembles large open datasets of Russian speech. Actively contributes to the development of open-source ASR and TTS projects. In his free time, he writes music, plays board games, and sometimes plays video games.
13:45 Report Practical Aspects of Pretraining Multimodal LLMs A talk on how Vision-Language Models (VLMs) are created: from concept and architecture to quality evaluation, with a focus on the key pretraining stage. We will examine why pretraining determines the final capabilities of a model, what data is needed, how to select it, and what pitfalls developers may encounter along the way. Finally, an overview of trends and practical possibilities for VLMs in the near future.
CV Danil Kashin Yandex R&D Head of VLM Pretraining Team
Graduated from National Research Nuclear University MEPhI. Worked on computer vision tasks at VK. Currently leads the multimodal LLM pretraining team at Yandex. Has given talks and lectures at MEPhI, BMSTU, HSE, and CU; has won hackathons.

In his free time, he plays Dungeons & Dragons and enjoys diving into mosh pits at concerts.

15:05 Report Real-time Ranking: An Efficient Target-Aware Transformer Architecture for Yandex Music The recommendations in the My Wave recommender system are already great, but how can we make them even better? In this report, I will discuss the successful implementation of a target-aware real-time transformer with early binding.

We will examine Yandex Music’s approach to ranking tasks: why add early binding to transformers and how to implement it efficiently. I will share details of the model architecture, intricacies of the training pipeline, specifics of the production inference infrastructure, and my own insights.

This talk will be useful for ML engineers working with high-load recommender systems.

RecSys Pyotr Zaydel Yandex Music Senior ML Engineer
Graduated from the Department of Applied Mechanics, Mechanics and Mathematics Faculty, Moscow State University. Became interested in ML during his studies: took courses and completed an internship at VK. Has been working at Yandex Music since 2022, improving algorithms and developing the service. In his free time, he is learning to play tennis and travels frequently.
15:45 Report The Evolution of UniSRec: From Recommendations to Universal Behavior Embeddings for Personalized ML Systems This talk will cover the evolution of the UniSRec model — from the classical recommendation task to universal behavioral embeddings applicable in various ML systems, including Trust & Safety.

We will examine how the architecture from the paper «Towards Universal Sequence Representation Learning for Recommender Systems» was adapted to industrial requirements, which improvements provided the greatest quality boost, and how these solutions are integrated into a large-scale production infrastructure.

Special attention will be paid to how a unified embedding space can provide personalization in tasks where the separability of behavior clusters is important.

RecSys Karina Romanova Wildberries & Russ Head of CoreLLM: user behavior team
Engaged in the development and implementation of large language models and generative AI.

Throughout her career, she has worked on NLP, LLM, and multimodal tasks, including creating generative assistants at SberDevices, code generation, and integrating models into high-load business products. Currently leads the LLM R&D direction at Wildberries and, together with her team, develops infrastructure for integrating user behavior using LLMs.

Writes articles based on research results, including on adapting architectures to unsolved problems in the field of multi-agent systems. Creates pet projects; recent ones include an agent for finding leaks in a codebase, a library for multimodal LLM training, and a Telegram bot for personalized news distribution. In her free time, she is interested in history and economics, works out at the gym, travels, and reads books.

16:55 Report Flexible ML Pipelines in Practice This talk is dedicated to a successful case study: creating a computer vision system for the Brickit app, which scans a pile of LEGO parts and suggests what can be built from them. We will analyze the solution for a task that required:
— classifying and segmenting thousands of unique LEGO parts from photos;
— handling a huge number of classes while ensuring high accuracy;
— working with dirty real-world data (different shooting angles, overexposure, glare, defects);
— supporting continuous model retraining as new types of parts appear.

Special focus will be given to MLOps and DataOps questions:
— how we built a data collection and annotation pipeline involving app users;
— what approaches we used for incremental model training and accelerated iteration;
— how we organized quality monitoring (precision, recall, F1-score) and tracking metrics over time;
— what helped scale the system to millions of users while maintaining high performance on mobile devices.

We will honestly share how we dealt with rare classes and optimized the pipeline for mobile devices. We will share MLOps solutions for scalable retraining and model maintenance in the face of constantly changing data.
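The monitoring metrics mentioned above (precision, recall, F1-score) can be sketched in a few lines of plain Python. This is an illustrative example only, not Brickit's actual monitoring code, and the part-class names are hypothetical:

```python
# Illustrative sketch: per-class precision/recall/F1 from paired labels,
# the kind of computation a quality-monitoring job might run periodically.
from collections import Counter

def per_class_metrics(y_true, y_pred):
    """Return {class: (precision, recall, f1)} for a multi-class task."""
    tp, fp, fn = Counter(), Counter(), Counter()
    for t, p in zip(y_true, y_pred):
        if t == p:
            tp[t] += 1          # correct prediction for class t
        else:
            fp[p] += 1          # predicted p, but it wasn't p
            fn[t] += 1          # missed an instance of t
    metrics = {}
    for cls in set(y_true) | set(y_pred):
        prec = tp[cls] / (tp[cls] + fp[cls]) if tp[cls] + fp[cls] else 0.0
        rec = tp[cls] / (tp[cls] + fn[cls]) if tp[cls] + fn[cls] else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        metrics[cls] = (prec, rec, f1)
    return metrics

# Toy check with two hypothetical LEGO part classes
true = ["brick_2x4", "brick_2x4", "plate_1x2", "plate_1x2"]
pred = ["brick_2x4", "plate_1x2", "plate_1x2", "plate_1x2"]
m = per_class_metrics(true, pred)
```

Tracking these numbers per class, rather than as a single global score, is what surfaces the rare-class problems the speakers mention: a class with few examples can collapse to zero recall without moving the aggregate metric much.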

CV Andrey Tatarinov Epoch8 CEO and CTO
Previously worked at Yandex and Google. In 2017, he believed the future was in machine learning and founded Epoch8. In his free time, he programs a robot vacuum, rides an enduro motorcycle, and goes mountain biking.
**Online Stream Only
Hosts:
Alena Zaitseva Group Lead for AI and Logistics Projects, Yandex Lavka
Alexander Mamaev Head of ML Service for Taxi, Yandex Urban Services
11:00–12:00 Welcome, Guest Gathering
12:00 Conference Opening
13:10 Workshop Offline* Generating Narrative Videos: Tool Analysis and Practical Experience It’s been just over a year since the sensational announcement of Sora, and generative video models can already create impressive clips lasting 2–8 seconds.

This limitation isn’t a problem for simple tasks but becomes a serious barrier when it comes to multiple coordinated generations. For instance, creating long videos or narrative stories.

In this workshop, we will examine which approaches can be useful and how they are implemented in open source and on closed platforms. Then, using open-source models, we will try to build our own pipeline for generating a narrative video.

CV Ekaterina Andreychuk X5 Digital ML Engineer
ML Engineer, develops and implements solutions based on language models. Main focus is RAG approaches and dialog agents for automating tasks and improving customer experience in e-commerce. Graduated with a master’s degree from Skoltech; previously worked in data analytics and recommender systems. In her free time, she enjoys painting and playing board games.
14:50 Workshop Offline* Optimizing Training and Inference of Video Generation Models on Multiple GPUs Video generation is a creative and interesting, yet complex task requiring significant resources. I will share how the Kandinsky team trains large transformers for video generation: what techniques they use to efficiently utilize a cluster with a huge number of GPUs. We will discuss DDP, FSDP, activation checkpointing, tensor & sequence parallel, and other algorithms. In the practical part of the workshop, I will show how to speed up inference and video generation by parallelizing the transformer in PyTorch using the tensor parallel algorithm.
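As a hardware-free toy illustration of the tensor parallel idea mentioned in this workshop description (not the Kandinsky team's code): a linear layer's weight matrix is sharded across workers along the output dimension, each worker computes its slice of the output, and the slices are concatenated. Frameworks like PyTorch apply the same idea across real devices.

```python
# Toy sketch of tensor parallelism in pure Python: shard a weight matrix
# across "workers" by output rows, compute partial outputs, concatenate.

def matvec(w, x):
    """Compute w @ x for a list-of-rows matrix w and a vector x."""
    return [sum(wij * xj for wij, xj in zip(row, x)) for row in w]

def shard_rows(w, n_workers):
    """Split w into n_workers contiguous row blocks (output-dim sharding)."""
    step = len(w) // n_workers
    return [w[i * step:(i + 1) * step] for i in range(n_workers)]

# A 4-output, 2-input "linear layer" and an input vector
w = [[1, 0], [0, 1], [1, 1], [2, 0]]
x = [3, 5]

full = matvec(w, x)                 # single-worker reference result
shards = shard_rows(w, 2)           # two workers, two output rows each
parallel = [y for shard in shards for y in matvec(shard, x)]
assert parallel == full             # sharded result matches the reference
```

The real engineering difficulty, which the workshop addresses, is in the communication: on actual GPUs the input must be broadcast and the output slices gathered, and overlapping that communication with compute is where the speedups come from.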
CV Maria Kovaleva Sber AI Lead Data Research Specialist
Lead Data Research Specialist at Sber AI. Works on developing training pipelines and pretraining models for the Kandinsky family of image and video generation, starting from Kandinsky 4.

Loves traveling, contemporary art, and outdoor activities.

16:20 Workshop Offline* Replacing Complex Annotations with LLMs LLM-as-a-judge is a trendy approach being developed by many companies. Yandex Search also has something to boast about. I will talk about how we are implementing LLMs in annotation: I will share several major successes, but also touch upon failures.

In the workshop, I will analyze in detail a project on implementing LLM hints for annotators. I will share unexpected findings and several aspects of the work that turned out to be more difficult than we initially thought. We will talk about how to write prompts correctly, create an optimal pipeline, and measure the impact of implementation.

Data Science Ilya Katsev Yandex Search Head of Analytics and Metrics Department
Participated in mathematical olympiads since childhood; attended the IMO in 1995. Graduated from the Faculty of Mathematics and Mechanics of St. Petersburg State University in 2000, and received a PhD in game theory from Vrije Universiteit Amsterdam in 2009. Has been working at Yandex since 2010; has been involved in search quality metrics since 2015; and in recent years, like many others, has immersed himself in LLMs. Currently responsible for all metrics and data for training Search, Neuro, and Alice.

A mathematician by training, he spent about 15 years deeply involved in game theory, earned a PhD, and wrote several good papers. Plays Renju and Gomoku professionally. For many years, he ran math circles; among his former students are about 10 international olympiad medalists, as well as employees of Yandex, Google, and other good companies.

17:50 Workshop Offline* Analyzing Mistakes in Recommender System Design A large number of errors in modern recommender models arise as early as the problem formulation, data preparation, and analysis stages.

In this workshop, we will together examine the code of a recommender system developed by an intern, identify potential errors, and discuss ways to prevent them.

All the tasks we have prepared are based on real events from practice.

RecSys Sergey Kuznetsov MTS Web Services Technical Lead of RecSys&Search Platform
Has been engaged in machine learning in various forms for over 10 years; graduated from the Faculty of Computer Science at HSE; lectures on recommender systems at HSE, ITMO, and the MTS School of Data Analysts.

Enjoys squash, swimming, loves reading, and hiking with a backpack.

19:00–23:00 Networking and Afterparty
*The recording of the talk will be available to participants after the conference
Expo

Discussions

Program Committee

Head of the Program Committee Pyotr Ermakov Yandex R&D ML Brand Director
Worked in ML at Lamoda, Mail.ru, and HeadHunter, and also taught at HSE and Bauman Moscow State Technical University. Was one of the creators of the ODS community. Currently develops the Yandex machine learning brand and helps organize conferences — such as HighLoad++, PyCon, DUMP, and Data Fest.
Program Director Sofia Ivanova Yandex R&D ML Brand Manager
Worked as a Yandex DevRel manager for ML and mobile development. Built Yandex’s external ML specialist community from scratch and prepared over 100 Yandex speakers for conference presentations. Heads the program committee for Practical ML Conf, manages PR activities for Yandex Research, and leads the ML editorial team for the channels «Dushny NLP», «Recommendations», «ML Underhood», «CV Time», and «Speech Info». Has been producing international festivals and forums in Russia and the CIS for 10 years. Passionate about AI and ML, and pets a toy poodle.
CV Andrey Kuznetsov AIRI Ph.D., Director of the FusionBrain Laboratory
Has been working in machine learning since 2010. Defended his Candidate of Sciences dissertation in 2013 and is currently writing his doctoral dissertation on applying multimodal architectures to passive multimedia content safety. Heads the FusionBrain multimodal generative AI laboratory at the AIRI Institute of Artificial Intelligence. One of the founders of the Kandinsky family of models, he teaches at Samara University and ITMO, gives lectures, and writes about AI and ML events on the Telegram channel @complete_ai. Author of over 100 scientific publications, including in top-tier journals (Q1/Q2) and Core A/A* conference proceedings. H-index: 14.
NLP Valentin Malykh ITMO Associate Professor at the Higher School of Digital Culture
Over 10 years in AI — has worked at Yandex, VK, and Huawei. Defended a dissertation on text analysis and is currently an associate professor at the Higher School of Digital Culture, ITMO. Author of the Telegram channel @valuableai.
RecSys Daniil Burlakov Head of Recommender Products Sector

Graduate of the Mechanics and Mathematics Faculty of Moscow State University and holds a Candidate of Sciences degree in Physics and Mathematics. Has been working at Yandex since 2016. Led the development of recommendations for Music, Kinopoisk, Afisha, and Bookmate.
MLOps Alexey Morozov Yandex Advertising Head of Recommender Neural Network Training Infrastructure Development Group
Graduated from the Computational Mathematics and Cybernetics faculty of Moscow State University. Currently works on developing training infrastructure for DL models for recommender systems and their implementation in Advertising Technologies. Helps other teams utilize advertising developments in training recommender models.

Organizers

How did it go in 2024

Venue
Moscow, Volochaevskaya st., 48, bld. 1
«Ploshchad Ilyicha» metro station