The ML Playbook Series: A Conversation with Aleksandr Petiushko


Welcome to the fourth edition of The ML Playbook blog series, where we delve into the insights and experiences of industry experts shaping the Machine Learning landscape.
 
Our fourth guest is Aleksandr Petiushko, a seasoned Head of ML/AI known for his extensive contributions to both academia and industry, with significant work in autonomous vehicles and robustness research.
 
Recently embarking on a new role at Gatik, Aleksandr's journey includes many publications, patents, a founded research programme, leadership positions, and more! In this edition, I had the privilege of tapping into Aleksandr's wealth of knowledge as we discussed all things ML, particularly in relation to his sector: autonomous vehicles.

 

What initially drew you to the field of machine learning and how has your journey evolved over the years?

I actually began my career in this field before it was even referred to as "machine learning." At that time, the term "pattern recognition" was more commonly used. It was during my undergraduate studies that I first became familiar with some fundamental concepts such as Hidden Markov Models, Gaussian Mixture Models, and Dynamic Programming, thanks to my Scientific Advisor's projects in this area. My first experience with speech recognition was through a C-based program, after which I went on to prove a theorem that was published alongside other student works. Following this, I shifted my focus towards more theoretical aspects of Computer Science during my PhD research. It wasn't until after defending my thesis that I became interested in the growing influence of neural networks. This led me to co-found the deep learning working group at Huawei Moscow Research Center, which has since become a central part of my professional life, revolving around machine learning, neural networks, artificial intelligence, and the theoretical foundations behind them.

 

Your role as Co-Founder of the School of Huawei Advanced Research Education (SHARE) at Lomonosov MSU was significant. What were the key objectives of this program, and what impact did it have on the students and the industry?

The concept of SHARE actually originated from the disappointment my Huawei colleague and I shared about the level of machine learning knowledge among students at MSU, one of Russia's top technical universities, at that time. Although we both came from the Department of Mechanics and Mathematics and initially focused on teaching there, we decided to expand our reach and create a comprehensive programme combining two specialisations: "Machine Learning and Computer Vision" and "Big Data and Information Theory". Ultimately, this collaboration proved mutually beneficial for Huawei and MSU. Huawei reinforced its reputation as an innovative, forward-thinking technical company eager to embrace the future, while also attracting a continuous stream of talented, well-prepared students. Meanwhile, MSU was able to offer cutting-edge educational programmes developed in partnership with one of the world's leading tech companies.

 

Having worked on projects that span from theoretical foundations to practical implementations, how do you approach bridging the gap between theoretical research and real-world application?

I find that tackling a challenge in a theoretically grounded way is often more reliable than taking a purely empirical approach. A solid foundation of knowledge is essential; always make sure to read extensively and learn from the work of pioneers in the field, as we build upon their discoveries. Before training a model, establish clear evaluation metrics to gauge its performance. Working with data is also crucial: sometimes it requires cleaning or preprocessing, while other times introducing adversarial noise or synthesising out-of-distribution data can be beneficial. The approach involves not just trying out existing methods but also understanding why previous approaches failed and analysing the results to make informed decisions. This process of continuous learning and iteration allows us to create new strategies that build upon what has come before, ensuring stability, scalability, and timely adjustments as needed. I also believe that with careful hyperparameter tuning, almost any method can solve a practical problem, but the real difference lies in its reliability, scalability, adaptability, and time efficiency.
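The two data-side techniques mentioned here can be made concrete with a small sketch. The following pure-NumPy example (the weights, step size, and sampling parameters are all illustrative, not from any real system) shows an FGSM-style adversarial perturbation on a toy linear classifier, alongside a crude form of out-of-distribution synthesis:

```python
import numpy as np

# Toy linear classifier: sign(w @ x + b). Weights and epsilon are
# illustrative only.
w = np.array([1.0, -1.0])
b = 0.0
x = np.array([0.5, 0.2])           # a clean sample
score = w @ x + b                   # positive -> classified as class 1

# FGSM-style adversarial noise: for a linear model the gradient of the
# score w.r.t. the input is exactly w, so stepping against sign(w)
# lowers the score and, with a large enough step, flips the prediction.
eps = 0.4
x_adv = x - eps * np.sign(w)
adv_score = w @ x_adv + b           # negative -> prediction flipped

# Crude out-of-distribution synthesis: sample far from the clean data's
# support to probe how the model behaves on inputs it has never seen.
rng = np.random.default_rng(0)
x_ood = rng.normal(loc=10.0, scale=1.0, size=2)

print(score, adv_score)
```

For deep networks the input gradient is computed by backpropagation rather than read off the weights, but the principle of probing a model with worst-case and out-of-distribution inputs is the same.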

 

You’ve had a variety of leadership roles, from Technical Lead to Senior Engineering Manager before becoming the Head of AI Research. How have these different roles shaped your leadership style and your approach to driving innovative research?

As Head of AI Research, it's essential to have a dual passion: working with both people and ideas. This balance can be challenging to achieve, as it's easy to prioritise one aspect over the other. The key lies in striking a balance between staying up-to-date with cutting-edge research, charting technical roadmaps, inspiring and motivating team members, and resolving their challenges. To me, success is measured by my ability to maintain strong relationships even after leaving the company where we first met; it's about understanding that our colleagues are individuals, not just roles within a team.

 

Given your current role involves leading research in an autonomous vehicle company, what unique challenges do you face in applying machine learning to autonomous systems compared to other applications?

While common industry-wide issues such as limited real-world data, undefined or unstandardized evaluation metrics, and high computational demands for training models still prevail, autonomous driving poses a distinct challenge: safety. The stakes are extremely high, as software bugs or unforeseen events can have catastrophic consequences. As a result, the entire autonomous driving development pipeline is designed with safety validation at its core. One has to utilise various techniques such as modular and submodular evaluation, comprehensive end-to-end testing in simulators, and structured tests to ensure the highest levels of reliability. In this context, ML models must not only perform well on average but also operate almost flawlessly, even in unexpected scenarios. When we fall short of our safety standards, it's essential that we have robust mechanisms in place for handling unusual events or failures. This is where uncertainty and robustness detection become critical components of autonomous driving research. By prioritizing these aspects, we can develop more reliable and trustworthy systems that are better equipped to handle the complexities of real-world driving environments.
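One common uncertainty-detection idea alluded to above can be sketched minimally: flag inputs on which an ensemble of models disagrees (high predictive entropy), so that a safe fallback can be triggered instead of acting on an unreliable prediction. The logits and threshold below are hypothetical, chosen purely for illustration:

```python
import numpy as np

def softmax(z):
    z = z - z.max()                 # numerical stability
    e = np.exp(z)
    return e / e.sum()

def predictive_entropy(member_logits):
    """Entropy (in nats) of the averaged ensemble prediction."""
    probs = np.mean([softmax(l) for l in member_logits], axis=0)
    return float(-(probs * np.log(probs + 1e-12)).sum())

# Three ensemble members agree -> low entropy -> trust the prediction.
confident = [np.array([4.0, 0.0, 0.0])] * 3
# Members disagree -> high entropy -> defer to a safe fallback.
uncertain = [np.array([4.0, 0.0, 0.0]),
             np.array([0.0, 4.0, 0.0]),
             np.array([0.0, 0.0, 4.0])]

THRESHOLD = 0.5   # illustrative; in practice tuned on validation data
print(predictive_entropy(confident) < THRESHOLD)   # trusted
print(predictive_entropy(uncertain) > THRESHOLD)   # flagged
```

In a real autonomous-driving stack the "fallback" is a carefully engineered safety behaviour, and thresholds are validated against structured test suites rather than hand-picked.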

 

With over 30 patents to your name, can you discuss a particularly impactful patent you’ve worked on and its significance in the field of machine learning or autonomous systems?

I'm not certain which particular innovation was the most notable, but there was an idea involving the fusion of two-dimensional and three-dimensional information. What made this patent stand out was being approached by NASA about incorporating some of its concepts into their research initiatives. I gave my consent immediately. It was a very memorable experience, since patent work usually involves legal teams and engineers working in separate capacities.

 

What trends do you see as most transformative in the machine learning landscape, particularly in the context of autonomous vehicles and robotics?

In autonomous driving and robotics, the emergence and widespread adoption of foundation models represent a significant leap forward in holistic reasoning and explainability. However, despite the current advancements in large language models (LLMs), several pressing challenges remain unresolved: hallucinations, practical application, and consistency. Furthermore, within the broader field of machine learning, diffusion models have recently flourished as the state-of-the-art in generative modelling. One can raise an intriguing question: Should we invest our resources in further developing LLMs or do their current advancements represent a temporary trend? Can we accurately gauge our proximity to achieving Artificial General Intelligence (AGI)?

 

With a long tenure in R&D and leadership roles, how have you seen the field of machine learning evolve, and what do you believe are the most significant advancements that have occurred during your career?

First, let me mention the hype surrounding emerging technologies and their potential for practical application. A well-known example is Gartner's Hype Cycle for AI, which outlines a predictable pattern of innovation progression: an "Innovation Trigger," followed by a peak of inflated expectations, then a trough of disillusionment as reality sets in, and eventually the slope of enlightenment where practical applications are realised. Currently, I believe that practical robotics, including self-driving technologies, holds great promise for the next major breakthrough. Regarding Artificial General Intelligence (AGI), while it's an exciting area of research, its feasibility is still a topic of debate. Recent advancements in deep learning architectures such as transformers and diffusion models, and paradigms such as self-supervised learning, have significantly shaped the capabilities of large language models. Looking back at my early career, I recall the profound impact that AlexNet and Convolutional Neural Networks (CNNs) had on computer vision tasks. This was a pivotal moment in AI's journey towards practical application, demonstrating the potential of deep learning to transform industries.

 

How do you foresee the development of machine learning technologies impacting the future of autonomous delivery systems and local communities?

We're looking at developing autonomous driving systems that can be easily applied to new cities, roads, weather conditions, and diverse driver behaviours. This will enable us to achieve scalable autonomous driving solutions, where we don't have to painstakingly map every area, collect data on every environmental factor, or exhaustively test the system in each location before deploying it on real roads. The next step would be the deployment of these scalable delivery systems, which can save time and money for our customers: people won't need to spend hours shopping or incur significant costs for human-based deliveries.

 

What skills do you believe are essential for the next generation of Machine Learning Researchers, and how can they best develop these skills?

To excel in this field, one requires a solid grasp of classical ML techniques (such as discriminative and generative modelling, architecture design, and loss function optimisation). Additionally, it is essential to stay up-to-date with the latest advancements in the specific areas where you aspire to contribute. In terms of practical skills, being able to proficiently implement algorithms from scratch or modify existing ones is crucial. Familiarity with distributed training and evaluation pipelines is also vital for efficient research. To continually develop your expertise, I recommend a regular regimen of reading ML literature and recent papers, as well as hands-on experience through small yet comprehensive projects. And try internships: they are very important!
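The kind of from-scratch exercise recommended here can be as small as fitting linear regression by gradient descent with nothing but NumPy and verifying that the learned weights recover the ones used to generate the data. This is a hypothetical study exercise, not anything from the interview itself:

```python
import numpy as np

# Synthetic regression problem: y = X @ true_w + small noise.
rng = np.random.default_rng(42)
X = rng.normal(size=(200, 3))
true_w = np.array([2.0, -1.0, 0.5])
y = X @ true_w + 0.01 * rng.normal(size=200)

# Gradient descent on mean squared error, implemented by hand.
w = np.zeros(3)
lr = 0.1
for _ in range(500):
    grad = 2 * X.T @ (X @ w - y) / len(y)   # d/dw of mean((Xw - y)^2)
    w -= lr * grad

print(np.round(w, 2))   # should be close to true_w
```

Re-deriving the gradient yourself and checking convergence against a known ground truth is exactly the sort of small, self-contained project that builds both the theoretical and the practical muscle mentioned above.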

 

 

If you're seeking top talent in the field of ML or looking for exciting career opportunities, don't hesitate to get in touch with our experienced AI & ML recruitment team. We connect businesses with skilled professionals and provide a platform for individuals to explore promising MLOps jobs. Whether you're looking to hire talent, browse job listings, or even feature in our blog series, we're here to facilitate your journey in the world of MLOps. Reach out to us today to learn more!