The biggest European conference about ML, AI and Deep Learning applications
running in person in Prague and online.

Machine Learning Prague 2024



World class expertise and practical content packed in 3 days!

You can look forward to an excellent lineup of 40 international experts in business and academic applications of ML and AI at ML Prague 2024. They will present advanced practical talks, hands-on workshops, and other forms of interactive content.

What to expect

  • 1000+ Attendees
  • 3 Days
  • 40 Speakers
  • 10 Workshops
  • 1 Party

Phenomenal Speakers

Would you like to become a speaker and talk to 1,000+ attendees? Apply using this form before January 15th, 2024.

Practical & Inspiring Program


O2 Universum, Českomoravská 2345/17a, 190 00, Praha (workshops won't be streamed)


coffee break

Using Graph Neural Networks to improve telecommunication networks

Room D2

Michele Korosec, Sopra Steria SE
Massimiliano Kuck, Sopra Steria SE

This workshop aims to provide an in-depth understanding of modeling and optimizing telecommunication networks using Graph Neural Networks (GNNs). Telecommunication networks, characterized by a complex structure of interconnected nodes, can be naturally modelled as graph data structures. However, the utilization of graph data structures in machine learning poses unique challenges. The workshop focuses on various aspects of integrating GNNs into telecommunication networks to enhance their performance and customer satisfaction. GNNs introduce a new level of sophistication that allows intelligent decision-making, effective network mapping, prediction, and optimization. The key component of the workshop is a practical use case in which participants will work with a Neo4j graph database. This database aids in visualizing and analyzing the complex connections in the network and provides a base for the GNN. Because GNNs leverage the rich information present in the network structure, valuable insights and predictions on network activity can be made efficiently. Using these predictions to optimize network configuration settings and address network problems could potentially result in lower operational costs, improved network efficiency, and higher customer satisfaction. Participants will learn the practical application of modeling telco networks as graph structures, implementing GNNs, and interpreting their predictions for optimization. Moreover, graph structures are not confined to telecommunications: they can be found across every industry sector, and GNNs have versatile applicability in different use cases. It is an excellent opportunity for practitioners, researchers, and anyone interested in graph data modeling and machine learning to learn from industry experts.
By the end of this workshop, participants will have a more profound understanding of the potential of GNNs, enabling them to apply this technology to optimize various network systems.
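For a flavour of what "modeling a network as a graph" means in code, here is a deliberately tiny, library-free sketch of one message-passing step. The graph, the features, and the update rule are all invented for illustration; the workshop itself uses a Neo4j database and real GNN tooling.

```python
# Adjacency list for a toy 4-node telecom-style graph
edges = {
    "A": ["B", "C"],
    "B": ["A", "C", "D"],
    "C": ["A", "B"],
    "D": ["B"],
}

# One scalar feature per node (e.g. current load) -- invented values
features = {"A": 0.2, "B": 0.8, "C": 0.5, "D": 0.1}

def message_passing_step(edges, features):
    """Each node aggregates the mean of its neighbours' features and
    averages it with its own feature (a simplified GCN-style update)."""
    updated = {}
    for node, neighbours in edges.items():
        agg = sum(features[n] for n in neighbours) / len(neighbours)
        updated[node] = 0.5 * features[node] + 0.5 * agg
    return updated

new_features = message_passing_step(edges, features)
```

Stacking several such steps lets information flow across multiple hops, which is what allows a GNN to predict properties of a node from its network context.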

Automated Evaluation of LLM-Based Systems

Room D3

Marek Matiáš, O2/ Dataclair
Ondřej Finke, Dataclair
Alexandr Vendl, O2/ Dataclair

Development of complex LLM-based solutions cannot be done without a robust system for evaluating their outputs. Whenever you make changes to models, prompts, or other components of the system, you need a metric to find out whether overall performance improved. We will outline a possible solution to this problem, including the broader picture, as well as a hands-on exercise.
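As an illustration of the kind of harness the abstract describes, here is a toy sketch that scores two pipeline versions against a fixed test set. The `fake_llm_*` functions are invented stand-ins for real model calls, and substring matching is the crudest possible metric.

```python
# Fixed evaluation set: (question, expected answer fragment)
test_set = [
    ("capital of France?", "paris"),
    ("2 + 2 = ?", "4"),
    ("largest planet?", "jupiter"),
]

def fake_llm_v1(q):  # stands in for the pipeline before a change
    return {"capital of France?": "Paris", "2 + 2 = ?": "5",
            "largest planet?": "Jupiter"}[q]

def fake_llm_v2(q):  # stands in for the pipeline after a change
    return {"capital of France?": "Paris", "2 + 2 = ?": "4",
            "largest planet?": "Jupiter"}[q]

def accuracy(system, test_set):
    """Fraction of questions whose answer contains the expected fragment."""
    hits = sum(1 for q, expected in test_set
               if expected in system(q).lower())
    return hits / len(test_set)

v1, v2 = accuracy(fake_llm_v1, test_set), accuracy(fake_llm_v2, test_set)
```

Comparing `v1` and `v2` on the same frozen test set is what turns "the prompt feels better" into a measurable regression test; production systems replace the substring check with semantic or LLM-as-judge scoring.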

Chatting with your Data: A Hands-on Introduction to LangChain

Room D4

Marina Volkova, HumbleBuildings

In this interactive workshop, we cover how to get models to work with your own data, whether it's text, PDFs, CSVs, or a SQL database. We'll show you how to efficiently run large language models (LLMs) on a standard 16 GB RAM machine. During the workshop you will gain hands-on experience with the LangChain framework and have the opportunity to create an application that allows you to interact with your data through natural language. The beauty of this workshop lies in its accessibility, as we leverage low-code frameworks that make it suitable for a broad audience. All you need is a laptop with an internet connection, access to the Google Colab environment, and optionally an OpenAI API key to achieve even more impressive results. Join us on this exciting journey of data interaction and exploration!
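The underlying pattern, independent of any framework, can be sketched in a few lines. The chunking and retrieval functions below are invented simplifications of what LangChain provides out of the box; the document and question are toy examples.

```python
def chunk(text, size=5):
    """Split text into chunks of `size` words each."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def retrieve(chunks, question, k=1):
    """Rank chunks by word overlap with the question (toy retriever)."""
    q = set(question.lower().split())
    scored = sorted(chunks, key=lambda c: len(q & set(c.lower().split())),
                    reverse=True)
    return scored[:k]

doc = "The invoice total is 42 EUR . Payment is due in thirty days ."
context = retrieve(chunk(doc), "What is the invoice total?")
prompt = f"Answer using only this context: {context[0]}"
```

An LLM then completes `prompt`; frameworks like LangChain add document loaders, proper embeddings, and chains around exactly this load, split, retrieve, ask loop.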

Empowering Question Answering Systems with RAG: A Deep Dive into Retrieval Augmented Generation

Room D6

Peter Krejzl, Emplifi
Jan Rus, Emplifi

Join us for a workshop on Retrieval Augmented Generation (RAG), a groundbreaking approach in machine learning that seamlessly integrates information retrieval with text generation. Dive deep into the construction of advanced question-answering systems that leverage private knowledge bases, moving beyond mere document retrieval to generate coherent, natural-language responses. Throughout the workshop, participants will benefit from hands-on experience with real datasets and the latest LLM techniques, ensuring practical comprehension. By harnessing the power of semantic search and private databases, a RAG-based system promises a new user experience and proprietary, context-specific solutions.

  • Introduction to RAG: We will begin by introducing the concept of Retrieval Augmented Generation (RAG) and its importance in the landscape of modern machine learning solutions.
  • Dual Strength: RAG uniquely combines an information retrieval component with a text generation model, offering a two-pronged approach to problem-solving.
  • Private Knowledge Bases: Traditional systems rely heavily on public datasets. In this workshop, participants will learn to build systems that leverage private knowledge bases, ensuring proprietary and context-specific responses.
  • Beyond Simple Retrievals: It's not just about finding the right documents. Our RAG-based system will not merely retrieve relevant articles but will craft answers in coherent, natural language, enhancing the user experience.
  • Practical Implementation: We will guide attendees through the process of building a question-answering system powered by semantic search and the latest large language model (LLM) techniques.
  • Hands-on Experience: Participants will work with real datasets and observe live demonstrations, ensuring a practical understanding of the RAG system.
  • Future Potential: The workshop will conclude with a discussion of the future potential and advancements of RAG in diverse applications.

Join us for the workshop and elevate your understanding of how RAG is reshaping the frontier of question-answering systems.
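To make the retrieval half of RAG concrete, here is a toy sketch using bag-of-words "embeddings" and cosine similarity. A real system would use a neural embedding model and a vector store; the documents and query below are invented.

```python
import math

docs = [
    "resetting your router restores the default configuration",
    "the support hotline is open on weekdays",
    "routers ship with a default password printed on the label",
]

def embed(text, vocab):
    """Toy 'embedding': word counts over a fixed vocabulary."""
    return [text.split().count(w) for w in vocab]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

vocab = sorted({w for d in docs for w in d.split()})
query = "what is the default password for the router"
q_vec = embed(query, vocab)

# Retrieval step: pick the document most similar to the query ...
best = max(docs, key=lambda d: cosine(embed(d, vocab), q_vec))
# ... then the generation step grounds the LLM in that document.
prompt = f"Context: {best}\n\nQuestion: {query}\nAnswer from the context only."
```

The two-stage structure, retrieve then generate, is exactly the "dual strength" the bullet list above describes: the retriever supplies private, up-to-date facts, and the LLM phrases the answer.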

Real-Time Anomaly Detection in Python

Room D7

Tomáš Neubauer, Quix
Tun Shwe, Quix

Get to grips with real-time anomaly detection in Python by working through a use case: detecting cyclist crashes. In this hands-on workshop, we will build a streaming data pipeline using Kafka to handle telemetry events from a bicycle sensor/fitness app. From there we will collect data, label it, train an ML model, and deploy it to predict crashes as they happen in real time.
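A minimal sketch of the detection idea, using a simple rolling z-score rather than a trained model: flag a reading that deviates strongly from the recent window. Real pipelines would consume these readings from a Kafka topic; the sensor values below are invented.

```python
import math
from collections import deque

def make_detector(window=10, threshold=3.0):
    """Return a stateful check(value) -> bool anomaly detector."""
    history = deque(maxlen=window)

    def check(value):
        if len(history) >= window:
            mean = sum(history) / len(history)
            var = sum((x - mean) ** 2 for x in history) / len(history)
            std = math.sqrt(var)
            # Anomalous if the value is far outside the rolling window
            is_anomaly = std > 0 and abs(value - mean) / std > threshold
        else:
            is_anomaly = False  # not enough context yet
        history.append(value)
        return is_anomaly

    return check

detect = make_detector()
# Steady accelerometer magnitudes, then a sudden spike (the "crash")
stream = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95, 1.0, 1.1, 0.9, 1.0, 9.5]
flags = [detect(x) for x in stream]
```

In a streaming deployment, `check` would run inside a Kafka consumer so each telemetry event is scored the moment it arrives.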

coffee break

Using Machine Learning for filtering itineraries: A real-life use case

Room D2

Thomas Browne
Lucie Blechova

In this workshop, participants will get an insight into how we utilize machine learning in practice. We will introduce a business case about filtering the itineraries we display through our travel search partners based on customers' interest, with the motivation of increasing the attractiveness of our offer. This session is an opportunity for attendees to follow the whole life cycle of an ML project, from the original business problem to the final deployment of the solution, taking into account both data science and engineering-related issues. We will start by defining the expected goals and asking how they translate into an ML problem. That involves proper metric definition (what does an attractive itinerary actually mean in terms of data?), model design (an ML classifier for whether a given itinerary is attractive), and the evaluation approach used to guarantee the solution's added value. We will then introduce the dataset of our itineraries and go through the featurisation process together. Once features are created, we will move to the model building and selection phase, comparing multiple approaches; a decisive evaluation will help us make the best choice. Afterwards we will define our filtering methodology: through simulations on historical data we will estimate the optimal threshold for our classifier, with the goal of maximising the interest attracted by the itineraries we decide to display. Just like in our other projects, attendees will then be taken through the A/B test phase and what is needed to adopt the developed solution. Finally, we will share the requirements for deploying the solution and give an insight into how frequently the ML model is re-trained.
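The threshold-estimation step can be sketched as a simple sweep over candidate thresholds on historical data. All classifier scores and "interest" values below are invented; the real project optimises a business metric measured from customer behaviour.

```python
# (classifier attractiveness score, observed interest) per past itinerary
history = [
    (0.9, 10), (0.8, 7), (0.7, 5), (0.6, 1),
    (0.4, -2), (0.3, -3), (0.2, -4),
]

def total_interest(threshold, history):
    """Interest generated if we display only itineraries above threshold."""
    return sum(interest for score, interest in history if score >= threshold)

candidates = [0.1, 0.3, 0.5, 0.7, 0.9]
best = max(candidates, key=lambda t: total_interest(t, history))
```

Too low a threshold lets unattractive itineraries dilute the offer; too high a threshold hides good ones. The simulation on historical data finds the balance before any A/B test is run.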

Finetuning Open-Source LLMs for Small Languages

Room D3

Petr Simecek, Mediaboard

Large Language Models (LLMs) represent a remarkable advancement in artificial intelligence (AI), boasting the capability to generate and comprehend human language. Trained on vast text and code datasets, these models excel in a variety of tasks such as translation, summarization, and question answering. However, a major limitation arises when these LLMs, predominantly trained on English data, are applied to other languages, particularly smaller languages like Czech. Notable models like ChatGPT, Bard, and Claude exhibit proficiency in Czech with minimal grammatical and stylistic errors. Yet many contemporary open-source LLMs, influenced heavily by English-centric datasets, fail to address even basic Czech queries. So what are the choices? At Monitora, our initial experiments with ChatGPT for text summarization have now transitioned to Llama 2 7B, primarily due to privacy considerations. We are also evaluating the Mistral models introduced in September. Within this workshop, I will give an introduction to QLoRA adapters and demonstrate that instruction finetuning of such models is possible even with limited resources (a single consumer GPU). We will see that models that originally spoke a very broken language improve significantly through this process. In addition to the technical insights, I envision this workshop as a collaborative forum. Rather than a traditional presentation, it aims to be a platform for knowledge exchange. Attendees are encouraged to contribute their insights, plans, or experiences related to the application of LLMs to small languages. For structure, I would like to limit each interested participant to 5 slides and a presentation time of 5 minutes.
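The LoRA idea underneath QLoRA can be illustrated with toy matrices: the frozen weight matrix W is never updated; only two small factors A and B are trained, and the effective weights become W + (alpha / r) · BA. The numbers below are arbitrary; real adapters are applied per attention layer via libraries such as PEFT.

```python
def matmul(X, Y):
    """Plain-Python matrix multiply for the toy example."""
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

d, r, alpha = 3, 1, 2                    # hidden size, LoRA rank, scaling
W = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]    # frozen base weights (toy identity)
B = [[1], [0], [0]]                      # d x r trainable factor
A = [[0, 1, 0]]                          # r x d trainable factor

delta = matmul(B, A)                     # rank-r update, d x d
scale = alpha / r
W_eff = [[W[i][j] + scale * delta[i][j] for j in range(d)] for i in range(d)]

# Trainable parameters: 2*d*r instead of d*d -- the source of the savings
trainable, full = 2 * d * r, d * d
```

With d in the thousands and r around 8 to 64, the adapter is a tiny fraction of the full matrix, which (together with 4-bit quantization of W in QLoRA) is what makes single-GPU finetuning feasible.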

Building OpenAI Plugins: Deep Dive into Microsoft Semantic Kernel (SK)

Room D4

Daniel Costea, European Agency

Microsoft Semantic Kernel (SK) is a new technology that enables the integration of AI Large Language Models (LLMs) like GPT-3.5-Turbo, GPT-4, and DALL-E 3 from OpenAI or Azure OpenAI with conventional programming languages like C#, Python, and Java. SK brings together several key components to provide planning and execution capabilities: a robust kernel that provides the foundation for all other components; plugins (formerly known as skills) for performing specific tasks; connectors for interfacing with external systems; memories for storing information about past events; steps for defining individual actions; and pipelines for organizing complex, multi-stage plans. In this hands-on workshop we explore how to build various plugins for:

  • building a semantic interface for an existing API using plugins and execution plans containing semantic and native functions;
  • building a GPT-powered chat enriched by real-time information and memories, enhanced through RAG (Retrieval-Augmented Generation) capabilities;
  • building a cutting-edge generative model using DALL-E 3 and multi-modal input.
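The plugin-and-plan pattern itself, stripped of the actual Semantic Kernel API (which differs in both its C# and Python forms), can be sketched as a registry of named functions executed in sequence. Everything here is an invented simplification for intuition only.

```python
plugins = {}

def plugin(name):
    """Decorator that registers a function under a plugin name."""
    def register(fn):
        plugins[name] = fn
        return fn
    return register

@plugin("summarize")
def summarize(text):
    # Toy stand-in for a "semantic" (LLM-backed) function
    return text.split(".")[0] + "."

@plugin("shout")
def shout(text):
    # Toy stand-in for a "native" (plain code) function
    return text.upper()

def run_plan(steps, payload):
    """Execute a plan: feed the payload through each named plugin in order."""
    for step in steps:
        payload = plugins[step](payload)
    return payload

result = run_plan(["summarize", "shout"], "Kernel demo. More text here.")
```

In Semantic Kernel proper, a planner can ask an LLM to compose such a step sequence automatically from the registered plugins' descriptions, which is what makes the kernel more than a function registry.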

Unlocking the Power of Active Learning: A Hands-on Exploration

Room D6

Fabian Kovac, St. Pölten University of Applied Sciences
Oliver Eigner, St. Pölten University of Applied Sciences

In today's ever-evolving landscape of Artificial Intelligence and Machine Learning, staying at the cutting edge is not just an advantage, it's a necessity. Active Learning has emerged as a powerful technique with the potential to revolutionize how we train machine learning models. With this hands-on workshop, we provide attendees with a comprehensive understanding of its relevance, benefits, and practical applications. Active Learning is a paradigm-shifting approach that enables machines to learn more efficiently from limited labeled data by actively selecting the most informative examples for annotation. It plays a pivotal role in various industries and research domains, offering solutions to some of the most pressing challenges in AI and machine learning. By actively involving human experts in the loop, Active Learning not only reduces annotation costs and effort but also accelerates model development, making it particularly relevant for resource-constrained environments where only limited labeled data is available. This workshop will be a hands-on, immersive experience that provides a solid foundation in the theoretical underpinnings, ensuring attendees grasp the core concepts. Participants will have the opportunity to apply Active Learning techniques to real datasets, gaining practical experience in selecting informative data points, training models, and observing the impact on model performance. Furthermore, we will share best practices and pitfalls to avoid when implementing Active Learning in real applications. This workshop promises to equip attendees with the knowledge and skills they need to harness the full potential of Active Learning in their research and industry applications. By the end of the workshop, participants will be well-prepared to incorporate this cutting-edge technique into their AI and Machine Learning endeavors, accelerating progress and achieving superior results.
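One common selection strategy, uncertainty sampling, fits in a few lines: ask the human annotator about the pool examples the current model is least sure of. The pool and probabilities below are invented model outputs; real loops retrain after each labeling round.

```python
pool = {  # example id -> current model's predicted P(positive)
    "a": 0.98, "b": 0.55, "c": 0.03, "d": 0.47, "e": 0.80,
}

def select_for_labeling(pool, budget=2):
    """Return the `budget` examples whose prediction is closest to 0.5,
    i.e. where the classifier is most uncertain."""
    return sorted(pool, key=lambda k: abs(pool[k] - 0.5))[:budget]

to_label = select_for_labeling(pool)
```

Confidently classified examples ("a", "c") would add little; the borderline ones ("d", "b") move the decision boundary most per label, which is where the annotation-cost savings come from.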

Power of Physics-ML, a hands-on workshop with open-source tools

Room D7

Idrees Muhammad, Turing Artificial Intelligence

In this workshop, participants will learn how to combine the power of neural networks with the laws of physics to solve complex scientific and engineering problems using open-source tools. Physics-Informed Neural Networks (PINNs) have gained popularity in various fields, including fluid dynamics, materials science, and structural engineering, for their ability to incorporate physical principles into machine learning models. Attendees will learn how to leverage open-source tools to build and train PINNs, enabling them to model and solve complex physical systems efficiently.
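The "physics loss" a PINN minimises can be illustrated on the toy ODE y'(t) = -y(t): score a candidate solution by how strongly it violates the equation at a set of collocation points. Real PINNs compute the derivative with automatic differentiation inside a deep-learning framework; the finite difference here is only for a self-contained sketch.

```python
import math

def physics_loss(f, points, h=1e-5):
    """Mean squared residual of y' + y = 0 at the collocation points."""
    total = 0.0
    for t in points:
        dydt = (f(t + h) - f(t - h)) / (2 * h)  # central difference
        residual = dydt + f(t)                  # zero if the ODE holds
        total += residual ** 2
    return total / len(points)

points = [0.1 * i for i in range(1, 10)]
good = physics_loss(lambda t: math.exp(-t), points)  # the true solution
bad = physics_loss(lambda t: 1.0 - t, points)        # a linear guess
```

Training a PINN means adjusting network weights to drive this residual (plus any data-fit terms and boundary conditions) toward zero, so the model is penalised for violating physics even where no measurements exist.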


O2 Universum, Českomoravská 2345/17a, 190 00, Praha (and on-line)

Registration from 9:00

Program will be announced.

Conference day 1

O2 Universum, Českomoravská 2345/17a, 190 00, Praha (and on-line)

Doors open at 08:30

Program will be announced.

Have a great time Prague, the city that never sleeps

You can feel centuries of history at every corner in this unique capital. We'll invite you to get a taste of our best pivo (that’s beer in Czech) and then bring you back to the present day to party at one of the local clubs all night long!

Venue ML Prague 2024 will run hybrid, in person and online!

The main conference as well as the workshops will be held at O2 Universum.

We will also livestream the talks for all those participants who prefer to attend the conference online. Our platform will allow interaction with speakers and other participants too. Workshops require intensive interaction and won't be streamed.

Conference building

O2 Universum
Českomoravská 2345/17a, 190 00, Praha 9



Now or never Registration

Early Bird

Sold Out

  • Conference days € 270
  • Only workshops € 200
  • Conference + workshops € 440



Last 100 registrations

  • Conference days € 320
  • Only workshops € 260
  • Conference + workshops € 520

What You Get

  • Practical and advanced level talks led by top experts.
  • Party in the city with people from around the world. Let’s go wild!
  • Delicious food and snacks throughout the conference.

They’re among us We are in The ML Revolution age

Machines can learn. Incredibly fast. Faster than you. They are getting smarter and smarter every single day, changing the world we’re living in, our business and our life. The artificial intelligence revolution is here. Come, learn and make this threat your biggest advantage.

Our Attendees What they say about ML Prague

Thank you to Our Partners

Co-organizing Partner

Venue Partner

Platinum Partners

Gold Partners

Communities and Further support

Would you like to present your brand to 1000+ Machine Learning enthusiasts? Send us an email at to find out how to become an ML Prague 2024 partner.

Become a partner

Happy to help Contact

If you have any questions about Machine Learning Prague, please e-mail us at


Jiří Materna
Scientific program & Co-Founder

Teresa Caklova
Event production

Gonzalo V. Fernández
Marketing and social media

Jona Azizaj

Ivana Javná
Speaker support

Barbora Toman Hanousková