
List of tutorials


The power of non-robust cuts in branch-cut-and-price algorithms

Tutorial of the OM/OCPE axis (Mathematical Optimization / Combinatorial Optimization and Integer Programming) of the GDR RO.

Ruslan Sadykov, INRIA Bordeaux - Sud Ouest

Ruslan.Sadykov@inria.fr

Abstract - Mixed Integer Programming (MIP) solvers have reached remarkable efficiency for a wide range of optimization problems in recent years. There are, however, some classes of problems which “resist”. Among others, these include classes of vehicle routing, scheduling, and network design problems. For many of them, no compact MIP formulation is known that has a strong linear relaxation, even after adding generic or problem-specific cutting planes. Extended formulations with an exponential number of variables, solved by branch-cut-and-price (BCP) algorithms, may come to the rescue in this case. BCP algorithms are, however, competitive with modern MIP solvers only if the extended formulation has a much stronger linear relaxation than known compact formulations. Non-robust cuts, i.e. cuts expressed in the space of the extended variables, often help to obtain such a relaxation and allow BCP algorithms to achieve state-of-the-art efficiency. In this tutorial, we will review some families of known non-robust cutting planes. We will also discuss issues which may arise while using non-robust cuts, including their impact on the solution time of the pricing problem, implementation difficulties, and the generality/efficiency trade-off. Some recommendations on how to overcome these issues will be given.


Decision rules for multi-stage adjustable robust optimization

Tutorial of the DOR axis (Decision and Robust Optimization) of the GDR RO.

Ayse N. Arslan, INSA Rennes

Ayse-Nur.Arslan@insa-rennes.fr

Abstract - Multi-stage adjustable robust optimization, as a subclass of multi-stage optimization under uncertainty, constitutes a class of problems that are very difficult to solve in practice. Although these problems can be solved exactly in certain special cases, no exact solution algorithm is known for the general case. Instead, approximate solution methods have been developed that restrict either the functional form or the feasible space of the recourse actions; these are generally referred to as “decision rules”. In this talk, we will present a review of existing decision rule approximations, including affine and extended affine decision rules, uncertainty set partitioning schemes, and finite adaptability. We will discuss the reformulations and solution algorithms that result from these approximations. We will point out existing challenges in the practical use of these decision rules, as well as current and future research directions. Where possible, we will highlight connections to the multi-stage stochastic programming literature.
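As a small illustration of the affine decision rules mentioned in the abstract, the sketch below compares a static (constant) rule with an affine rule y(xi) = y0 + y1*xi on a toy one-dimensional problem. The problem, the grid search (a stand-in for the LP solver one would normally use), and all numbers are illustrative assumptions, not material from the talk; the vertex-checking step does reflect the standard reformulation idea, since an affine function of xi attains its worst case over an interval at an endpoint.

```python
import itertools

# Toy two-stage problem: uncertain demand xi in [0, 1]; the recourse y(xi)
# must satisfy y(xi) >= xi, and we minimize the worst-case overage
# max_xi (y(xi) - xi). A static rule fixes y; an affine decision rule uses
# y(xi) = y0 + y1*xi. Both constraint and objective are affine in xi, so it
# suffices to check the vertices xi in {0, 1} of the uncertainty interval.

VERTICES = [0.0, 1.0]

def worst_case_overage(y0, y1):
    """Worst-case overage of y(xi) = y0 + y1*xi, or None if infeasible."""
    if any(y0 + y1 * xi < xi for xi in VERTICES):   # feasibility at the vertices
        return None
    return max(y0 + y1 * xi - xi for xi in VERTICES)

def best_rule(allow_adjustable):
    """Crude grid search over rule coefficients (a stand-in for an LP solver)."""
    grid = [i / 10 for i in range(11)]
    slopes = grid if allow_adjustable else [0.0]
    best = None
    for y0, y1 in itertools.product(grid, slopes):
        cost = worst_case_overage(y0, y1)
        if cost is not None and (best is None or cost < best[0]):
            best = (cost, y0, y1)
    return best

static_cost, *_ = best_rule(allow_adjustable=False)
affine_cost, *_ = best_rule(allow_adjustable=True)
print(static_cost, affine_cost)  # the affine rule strictly improves the worst case
```

Here the static rule must hedge against the worst vertex, while the adjustable affine rule y(xi) = xi tracks the demand exactly, illustrating the value of adjustability.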


The Longest Processing Time rule for identical parallel machines revisited

Tutorial of the OPA axis (Scheduling, Planning and Applications) of the GDR RO.

Federico Della Croce, Politecnico di Torino, Italy

federico.dellacroce@polito.it

Abstract - We consider the P||Cmax scheduling problem, where the goal is to schedule n jobs on m identical parallel machines to minimize the makespan. We revisit the famous Longest Processing Time (LPT) rule proposed by Graham in 1969. LPT sorts the jobs in non-ascending order of processing times and then assigns one job at a time to the machine whose load is smallest so far. We provide new insights into LPT and discuss the approximation ratio of a modification of LPT that improves Graham’s bound. We use linear programming to analyze the approximation ratio of our approach. This performance analysis can be seen as a valid alternative to formal proofs based on analytical derivation. We also derive from the proposed approach an O(n log n) time heuristic called SLACK. The heuristic splits the sorted job set into tuples of m consecutive jobs (1,...,m; m+1,...,2m; etc.) and sorts the tuples in non-increasing order of the difference (SLACK) between the largest and smallest job in the tuple. Given this new ordering of the job set, list scheduling is applied. This approach strongly outperforms LPT on benchmark instances from the literature and is competitive with more involved approaches such as COMBINE and LDM.
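The LPT and SLACK rules described in the abstract fit in a few lines. The sketch below is an illustrative implementation following the abstract's description, not the authors' code:

```python
import heapq

def list_schedule(jobs, m):
    """Assign jobs, in the given order, to the machine with the smallest load so far."""
    loads = [0.0] * m
    heap = [(0.0, i) for i in range(m)]   # min-heap of (load, machine) pairs
    for p in jobs:
        load, i = heapq.heappop(heap)
        loads[i] = load + p
        heapq.heappush(heap, (loads[i], i))
    return max(loads)                     # makespan

def lpt(jobs, m):
    """Graham's LPT rule: list scheduling on jobs sorted in non-ascending order."""
    return list_schedule(sorted(jobs, reverse=True), m)

def slack(jobs, m):
    """SLACK heuristic: split the sorted jobs into tuples of m consecutive jobs,
    sort the tuples in non-increasing order of the (largest - smallest) gap,
    then apply list scheduling to the resulting job order."""
    s = sorted(jobs, reverse=True)
    tuples = [s[i:i + m] for i in range(0, len(s), m)]
    tuples.sort(key=lambda t: t[0] - t[-1], reverse=True)
    return list_schedule([p for t in tuples for p in t], m)

# Classic instance where LPT is suboptimal: makespan 7 vs. the optimum 6 (3+3, 2+2+2).
print(lpt([3, 3, 2, 2, 2], 2))  # 7
```

Both heuristics run in O(n log n) time: the sorts dominate, and each of the n heap operations costs O(log m).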


Online learning in multi-agent systems: Regret, equilibrium, and the road ahead

Tutorial of the DMEI axis (Decision, Modeling, Evaluation, Uncertainty) and the DAAO cross-cutting action (Data, Machine Learning and Optimization) of the GDR RO.

Panayotis Mertikopoulos, Université Grenoble Alpes

panayotis.mertikopoulos@imag.fr

Abstract - From auctions to traffic planning and the training of adversarial neural nets, the goal of multi-agent learning is always the same: to make more informed decisions that lead to better rewards over time. But who benefits in the end? Each agent unilaterally? All agents concurrently? Both / none of the above? This tutorial explores these questions by providing an introduction to online optimization and learning from a multi-agent, game-theoretic viewpoint. We will cover the basics of learning in finite games (focusing on the multiplicative/exponential weights algorithm and its properties) and the mainstay algorithms for online optimization in continuous games (online gradient descent, follow the regularized leader, etc.). We will also discuss the impact of the information available to the players, as well as a range of concrete applications to machine learning and operations research.
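A minimal sketch of the multiplicative/exponential weights algorithm mentioned in the abstract, shown in the single-player online setting (one weight per action, losses revealed each round); the loss sequence and step size below are illustrative assumptions:

```python
import math

def exp_weights(loss_rows, eta):
    """Multiplicative/exponential weights: keep one weight per action, play the
    mixed strategy proportional to the weights, then exponentially down-weight
    each action according to its observed loss."""
    n = len(loss_rows[0])
    w = [1.0] * n
    total_loss = 0.0
    for losses in loss_rows:
        s = sum(w)
        p = [wi / s for wi in w]                           # mixed strategy this round
        total_loss += sum(pi * li for pi, li in zip(p, losses))
        w = [wi * math.exp(-eta * li) for wi, li in zip(w, losses)]
    # External regret: the algorithm's loss minus the best fixed action in hindsight.
    best_fixed = min(sum(row[a] for row in loss_rows) for a in range(n))
    return total_loss - best_fixed

# Two actions, action 1 slightly better every round.
T = 500
rows = [[1.0, 0.9]] * T
regret = exp_weights(rows, eta=math.sqrt(math.log(2) / T))
print(regret / T)  # per-round regret is small and vanishes as T grows
```

With the step size eta of order sqrt(ln(n)/T), the cumulative regret grows only like sqrt(T), which is the no-regret property underlying many of the game-theoretic convergence results the tutorial discusses.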


Learning-Augmented Online Algorithms

Tutorial of the CAGDO axis (Complexity, Approximation and Graphs for Decision and Optimization) of the GDR RO.

Bertrand Simon, Centre de calcul de l’IN2P3 - Villeurbanne

bertrand.simon@cnrs.fr

Abstract - Classical online algorithms with guarantees are often designed to minimize the competitive ratio, i.e., their worst-case performance, and therefore usually do not perform well on easy inputs. An orthogonal approach has been developed in the last decade thanks to progress in machine learning, which makes it possible to predict key parameters of the instance. Its downside is that no guarantee can be given on the prediction quality, for instance if the training set does not represent the current instance. This observation is at the origin of the recently emerging field of learning-augmented algorithms. In this framework, an online algorithm has access to a predictor, which can give relevant information about the future instance, though without any guarantee on its quality. The objective is then to design an algorithm which performs well when the predictor turns out to be accurate, while remaining robust to imprecise predictions. This tutorial will introduce this new domain, present some recent works, and discuss current research questions.
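As a concrete illustration of the consistency/robustness trade-off described in the abstract, here is a sketch of the classical ski-rental example with a prediction, a standard toy problem in the learning-augmented literature (the deterministic threshold scheme below is one well-known variant; whether the tutorial covers this exact example is an assumption):

```python
import math

def ski_rental_with_prediction(num_days, predicted_days, buy_cost, lam):
    """Ski rental with a prediction: rent at 1 per day or buy once for buy_cost.
    If the predictor forecasts a long season, buy early, on day ceil(lam * buy_cost);
    otherwise buy late, on day ceil(buy_cost / lam). A smaller lam trusts the
    predictor more: better consistency when it is right, weaker robustness when
    it is wrong. Returns the total cost actually paid over num_days days."""
    if predicted_days >= buy_cost:
        threshold = math.ceil(lam * buy_cost)
    else:
        threshold = math.ceil(buy_cost / lam)
    if num_days < threshold:
        return num_days                     # rented every day, never bought
    return (threshold - 1) + buy_cost       # rented, then bought on day `threshold`

B = 10
good = ski_rental_with_prediction(num_days=100, predicted_days=100, buy_cost=B, lam=0.5)
bad = ski_rental_with_prediction(num_days=100, predicted_days=1, buy_cost=B, lam=0.5)
print(good / B, bad / B)  # 1.4 and 2.9: <= 1 + lam when right, <= 1 + 1/lam when wrong
```

The optimum with 100 ski days is simply B = 10, so an accurate prediction beats the prediction-free 2-competitive bound, while even a completely wrong prediction stays within the 1 + 1/lam robustness guarantee.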


About Modeling and Solving Combinatorial Constrained Problems (in Python)

Tutorial of the MH2PPC axis (Hybrid Methods, (Meta)Heuristics, Constraint Programming) of the GDR RO.

Christophe Lecoutre, Université d'Artois - Lens

lecoutre@cril.fr

Abstract - In this talk, we will start by introducing PyCSP3, a Python library that allows us to easily write models of combinatorial constrained problems. We will show how PyCSP3 can be useful for learning more about Constraint Programming (CP), notably by letting the user play with the 60 Jupyter notebooks available at pycsp.org (covering popular constraints, classical models, and incremental solving). Then, we will focus on new advanced forms of constraints combining intensional and extensional representations. These hybrid constraints are promising both for simplifying models and for efficiently solving problem instances. Finally, we will discuss several hybrid forms of solving techniques developed from related fields: complete operations research approaches, local search, and SAT.




Methods for nondifferentiable convex optimization

Tutorial of the REST axis (Networks, Energy, Services, Transport) of the GDR RO, in association with the GDR RSD.

Adam Ouorou, Orange Innovation - France

adam.ouorou@orange.com

Abstract - We review the basic ideas underlying the vast family of algorithms for minimizing nondifferentiable convex functions. The simplest methods rely on building models of the function to be minimized. However, the lack of continuity of first-order information makes these models unreliable, even close to an optimum. This motivates the introduction of various stabilization devices. We present methods based on so-called localization sets, as well as their improvements through stabilization. We conclude with approaches that exploit the acceleration techniques proposed by Nesterov for gradient methods.
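A minimal sketch of the most basic method in this family, the subgradient method with diminishing steps, on an illustrative one-dimensional example. It hints at why the stabilization devices discussed in the abstract are needed: the objective need not decrease at every step, so one has to track the best iterate.

```python
def subgradient_method(f, subgrad, x0, steps):
    """Basic subgradient method with diminishing steps t_k = 1/sqrt(k):
    x_{k+1} = x_k - t_k * g_k, where g_k is any subgradient at x_k.
    Unlike gradient descent on a smooth function, f may oscillate,
    so we return the best objective value seen."""
    x, best = x0, f(x0)
    for k in range(1, steps + 1):
        x = x - subgrad(x) / k ** 0.5
        best = min(best, f(x))
    return best

# f(x) = |x - 1| is convex but nondifferentiable exactly at its minimizer x* = 1.
f = lambda x: abs(x - 1.0)
g = lambda x: 1.0 if x > 1.0 else -1.0   # a valid subgradient (choose -1 at the kink)
best = subgradient_method(f, g, x0=5.0, steps=2000)
print(best)  # close to the optimal value 0, approached slowly and non-monotonically
```

The iterates keep crossing the kink with ever smaller steps, which is exactly the unreliable first-order behavior that bundle and localization-set methods stabilize.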




The Christoffel-Darboux Kernel for Data Analysis

Tutorial of the OM/PMNL axis (Mathematical Optimization / Nonlinear Mathematical Programming) of the GDR RO, in association with the GDR MACS.

Jean-Bernard Lasserre, CNRS - LAAS - Toulouse

lasserre@laas.fr

Abstract - While the Christoffel-Darboux (CD) kernel is well known in approximation theory and the theory of orthogonal polynomials, its striking properties seem to have been largely ignored in the context of data analysis (one main reason being that in data analysis one is faced with measures supported on finitely many points, namely the data set). In this talk, we briefly introduce the CD kernel and some of its properties, and argue that it can become a simple and easy-to-use tool in data analysis, e.g., to help solve problems such as outlier detection, manifold learning, and density estimation.
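A small numerical sketch of the empirical CD kernel applied to outlier detection, as evoked in the abstract. The monomial basis, degree, and data below are illustrative assumptions; NumPy's pseudo-inverse guards against a singular moment matrix.

```python
import numpy as np

def christoffel_scores(data, degree):
    """Evaluate the empirical CD kernel K(x, x) at each data point.
    M is the (degree+1) x (degree+1) moment matrix of the empirical measure
    in the monomial basis v(x) = (1, x, ..., x^degree), and
    K(x, x) = v(x)^T M^{-1} v(x). A large K(x, x), i.e. a small Christoffel
    function 1/K(x, x), flags points atypical for the empirical measure."""
    V = np.vander(data, degree + 1, increasing=True)   # row i is v(x_i)
    M = V.T @ V / len(data)                            # empirical moment matrix
    Minv = np.linalg.pinv(M)                           # pseudo-inverse for safety
    return np.einsum('ij,jk,ik->i', V, Minv, V)        # K(x_i, x_i) for each i

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(0.0, 1.0, 100), [8.0]])  # a cluster + one outlier
scores = christoffel_scores(data, degree=2)
print(int(np.argmax(scores)))  # the appended outlier gets the largest score
```

On a finite data set these scores coincide, up to the 1/N scaling, with leverage scores in the polynomial feature space, which is one way to see why the kernel separates outliers from the bulk of the data.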


