Convexification with bounded gap for randomly projected QO and beyond
1 : Graduate School of Information Science and Technology, The University of Tokyo
2 : RIKEN Center for Advanced Intelligence Project
3 : RIKEN Center for Advanced Intelligence Project, Tokyo
4 : Graduate School of Information Science and Technology, The University of Tokyo, Tokyo 113-8656, Japan, and RIKEN Center for Advanced Intelligence Project
Random projection techniques based on the Johnson-Lindenstrauss lemma randomly aggregate the constraints or variables of optimization problems while approximately preserving their optimal values, which leads to smaller-scale optimization problems. In this talk we apply random projection to a non-convex quadratic optimization problem and show that such a problem can be approximated by a convex quadratic optimization problem.
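As a rough illustration of the kind of dimensionality reduction involved (a minimal sketch in generic notation, not the authors' construction; all names and sizes below are invented for the example), the following snippet aggregates the equality constraints of an optimization problem with a Gaussian sketching matrix in the spirit of the Johnson-Lindenstrauss lemma:

```python
# Minimal sketch (not the authors' construction; all sizes below are made up):
# Johnson-Lindenstrauss-style random aggregation of the equality constraints
#   A x = b   (m constraints)
# of an optimization problem, replaced by k << m random combinations
#   (S A) x = S b.
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 200, 500, 40                         # variables, original and aggregated constraint counts

A = rng.standard_normal((m, n))
x_feas = rng.standard_normal(n)
b = A @ x_feas                                 # right-hand side chosen so the system is feasible

S = rng.standard_normal((k, m)) / np.sqrt(k)   # Gaussian sketch with JL-style scaling
A_agg, b_agg = S @ A, S @ b                    # smaller, randomly aggregated constraint system

print("constraints:", m, "->", k)
# Any x with A x = b also satisfies (S A) x = S b, so aggregation only enlarges
# the feasible set; the residual below is zero up to rounding error.
print("aggregated residual at x_feas:", np.linalg.norm(A_agg @ x_feas - b_agg))
```

Because every point feasible for the original system stays feasible for the aggregated one, the smaller problem can only relax the feasible set; how far the optimal value can move under such random reductions is the kind of gap studied in the talk.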
To the best of our knowledge, our paper is the first to use random projection for convexification of non-convex optimization problems.
In this talk we will evaluate the approximation error between the optimal value of the non-convex optimization problem and that of its convexified, randomly projected counterpart.
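In generic notation of our own (not necessarily the authors'), writing $v^{*}$ for the optimal value of the non-convex problem and $\tilde{v}^{*}$ for that of its convexified, randomly projected counterpart, the quantity being evaluated is the gap
\[
\bigl| v^{*} - \tilde{v}^{*} \bigr|,
\]
which, per the title, admits a bound.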