diff --git a/main.pdf b/main.pdf
index 76831b5..794833e 100644
Binary files a/main.pdf and b/main.pdf differ
diff --git a/main.tex b/main.tex
index 2421567..1c4d045 100644
--- a/main.tex
+++ b/main.tex
@@ -181,7 +181,8 @@ To get a $O(\sqrt{\log n})$ (randomized) approximation algorithm we need to firs
 \]
 This is the framework of the proof in \cite{arora_expander_2004}.
-I think the intuition behind this SDP relaxation is almost the same as \metric{}. $\ell_1$ metrics are good since they are in the cut cone. If we further require the metric in \metric{} is a $\ell_1$ metric in $\R^d$, the resulting LP is still a relaxation of \nonuscut{} and the integrality gap is upperbounded by the metric embedding theorem. \cite{leighton_multicommodity_1999} showed that the $\Theta(\log n)$ gap is tight for \metric{}, but add extra constraints to \metric{} (while keeping it to be a relaxation of \scut{} and to be polynomially solvable) may provides better gap. The SDP relaxation is in fact trying to enforce the metric to be $\ell_2^2$ in $\R^n$.
+I think the intuition behind this SDP relaxation is almost the same as that of \metric{}: $\ell_1$ metrics are good since they lie in the cut cone. However, if we further require that the metric in \metric{} be an $\ell_1$ metric in $\R^d$, the resulting program is NP-hard to solve, since its integrality gap becomes $1$.
+\cite{leighton_multicommodity_1999} showed that the $\Theta(\log n)$ gap is tight for \metric{}, but adding extra constraints to \metric{} (while keeping it a relaxation of \scut{} and polynomially solvable) may yield a better gap. The SDP relaxation is in fact trying to enforce the metric to be $\ell_2^2$ in $\R^n$.
 \bibliographystyle{alpha}
 \bibliography{ref}
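For concreteness, the $\ell_2^2$ condition mentioned in the added lines can be written out as a constraint on vectors. This is a sketch of the standard formulation from \cite{arora_expander_2004}, not text from the patch itself: one assigns a unit vector $v_i \in \R^n$ to each vertex and requires the squared Euclidean distances to satisfy the triangle inequality.

```latex
% Sketch (editorial note, not part of the patch): the SDP relaxation uses
% unit vectors v_1, ..., v_n in R^n and enforces that
% d(i,j) = \|v_i - v_j\|^2 is a metric of negative type (an l_2^2 metric):
\[
  \|v_i - v_j\|^2 \;\le\; \|v_i - v_k\|^2 + \|v_k - v_j\|^2
  \qquad \text{for all } i, j, k,
\]
% together with \|v_i\|^2 = 1 for all i. These constraints are linear in the
% Gram matrix X_{ij} = \langle v_i, v_j \rangle, so the program is an SDP and
% is solvable in polynomial time (to any fixed accuracy).
```

This is exactly why the relaxation sits between the LP and the (NP-hard) $\ell_1$ restriction: every $\ell_1$ metric is $\ell_2^2$, so the SDP is still a relaxation of \scut{}, yet the $\ell_2^2$ constraints are strong enough to improve the gap to $O(\sqrt{\log n})$.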