This is the framework of the proof in \cite{arora_expander_2004}.
I think the intuition behind this SDP relaxation is almost the same as \metric{}: $\ell_1$ metrics are good since they lie in the cut cone. However, if we further require the metric in \metric{} to be an $\ell_1$ metric in $\R^d$, then the resulting program is NP-hard, since the integrality gap becomes 1.
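To spell out the cut-cone fact used here: for $S \subseteq V$ the cut pseudometric is $\delta_S(u,v) = |\mathbf{1}_S(u) - \mathbf{1}_S(v)|$, and a finite metric embeds isometrically into $\ell_1$ exactly when it is a nonnegative combination of cut metrics,

```latex
\[
  d \in \mathrm{CUT}_n
  \iff
  d = \sum_{S \subseteq V} \lambda_S\, \delta_S
  \quad \text{for some } \lambda_S \ge 0 .
\]
```

Minimizing a ratio of two linear functionals of $d$ over this cone is attained on an extreme ray, i.e.\ on a single cut metric, which is why requiring an exact $\ell_1$ metric makes the program as hard as \scut{} itself.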
\cite{leighton_multicommodity_1999} showed that the $\Theta(\log n)$ gap is tight for \metric{}, but adding extra constraints to \metric{} (while keeping it a polynomially solvable relaxation of \scut{}) may give a better gap. The SDP relaxation is in fact trying to force the metric to be $\ell_2^2$ in $\R^n$.
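Concretely, one standard way to write this SDP relaxation in vector form is the following sketch (the symbols $c$ for edge capacities and $D$ for demands are my notation, since the section does not fix any):

```latex
\begin{align*}
  \min \quad & \sum_{\{u,v\} \in E} c(u,v)\, \|x_u - x_v\|^2 \\
  \text{s.t.} \quad & \sum_{\{u,v\}} D(u,v)\, \|x_u - x_v\|^2 = 1, \\
  & \|x_u - x_v\|^2 + \|x_v - x_w\|^2 \ge \|x_u - x_w\|^2 && \forall u,v,w \in V, \\
  & x_u \in \R^n && \forall u \in V.
\end{align*}
```

The triangle inequalities on the squared distances are exactly the statement that $\|x_u - x_v\|^2$ is an $\ell_2^2$ metric; dropping them recovers an eigenvalue-type relaxation.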
\begin{remark}
$O(\sqrt{\log n})$ is likely to be the optimal bound for the above SDP. To get a better gap one can either stay with the SDP and add further constraints (as in the Sherali-Adams, Lovász-Schrijver and Lasserre hierarchies), or treat the distances as variables in an LP and force feasible solutions to be a certain kind of metric. \cite{arora_towards_2013} follows the former approach and considers Lasserre relaxations. For the latter approach, getting a cut from the optimal metric is the same as embedding it into $\ell_1$, so it still relies on progress in metric embedding theory. Note that both methods need to satisfy
\begin{enumerate}
\item the further constrained program is polynomially solvable,
\item it remains a relaxation of \scut{}.
\end{enumerate}
The Lasserre relaxation of the SDP automatically satisfies 1 and 2. But I believe there may be some very strange kinds of metrics that embed into $\ell_1$ well?
Another possible approach for \nonuscut{} would be making the number of demand vertices small and then applying a metric embedding (contraction) to $\ell_1$ with better distortion on those vertices.
\end{remark}
\subsection{SDP \texorpdfstring{$O(\sqrt{\log n}\log \log n)$}{O(√log n log log n)}-\nonuscut}
Arora, Lee and Naor \cite{arora_euclidean_2005,arora_frechet_2007} proved that there is an embedding from $\ell_2^2$ to $\ell_1$ with distortion $O(\sqrt{\log n}\log \log n)$. This implies an approximation for \nonuscut{} with the same ratio.
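The averaging argument behind "embedding implies approximation" can be sketched as follows (again $c$, $D$, $\mathrm{cap}$, $\mathrm{dem}$ are my notation): if $f$ maps the optimal SDP solution into $\ell_1$ with $\frac{1}{\alpha}\|x_u - x_v\|^2 \le \|f(u)-f(v)\|_1 \le \|x_u - x_v\|^2$ for distortion $\alpha = O(\sqrt{\log n}\log\log n)$, then since $\|f(\cdot)-f(\cdot)\|_1$ lies in the cut cone,

```latex
\[
  \min_{S \subseteq V} \frac{\mathrm{cap}(S,\bar S)}{\mathrm{dem}(S,\bar S)}
  \;\le\;
  \frac{\sum_{u,v} c(u,v)\,\|f(u)-f(v)\|_1}{\sum_{u,v} D(u,v)\,\|f(u)-f(v)\|_1}
  \;\le\;
  \alpha \cdot
  \frac{\sum_{u,v} c(u,v)\,\|x_u-x_v\|^2}{\sum_{u,v} D(u,v)\,\|x_u-x_v\|^2}.
\]
```

The first inequality is the cut-cone decomposition (a ratio of sums dominates the minimum over the cuts in the decomposition); the second uses the distortion bounds on numerator and denominator separately.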
\section{Nearly uniform \scut{}}
What is the best approximation ratio for \uscut{} instances where almost all demands are uniform?
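As a concrete baseline for experimenting with such instances, the objective can be computed exactly by brute force on tiny graphs. The following is a minimal sketch (all function names are hypothetical, not from the text); it enumerates all cuts and returns the one minimizing $\mathrm{cap}(S,\bar S)/\mathrm{dem}(S,\bar S)$:

```python
from itertools import combinations

def sparsity(n, cap, dem, S):
    """Sparsity cap(S, V-S) / dem(S, V-S) of the cut given by vertex set S.

    cap and dem map sorted vertex pairs (u, v), u < v, to nonnegative weights.
    """
    Sbar = set(range(n)) - S
    c = sum(cap.get((min(u, v), max(u, v)), 0) for u in S for v in Sbar)
    d = sum(dem.get((min(u, v), max(u, v)), 0) for u in S for v in Sbar)
    return c / d if d else float("inf")

def sparsest_cut(n, cap, dem):
    """Exhaustive search over all nontrivial cuts; only feasible for small n."""
    best, best_S = float("inf"), None
    for k in range(1, n // 2 + 1):          # each cut {S, V-S} visited via its smaller side
        for S in combinations(range(n), k):
            s = sparsity(n, cap, dem, set(S))
            if s < best:
                best, best_S = s, set(S)
    return best, best_S
```

For example, on the 4-cycle with unit capacities and uniform unit demands, the sparsest cut splits the cycle into two antipodal pairs and has sparsity $2/4 = 1/2$.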