The last post discussing the VF2++ helpers can be found here. Now that we’ve figured out how to solve all the sub-problems that VF2++ consists of, we are ready to combine the implemented functionality into the final solver for the Graph Isomorphism problem.
We should quickly review the individual functionalities used in the VF2++ algorithm:
We are going to use all these functionalities to form our Isomorphism solver.
First of all, let’s describe the algorithm in simple terms, before presenting the pseudocode. The algorithm will look something like this:
The official code for VF2++ is presented below.
# Check if there's a graph with no nodes in it
if G1.number_of_nodes() == 0 or G2.number_of_nodes() == 0:
    return False

# Check that both graphs have the same number of nodes and degree sequence
if not nx.faster_could_be_isomorphic(G1, G2):
    return False

# Initialize parameters (Ti/Ti_tilde, i=1,2) and cache necessary information about degree and labels
graph_params, state_params = _initialize_parameters(G1, G2, node_labels, default_label)

# Check if G1 and G2 have the same labels, and that number of nodes per label is equal between the two graphs
if not _precheck_label_properties(graph_params):
    return False

# Calculate the optimal node ordering
node_order = _matching_order(graph_params)

# Initialize the stack to contain node-candidates pairs
stack = []
candidates = iter(_find_candidates(node_order[0], graph_params, state_params))
stack.append((node_order[0], candidates))

mapping = state_params.mapping
reverse_mapping = state_params.reverse_mapping

# Index of the node from the order, currently being examined
matching_node = 1

while stack:
    current_node, candidate_nodes = stack[-1]
    try:
        candidate = next(candidate_nodes)
    except StopIteration:
        # If no remaining candidates, return to a previous state, and follow another branch
        stack.pop()
        matching_node -= 1
        if stack:
            # Pop the previously added u-v pair, and look for a different candidate _v for u
            popped_node1, _ = stack[-1]
            popped_node2 = mapping[popped_node1]
            mapping.pop(popped_node1)
            reverse_mapping.pop(popped_node2)
            _restore_Tinout(popped_node1, popped_node2, graph_params, state_params)
        continue

    if _feasibility(current_node, candidate, graph_params, state_params):
        # Terminate if the mapping is extended to its full size
        if len(mapping) == G2.number_of_nodes() - 1:
            cp_mapping = mapping.copy()
            cp_mapping[current_node] = candidate
            yield cp_mapping
            continue

        # Feasibility rules pass, so extend the mapping and update the parameters
        mapping[current_node] = candidate
        reverse_mapping[candidate] = current_node
        _update_Tinout(current_node, candidate, graph_params, state_params)

        # Append the next node and its candidates to the stack
        candidates = iter(
            _find_candidates(node_order[matching_node], graph_params, state_params)
        )
        stack.append((node_order[matching_node], candidates))
        matching_node += 1
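In NetworkX, this generator backs the public VF2++ entry points. As a quick sanity check of those functions (a sketch assuming a NetworkX version that ships VF2++):

```python
import networkx as nx

# The solver above is exposed through three public functions.
G1, G2 = nx.path_graph(4), nx.path_graph(4)

assert nx.vf2pp_is_isomorphic(G1, G2, node_label=None)
mapping = nx.vf2pp_isomorphism(G1, G2, node_label=None)
assert mapping is not None and len(mapping) == 4
# A 4-node path has exactly two automorphisms: identity and reversal.
assert len(list(nx.vf2pp_all_isomorphisms(G1, G2, node_label=None))) == 2
```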
This section is dedicated to the performance comparison between VF2 and VF2++. The comparison was performed on random graphs without labels, with node counts ranging from $100$ to $2000$. The results are depicted in the two following diagrams.
We notice that the maximum speedup achieved is 14x, and it continues to grow as the number of nodes increases. It is also prominent that increasing the number of nodes doesn’t seem to affect the performance of VF2++ to a significant extent, compared to its drastic impact on the performance of VF2. Our results are almost identical to those presented in the original VF2++ paper, verifying the theoretical analysis and premises of the literature.
The achieved boost is due to some key improvements and optimizations, specifically:
res = []
for node in G2.nodes():
    if G1.degree[u] == G2.degree[node]:
        res.append(node)
# do stuff with res ...
to get the nodes with the same degree as u (which happens many times in the implementation), we just do:
res = G2_nodes_of_degree[G1.degree[u]]
# do stuff with res ...
where “G2_nodes_of_degree” stores the set of nodes for each degree. The same is done with node labels.
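Concretely, building such a cache is a one-time pass over the graph. A minimal sketch (the name G2_nodes_of_degree mirrors the text; the random graph is just for illustration):

```python
from collections import defaultdict

import networkx as nx

# One-time pass: bucket the nodes of G2 by degree.
G2 = nx.gnp_random_graph(20, 0.3, seed=42)
G2_nodes_of_degree = defaultdict(set)
for node in G2:
    G2_nodes_of_degree[G2.degree[node]].add(node)

# Afterwards, "all nodes of degree d" is a single dict lookup instead of a scan.
d = G2.degree[0]
assert G2_nodes_of_degree[d] == {n for n in G2 if G2.degree[n] == d}
```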
candidates = set(G2.nodes())
for candidate in candidates:
    if feasibility(u, candidate):
        do_stuff()
we take a huge set of candidates, which results in poor performance because “feasibility” is called once per candidate, so the feasibility checks are performed on a very large set. Now compare that to the following alternative:
candidates = [
    n
    for n in G2_nodes_of_degree[G1.degree[u]].intersection(
        G2_nodes_of_label[G1_labels[u]]
    )
]
for candidate in candidates:
    if feasibility(u, candidate):
        do_stuff()
Immediately we have drastically reduced the number of checks performed and calls to the function, as now we only apply them to nodes with the same degree and label as $u$. This is a simplification for demonstration purposes; the actual implementation performs more checks and shrinks the candidate set further.
Let’s demonstrate our VF2++ solver on a real graph. We are going to use the graph from the Graph Isomorphism Wikipedia page.
Let’s start by constructing the graphs from the image above. We’ll call the graph on the left G and the graph on the right H:
import networkx as nx
G = nx.Graph(
    [
        ("a", "g"),
        ("a", "h"),
        ("a", "i"),
        ("g", "b"),
        ("g", "c"),
        ("b", "h"),
        ("b", "j"),
        ("h", "d"),
        ("c", "i"),
        ("c", "j"),
        ("i", "d"),
        ("d", "j"),
    ]
)
H = nx.Graph(
    [
        (1, 2),
        (1, 5),
        (1, 4),
        (2, 6),
        (2, 3),
        (3, 7),
        (3, 4),
        (4, 8),
        (5, 6),
        (5, 8),
        (6, 7),
        (7, 8),
    ]
)
res = nx.vf2pp_is_isomorphic(G, H, node_label=None)
# res: True
res = nx.vf2pp_isomorphism(G, H, node_label=None)
# res: {1: "a", 2: "h", 3: "d", 4: "i", 5: "g", 6: "b", 7: "j", 8: "c"}
res = list(nx.vf2pp_all_isomorphisms(G, H, node_label=None))
# res: all isomorphic mappings (there might be more than one). This function is a generator.
# Assign some label to each node
G_node_attributes = {
    "a": "blue",
    "g": "green",
    "b": "pink",
    "h": "red",
    "c": "yellow",
    "i": "orange",
    "d": "cyan",
    "j": "purple",
}
nx.set_node_attributes(G, G_node_attributes, name="color")
H_node_attributes = {
1: "blue",
2: "red",
3: "cyan",
4: "orange",
5: "green",
6: "pink",
7: "purple",
8: "yellow",
}
nx.set_node_attributes(H, H_node_attributes, name="color")
res = nx.vf2pp_is_isomorphic(G, H, node_label="color")
# res: True
res = nx.vf2pp_isomorphism(G, H, node_label="color")
# res: {1: "a", 2: "h", 3: "d", 4: "i", 5: "g", 6: "b", 7: "j", 8: "c"}
res = list(nx.vf2pp_all_isomorphisms(G, H, node_label="color"))
# res: [{1: "a", 2: "h", 3: "d", 4: "i", 5: "g", 6: "b", 7: "j", 8: "c"}]
Notice how in the first case, our solver may return a different mapping every time, since the absence of labels means a node can map to more than one other node. For example, node 1 can map to both a and h, since the graph is symmetric.
In the second case though, the existence of a single, unique label per node means there is only one match for each node, so the returned mapping is deterministic. This is easily observed from the output of list(nx.vf2pp_all_isomorphisms), which in the first case returns all possible mappings, while in the latter returns a single, unique isomorphic mapping.
The previous post can be found here; be sure to check it out so you can follow the process step by step. Since then, two more very significant features of the algorithm have been implemented and tested: node pair candidate selection and feasibility checks.
As previously described, in the ISO problem we are basically trying to create a mapping such that every node from the first graph is matched to a node from the second graph. This search for “feasible pairs” can be visualized as a tree, where each node is a candidate pair that we should examine. This becomes much clearer if we take a look at the figure below.
In order to check if the graphs $G_1$ and $G_2$ are isomorphic, we examine every candidate pair of nodes, and if it is feasible, we extend the mapping and go deeper into the tree of pairs. If it’s not feasible, we climb up and follow a different branch, until every node in $G_1$ is mapped to a node in $G_2$. In our example, we start by examining node 0 from G1 together with node 0 of G2. After some checks (details below), we decide that nodes 0 and 0 match, so we go deeper to map the remaining nodes. The next pair is 1-3, which fails the feasibility check, so we have to examine a different branch as shown. The new branch is 1-2, which is feasible, so we continue with the same logic until all the nodes are mapped.
Although in our example we use a random candidate pair of nodes, in the actual implementation we are able to target specific pairs that are more likely to match, thereby boosting the performance of the algorithm. The idea is that, in every step of the algorithm, given a candidate $u\in V_1$, we compute the candidates $v\in V_2$, where $V_1$ and $V_2$ are the nodes of $G_1$ and $G_2$ respectively. Now this is a puzzle that does not require a lot of specific knowledge about graphs or the algorithm itself. Keep up with me, and you will realize it yourself. First, let $M$ be the mapping so far, which includes all the “covered” nodes up to this point. There are three different types of $u$ nodes that we might encounter.
Node $u$ has no neighbors (the degree of $u$ equals zero). It would be redundant to test, as candidates for $u$, nodes from $G_2$ that have more than zero neighbors. Thus, we eliminate most of the possible candidates and keep those that have the same degree as $u$ (in this case, zero). Pretty easy, right?
Node $u$ has neighbors, but none of them belong to the mapping. This situation is illustrated in the following figure.
The grey lines indicate that the nodes of $G_1$ (left 1, 2) are mapped to the nodes of $G_2$ (right 1, 2); they are basically the mapping. Again, given $u$, we observe that candidates $v$ of $u$ should also have no neighbors in the mapping, and should have the same degree as $u$ (as in the figure). Notice how, if we add a neighbor to $v$, or place one of its neighbors inside the mapping, there is no point examining the pair $u$-$v$ for matching.
Node $u$ has neighbors and some of them belong to the mapping. This scenario is also depicted in the below figure.
In this case, to obtain the candidates for $u$, we must look into the neighborhoods of the nodes from $G_2$ that the covered neighbors of $u$ map to. In our example, $u$ has one covered neighbor (1), and 1 from $G_1$ maps to 1 from $G_2$, which has $v$ as a neighbor. Also, for $v$ to be considered as a candidate, it must have the same degree as $u$. Notice how every node that is not in the neighborhood of 1 (in $G_2$) cannot be matched to $u$ without breaking the isomorphism.
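Putting the three cases together, candidate selection can be sketched as a single helper. This is a hypothetical, simplified version; the actual implementation also prunes by labels and performs extra checks:

```python
import networkx as nx

def find_candidates(u, G1, G2, mapping, reverse_mapping):
    """Sketch of the three candidate-selection cases (hypothetical helper)."""
    covered_neighbors = [nbr for nbr in G1[u] if nbr in mapping]
    if not covered_neighbors:
        # Cases 1 and 2: u has no covered neighbors, so any candidate v must be
        # uncovered, have the same degree as u, and have no covered neighbors.
        return {
            v
            for v in G2
            if v not in reverse_mapping
            and G2.degree[v] == G1.degree[u]
            and not any(nbr in reverse_mapping for nbr in G2[v])
        }
    # Case 3: v must lie in the neighborhood of every image of u's covered
    # neighbors, be uncovered, and have the same degree as u.
    common = set.intersection(*(set(G2[mapping[nbr]]) for nbr in covered_neighbors))
    return {v for v in common if v not in reverse_mapping and G2.degree[v] == G1.degree[u]}
```

For instance, on two 3-node paths with only the middle nodes mapped, the endpoints of the second path are exactly the candidates returned for an endpoint of the first path.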
Let’s assume that given a node $u$, we obtained its candidate $v$ following the process described in the previous section. At this point, the Feasibility Rules are going to determine whether the mapping should be extended by the pair $u-v$ or if we should try another candidate. The feasibility of a pair $u-v$ is examined by consistency and cutting checks.
At first, I am going to present the mathematical expression of the consistency check. It may seem complicated, but it’s going to be made simple by using a visual illustration. Using the notation $nbh_i(u)$ for the neighborhood of $u$ in graph $G_i$, the consistency rule is:
$$\forall\tilde{v}\in nbh_2(v)\cap M:(u, M^{-1}(\tilde{v}))\in E_1 \wedge \forall\tilde{u}\in nbh_1(u)\cap M:(v, M(\tilde{u}))\in E_2$$
We are going to use the following simple figure to demystify the above equation.
The mapping is depicted as grey lines between the nodes that are already mapped, meaning that 1 maps to A and 2 to B. What is implied by the equation is that, for two nodes $u$ and $v$ to pass the consistency check, the neighbors of $u$ that belong in the mapping, should map to neighbors of $v$ (and backwards). This could be checked by code as simple as:
for neighbor in G1[u]:
    if neighbor in mapping:
        if mapping[neighbor] not in G2[v]:
            return False
        elif G1.number_of_edges(u, neighbor) != G2.number_of_edges(
            v, mapping[neighbor]
        ):
            return False
where the final lines also check the number of edges between node $u$ and its neighbor $\tilde{u}$, which should equal the number of edges between $v$ and the node that $\tilde{u}$ maps to. At a very high level, we could describe this check as a 1-look-ahead check.
We have previously discussed what $T_i$ and $\tilde{T_i}$ represent (see previous post). These sets are used in the cutting checks as follows: the number of neighbors of $u$ that belong to $T_1$, should be equal to the number of neighbors of $v$ that belong to $T_2$. Take a moment to observe the below figure.
Once again, node 1 maps to A and 2 to B. The red nodes (4,5,6) are basically $T_1$ and the yellow ones (C,D,E) are $T_2$. Notice that in order for $u-v$ to be feasible, $u$ should have the same number of neighbors, inside $T_1$, as $v$ in $T_2$. In every other case, the two graphs are not isomorphic, which can be verified visually. For this example, both nodes have 2 of their neighbors (4,6 and C,E) in $T_1$ and $T_2$ respectively. Careful! If we delete the $V-E$ edge and connect $V$ to $D$, the cutting condition is still satisfied. However, the feasibility is going to fail, by the consistency checks of the previous section. A simple code to apply the cutting check would be:
if len(T1.intersection(G1[u])) != len(T2.intersection(G2[v])) or len(
    T1out.intersection(G1[u])
) != len(T2out.intersection(G2[v])):
    return False
where T1out and T2out correspond to $\tilde{T_1}$ and $\tilde{T_2}$ respectively. And yes, we have to check those as well; we skipped them in the above explanation for simplicity.
At this point, we have successfully implemented and tested all the major components of the VF2++ algorithm. This means that, in the next post, we are hopefully going to discuss our first full and functional implementation of VF2++.
This post includes all the major updates since the last post about VF2++. Each section is dedicated to a different sub-problem and presents the progress on it so far. General progress, milestones and related issues can be found here.
The node ordering is one major modification that VF2++ proposes. Basically, the nodes are examined in an order that makes the matching faster by first examining nodes that are more likely to match. This part of the algorithm has been implemented, however there is an issue. The existence of detached nodes (not connected to the rest of the graph) causes the code to crash. Fixing this bug will be a top priority during the next steps. The ordering implementation is described by the following pseudocode.
Matching Order
- Set $M = \varnothing$.
- Set $\bar{V_1}$: nodes not yet in the order.
- while $\bar{V_1}$ not empty do
- $rareNodes=[$nodes from $\bar{V_1}$ with the rarest labels$]$
- $maxNode=argmax_{degree}(rareNodes)$
- $T=$ BFS tree with $maxNode$ as root
- for every level $d$ in $T$ do
- $V_d=[$nodes of the $d^{th}$ level$]$
- $\bar{V_1} = \bar{V_1} \setminus V_d$
- $ProcessLevel(V_d)$
- Output $M$: the matching order of the nodes.
Process Level
- while $V_d$ not empty do
- $S=[$nodes from $V_d$ with the most neighbors in $M]$
- $maxNodes=argmax_{degree}(S)$
- $m=$ node from $maxNodes$ with the rarest label
- $V_d = V_d \setminus \{m\}$
- Append $m$ to $M$
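The two pseudocode routines above can be sketched in Python roughly as follows (hypothetical helper name; label-based tie-breaking inside levels is omitted for brevity):

```python
import networkx as nx

def matching_order(G1, labels):
    # Hypothetical sketch of the ordering pseudocode above.
    order = []
    unprocessed = set(G1.nodes())
    # How rare each label is (how many nodes carry it).
    label_count = {}
    for n in G1:
        label_count[labels[n]] = label_count.get(labels[n], 0) + 1
    while unprocessed:
        # Root: a rarest-label node, breaking ties by maximum degree.
        rarity = min(label_count[labels[n]] for n in unprocessed)
        rare = [n for n in unprocessed if label_count[labels[n]] == rarity]
        root = max(rare, key=G1.degree)
        # Process the BFS tree of the root level by level.
        for layer in nx.bfs_layers(G1, root):
            layer = [n for n in layer if n in unprocessed]
            while layer:
                # ProcessLevel: most already-ordered neighbors, then highest degree.
                m = max(layer, key=lambda n: (sum(nbr in order for nbr in G1[n]), G1.degree[n]))
                layer.remove(m)
                unprocessed.discard(m)
                order.append(m)
    return order

order = matching_order(nx.path_graph(5), {n: "x" for n in range(5)})
# Every node appears exactly once in the order.
```

Restarting the outer loop with a fresh root also covers detached nodes and components, the crash-inducing case noted above.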
According to the VF2++ paper notation:
$$T_1=\{u\in V_1 \setminus m: \exists \tilde{u} \in m: (u,\tilde{u})\in E_1\}$$
where $V_1$ and $E_1$ contain all the nodes and edges of the first graph respectively, and $m$ is a dictionary mapping every node of the first graph to a node of the second graph. Now if we interpret the above equation, we conclude that $T_1$ contains the uncovered neighbors of covered nodes. In simple terms, it includes all the nodes that do not belong to the mapping $m$ yet, but are neighbors of nodes that are in the mapping. In addition,
$$\tilde{T_1}=(V_1 \setminus m) \setminus T_1$$
The following figure is meant to provide some visual explanation of what exactly $T_i$ is.
The blue nodes 1,2,3 are nodes from graph G1 and the green nodes A,B,C belong to the graph G2. The grey lines connecting those two indicate that in this current state, node 1 is mapped to node A, node 2 is mapped to node B, etc. The yellow edges are just the neighbors of the covered (mapped) nodes. Here, $T_1$ contains the red nodes (4,5,6) which are neighbors of the covered nodes 1,2,3, and $T_2$ contains the grey ones (D,E,F). None of the nodes depicted would be included in $\tilde{T_1}$ or $\tilde{T_2}$. The latter sets would contain all the remaining nodes from the two graphs.
Regarding the computation of these sets, it’s not practical to use the brute force method and iterate over all nodes in every step of the algorithm to find the desired nodes and compute $T_i$ and $\tilde{T_i}$. We use the following observations to implement an incremental computation of $T_i$ and $\tilde{T_i}$ and make VF2++ more efficient.
We can conclude that in every step, $T_i$ and $\tilde{T_i}$ can be incrementally updated. This method avoids a ton of redundant operations and results in significant performance improvement.
The above graph shows the difference in performance between exhaustive brute force and incrementally updating $T_i$ and $\tilde{T_i}$. The graph used to obtain these measurements was a $G(n, p)$ random graph with edge probability $p = 0.7$. It can clearly be seen that the execution time of the brute force method increases much more rapidly with the number of nodes/edges than that of the incremental update method, as expected. The brute force method looks like this:
def compute_Ti(G1, G2, mapping, reverse_mapping):
    T1 = {nbr for node in mapping for nbr in G1[node] if nbr not in mapping}
    T2 = {
        nbr
        for node in reverse_mapping
        for nbr in G2[node]
        if nbr not in reverse_mapping
    }
    T1_out = {n1 for n1 in G1.nodes() if n1 not in mapping and n1 not in T1}
    T2_out = {n2 for n2 in G2.nodes() if n2 not in reverse_mapping and n2 not in T2}
    return T1, T2, T1_out, T2_out
If we assume that G1 and G2 have the same number of nodes (N), the average number of nodes in the mapping is $N_m$, and the average node degree of the graphs is $D$, then the time complexity of this function is:
$$O(2N_mD + 2N) = O(N_mD + N)$$
in which we have excluded the lookup times in $T_i$, $mapping$ and $reverse\_mapping$ as they are all $O(1)$. Our incremental method works like this:
def update_Tinout(
    G1, G2, T1, T2, T1_out, T2_out, new_node1, new_node2, mapping, reverse_mapping
):
    # This function should be called right after feasibility is established and node1 is mapped to node2.
    uncovered_neighbors_G1 = {nbr for nbr in G1[new_node1] if nbr not in mapping}
    uncovered_neighbors_G2 = {
        nbr for nbr in G2[new_node2] if nbr not in reverse_mapping
    }

    # Add the uncovered neighbors of node1 and node2 to T1 and T2 respectively
    T1.discard(new_node1)
    T2.discard(new_node2)
    T1 = T1.union(uncovered_neighbors_G1)
    T2 = T2.union(uncovered_neighbors_G2)

    # Remove the newly mapped nodes, and their uncovered neighbors, from T1_out and T2_out
    T1_out.discard(new_node1)
    T2_out.discard(new_node2)
    T1_out = T1_out - uncovered_neighbors_G1
    T2_out = T2_out - uncovered_neighbors_G2
    return T1, T2, T1_out, T2_out
which, based on the previous notation, is:
$$O(2D + 2(D + M_{T_1}) + 2D) = O(D + M_{T_1})$$
where $M_{T_1}$ is the expected (average) number of elements in $T_1$.
Certainly, the complexity is much better in this case, as $D$ and $M_{T_1}$ are significantly smaller than $N_mD$ and $N$.
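As a sanity check that the incremental bookkeeping is equivalent to recomputing from scratch, the sketch below (with simplified, self-contained re-implementations of both routines) verifies that one incremental update agrees with a full recomputation:

```python
import networkx as nx

# Simplified re-implementations of the two routines above, used here only to
# check that one incremental update matches a full recomputation.
def compute_Ti(G1, G2, mapping, reverse_mapping):
    T1 = {nbr for node in mapping for nbr in G1[node] if nbr not in mapping}
    T2 = {nbr for node in reverse_mapping for nbr in G2[node] if nbr not in reverse_mapping}
    T1_out = {n for n in G1 if n not in mapping and n not in T1}
    T2_out = {n for n in G2 if n not in reverse_mapping and n not in T2}
    return T1, T2, T1_out, T2_out

def update_Tinout(G1, G2, T1, T2, T1_out, T2_out, n1, n2, mapping, reverse_mapping):
    # Called right after n1 -> n2 has been added to the mapping.
    un1 = {nbr for nbr in G1[n1] if nbr not in mapping}
    un2 = {nbr for nbr in G2[n2] if nbr not in reverse_mapping}
    T1.discard(n1)
    T2.discard(n2)
    return T1 | un1, T2 | un2, (T1_out - {n1}) - un1, (T2_out - {n2}) - un2

G1, G2 = nx.cycle_graph(6), nx.cycle_graph(6)
mapping, reverse_mapping = {0: 0}, {0: 0}
state = compute_Ti(G1, G2, mapping, reverse_mapping)
# Extend the mapping by 1 -> 1 and update incrementally...
mapping[1], reverse_mapping[1] = 1, 1
state = update_Tinout(G1, G2, *state, 1, 1, mapping, reverse_mapping)
# ...which matches recomputing everything from scratch.
assert state == compute_Ti(G1, G2, mapping, reverse_mapping)
```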
In this post we investigated how node ordering works at a high level, and also how we are able to calculate some important parameters so that the space and time complexity are reduced. The next post will continue by examining two more significant components of the VF2++ algorithm: the candidate node pair selection and the cutting/consistency rules that decide when the mapping should or shouldn’t be extended. Stay tuned!
I got accepted as a GSoC contributor, and I am so excited to spend the summer working on such an incredibly interesting project. The mentors are very welcoming, communicative, fun to be around, and I really look forward to collaborating with them. My application for GSoC 2022 can be found here.
My name is Konstantinos Petridis, and I am an Electrical Engineering student at the Aristotle University of Thessaloniki. I am currently in my 5th year of studies, with a Major in Electronics & Computer Science. Although a wide range of scientific fields fascinate me, I have a strong passion for Computer Science, Physics and Space. I love to study and learn new things, and I don’t hesitate to express my curiosity by asking a bunch of questions, to the point of being annoying. You can find me on GitHub @kpetridis24.
The project I’ll be working on is the implementation of VF2++, a state-of-the-art algorithm for the Graph Isomorphism problem, which lies in the complexity class NP. The algorithm behaves like a regular but more complex form of DFS, performed on the possible solutions rather than the graph nodes. In order to verify or reject the isomorphism between two graphs, we examine every possible candidate pair of nodes (one from the first graph and one from the second) and check whether going deeper into the DFS tree is feasible using specific rules. If feasibility is established, the DFS tree is expanded, investigating deeper pairs. When a pair is not feasible, we go up the tree and follow a different branch, just like in a regular DFS. More details about the algorithm can be found here.
The major reasons I chose this project emanate from both my love for Graph Theory and the fascinating nature of this particular project. The algorithm itself is so recent that NetworkX is possibly going to hold one of the first implementations of it. This might become a reference that helps other organisations further develop and optimize future implementations of the algorithm. Regarding my personal gain, I will become more familiar with open source communities and their philosophy, collaborate with highly skilled individuals, and cultivate a significant amount of experience in researching, working as a team, getting feedback and help when needed, and contributing to an actual scientific library.
Welcome! This post is not going to discuss technical implementation details or theoretical work for my Google Summer of Code project, but rather serves as a summary and recap of the work that I did this summer.
I am very happy with the work I was able to accomplish and believe that I successfully completed my project.
My project was titled NetworkX: Implementing the Asadpour Asymmetric Traveling Salesman Problem Algorithm. The updated abstract given on the Summer of Code project page is below.
This project aims to implement the asymmetric traveling salesman problem algorithm developed by Asadpour et al., originally published in 2010 and revised in 2017. The project is broken into multiple methods, each of which has a set timetable during the project. We start by solving the Held-Karp relaxation using the Ascent method from the original paper by Held and Karp. Assuming the result is fractional, we continue into the Asadpour algorithm (integral solutions are optimal by definition and immediately returned). We approximate the distribution of spanning trees on the undirected support of the Held Karp solution using a maximum entropy rounding method to construct a distribution of trees. Roughly speaking, the probability of sampling any given tree is proportional to the product of all its edge lambda values. We sample 2 log n trees from the distribution using an iterative approach developed by V. G. Kulkarni and choose the tree with the smallest cost after returning direction to the arcs. Finally, the minimum tree is augmented using a minimum network flow algorithm and shortcut down to an O(log n / log log n) approximation of the minimum Hamiltonian cycle.
My proposal PDF for the 2021 Summer of Code can be found here.
All of my changes and additions to NetworkX are part of this pull request and can also be found on this branch in my fork of the GitHub repository, but I will be discussing the changes and commits in more detail later.
Also note that the commits I listed in each section form an incomplete list, only hitting on focused commits to that function or its tests. For the complete list, please reference the pull request or the bothTSP GitHub branch on my fork of NetworkX.
My contributions to NetworkX this summer consist predominantly of the following functions and classes, each of which I will discuss in their own sections of this blog post. Functions and classes which are front-facing are also linked to the developer documentation for NetworkX in the list below and for their section headers.
SpanningTreeIterator
ArborescenceIterator
held_karp_ascent
spanning_tree_distribution
sample_spanning_tree
asadpour_atsp
These functions have also been unit tested, and those tests will be integrated into NetworkX once the pull request is merged.
The following papers are where all of these algorithms originate from, and they were of course instrumental in the completion of this project.
[1] A. Asadpour, M. X. Goemans, A. Madry, S. O. Gharan, and A. Saberi, An O (log n / log log n)-approximation algorithm for the asymmetric traveling salesman problem, SODA ’10, Society for Industrial and Applied Mathematics, 2010, p. 379 - 389 https://dl.acm.org/doi/abs/10.5555/1873601.1873633.
[2] J. Edmonds, Optimum Branchings, Journal of Research of the National Bureau of Standards, 1967, Vol. 71B, p.233-240, https://archive.org/details/jresv71Bn4p233
[3] M. Held, R.M. Karp, The traveling-salesman problem and minimum spanning trees. Operations research, 1970-11-01, Vol.18 (6), p.1138-1162. https://www.jstor.org/stable/169411
[4] G.K. Janssens, K. Sörensen, An algorithm to generate all spanning trees in order of increasing cost, Pesquisa Operacional, 2005-08, Vol. 25 (2), p. 219-229, https://www.scielo.br/j/pope/a/XHswBwRwJyrfL88dmMwYNWp/?lang=en
[5] V. G. Kulkarni, Generating random combinatorial objects, Journal of algorithms, 11 (1990), p. 185–207.
SpanningTreeIterator
The SpanningTreeIterator was the first contribution I completed as part of my GSoC project. This class takes a graph and returns every spanning tree in it in order of increasing cost, which makes it a direct implementation of [4]. The interesting thing about this iterator is that it is not used as part of the Asadpour algorithm, but served as an intermediate step so that I could develop the ArborescenceIterator, which is required for the Held Karp relaxation.
It works by partitioning the edges of the graph as either included, excluded or open and then finding the minimum spanning tree which respects the partition data on the graph edges. In order to get this to work, I created a new minimum spanning tree function called kruskal_mst_edges_partition which does exactly that. To prevent redundancy, all Kruskal minimum spanning trees now use this function (the original kruskal_mst_edges function is now just a wrapper for the partitioned version). Once a spanning tree is returned from the iterator, the partition data for that tree is split so that the union of the newly generated partitions is the set of all spanning trees in the partition except the returned minimum spanning tree.
As I mentioned earlier, the SpanningTreeIterator is not directly used in my GSoC project, but I still decided to implement it to understand the partition process and be able to directly use the examples from [4] before moving on to the ArborescenceIterator. This class, I’m sure, will be useful to other users of NetworkX, and it provided a strong foundation to build the ArborescenceIterator off of.
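For reference, a minimal usage sketch of the iterator, assuming the merged NetworkX API where it is exposed as nx.SpanningTreeIterator:

```python
import networkx as nx

# A triangle with distinct edge weights has exactly three spanning trees.
G = nx.Graph()
G.add_weighted_edges_from([(0, 1, 1), (1, 2, 2), (0, 2, 3)])

costs = [
    sum(d["weight"] for _, _, d in T.edges(data=True))
    for T in nx.SpanningTreeIterator(G)
]
# Trees arrive in order of non-decreasing total weight: costs == [3, 4, 5]
```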
Blog Posts about SpanningTreeIterator
5 Jun 2021 - Finding All Minimum Arborescences
10 Jun 2021 - Implementing The Iterators
Commits about SpanningTreeIterator
Now, at the beginning of this project, my commit messages were not very good… I had some problems with merge conflicts after I accidentally committed to the wrong branch, and this was the first time I’d used a pre-commit hook.
I have not changed the commit messages here, so that you may be amused by my thoroughly unhelpful messages, but I did annotate them to provide a more accurate description of each commit.
Testing - Rewrote Kruskal’s algorithm to respect partitions and tested that while stubbing the iterators in a separate file
I’m not entirly sure how the commit hook works… - Added test cases and finalized implementation of Spanning Tree Iterator in the incorrect file
Moved iterators into the correct files to maintain proper codebase visibility - Realized that the iterators need to be in mst.py and branchings.py respectively to keep private functions hidden
Documentation update for the iterators - No explanation needed
Update mst.py to accept suggestion - Accepted doc string edit from code review
Review suggestions from dshult - Implemented code review suggestions from one of my mentors
Cleaned code, merged functions if possible and opened partition functionality to all
Implement suggestions from boothby
ArborescenceIterator
The ArborescenceIterator is a modified version of the algorithm discussed in [4], so that it iterates over spanning arborescences. This iterator was a bit more difficult to implement, but that is due to how the minimum spanning arborescence algorithm is structured, rather than the partition scheme not being applicable to directed graphs. In fact, the partition scheme is identical to the undirected SpanningTreeIterator, but Edmonds’ algorithm is more complex and there are several edge cases about how nodes can be contracted and what it means to respect the partition data. In order to fully understand the NetworkX implementation, I had to read the original Edmonds paper, [2].
The most notable change was that when the iterator writes the next partition onto the edges of the graph just before Edmonds’ algorithm is executed, if any incoming edge is marked as included, all of the others are marked as excluded. This is an implicit part of the SpanningTreeIterator, but needed to be explicitly done here so that if the vertex in question was merged during Edmonds’ algorithm, we could not choose two of the incoming edges to the same vertex once the merging was reversed.
As a final note, the ArborescenceIterator has one more initial parameter than the SpanningTreeIterator, which is the ability to give it an initial partition and iterate over all spanning arborescences with cost greater than that of the initial partition. This was used as part of the branch and bound method, but is no longer a part of my Asadpour algorithm implementation.
Blog Posts about ArborescenceIterator
5 Jun 2021 - Finding All Minimum Arborescences
10 Jun 2021 - Implementing The Iterators
Commits about ArborescenceIterator
My commits listed here are still annotated and much of the work was done at the same time.
Testing - Rewrote Kruskal’s algorithm to respect partitions and tested that while stubbing the iterators in a separate file
Moved iterators into the correct files to maintain proper codebase visibility - Realized that the iterators need to be in mst.py and branchings.py respectively to keep private functions hidden
Including Black reformat - Modified Edmonds’ algorithm to respect partitions
Modified the ArborescenceIterator to accept init partition - No explanation needed
Documentation update for the iterators - No explanation needed
Update branchings.py accept doc string edit - No explanation needed
Review suggestions from dshult - Implemented code review suggestions from one of my mentors
Cleaned code, merged functions if possible and opened partition functionality to all
Implemented review suggestions from rossbar
Implement suggestions from boothby
held_karp_ascent
The Held Karp relaxation was the most difficult part of my GSoC project and the part that I was the most worried about going into this May. My plans for how to solve the relaxation evolved over the course of the summer as well, finally culminating in held_karp_ascent.
In my GSoC proposal, I discussed using scipy to solve the relaxation, but the Held Karp relaxation is a semi-infinite linear program (finite, but with exponentially many constraints), so I would quickly surpass the capabilities of virtually any computer that the code would be run on.
Fortunately I realized that while I was still writing my proposal and was able to change it.
Next, I wanted to use the ellipsoid algorithm because that is the suggested method in the Asadpour paper [1].
As it happens, the ellipsoid algorithm is not implemented in numpy or scipy, and after discussing the practicality of implementing the algorithm as part of this project, we decided that a robust ellipsoid solver was a GSoC project unto itself and beyond the scope of the Asadpour algorithm.
Another method was needed, and was found.
In the original paper by Held and Karp [3], they present three different algorithms for solving the relaxation, the column-generation technique, the ascent method and the branch and bound method.
After reading the paper and comparing all of the methods, I decided that the branch and bound method was the best in terms of performance and wanted to implement that one.
The branch and bound method is a modified version of the ascent method, so I started by implementing the ascent method, then the branch and bound around it. This had the extra benefit of allowing me to compare the two and determine which is actually better.
Implementing the ascent method proved difficult. There were a number of subtle bugs in finding the minimum 1-arborescences and in finding the value of epsilon, caused by not realizing all of the valid edge substitutions in the graph. More information about these problems can be found in my post titled Understanding the Ascent Method. Even after this the ascent method was not working properly, but I decided to move on to the branch and bound method in hopes of learning more about the process so that I could fix the ascent method.
That is exactly what happened! While debugging the branch and bound method, I realized that my function for finding the set of minimum 1-arborescences would stop searching too soon and possibly miss the minimum 1-arborescences. Once I fixed that bug, both the ascent as well as the branch and bound method started to produce the correct results.
But which one would be used in the final project?
Well, that came down to which output was more compatible with the rest of the Asadpour algorithm. The ascent method could find a fractional solution, where the edges are not totally in or out of the solution, while the branch and bound method would take the time to ensure that the solution was integral. As it happens, the Asadpour algorithm expects a fractional solution to the Held Karp relaxation, so in the end the ascent method won out and the branch and bound method was removed from the project.
All of this is detailed in the (many) blog posts I wrote on this topic, which are listed below.
Blog posts about the Held Karp relaxation
My first two posts were about the scipy
solution and the ellipsoid algorithm.
11 Apr 2021 - Held Karp Relaxation
8 May 2021 - Held Karp Separation Oracle
This next post discusses the merits of each algorithm presented in the original Held and Karp paper [3].
3 Jun 2021 - A Closer Look At Held Karp
And finally, the last three Held Karp related posts are about the debugging of the algorithms I did implement.
22 Jun 2021 - Understanding The Ascent Method
28 Jun 2021 - Implementing The Held Karp Relaxation
7 Jul 2021 - Finalizing Held Karp
Commits about the Held Karp relaxation
Annotations only provided if needed.
Grabbing black reformats - Initial Ascent method implementation
Working on debugging ascent method plus black reformats
Ascent method terminating, but at non-optimal solution
minor edits - Removed some debug statements
Fixed termination condition, still given non-optimal result
Minor bugfix, still non-optimal result - Ensured reported answer is the cycle if multiple options
Fixed subtle bug in find_epsilon() - Fixed the improper substitute detection bug
Cleaned code and tried something which didn’t work
Black formats - Initial branch and bound implementation
Branch and bound returning optimal solution
black formatting changes - Split ascent and branch and bound methods into different functions
Performance tweaks and testing fractional answers
Asadpour output for ascent method
Removed branch and bound method. One unit test misbehaving
Added asymmetric fractional test for the ascent method
Removed print statements and tweaked final test to be more asymmetric
Changed HK to only report on the support of the answer
spanning_tree_distribution
Once we have the support of the Held Karp relaxation, we calculate edge weights \(\gamma\) for the support so that the probability of any tree being sampled is proportional to the product of \(e^{\gamma_e}\) across its edges. This is called a maximum entropy distribution, and the procedure for computing it is given in the Asadpour paper [1] on page 386.
- Set \(\gamma = \vec{0}\).
- While there exists an edge \(e\) with \(q_e(\gamma) > (1 + \epsilon)z_e\):
- Compute \(\delta\) such that if we define \(\gamma'\) as \(\gamma_e' = \gamma_e - \delta\) and \(\gamma_f' = \gamma_f\) for all \(f \in E \backslash \{e\}\), then \(q_e(\gamma') = (1 + \epsilon / 2)z_e\)
- Set \(\gamma \leftarrow \gamma’\)
- Output \(\tilde{\gamma} := \gamma\).
Where \(q_e(\gamma)\) is the probability that a given edge \(e\) will be in a spanning tree chosen with probability proportional to \(\exp(\gamma(T))\). Solving \(q_e(\gamma') = (1 + \epsilon/2)z_e\) for \(\delta\) gives
\[ \delta = \ln\left(\frac{q_e(\gamma)\left(1-(1+\epsilon/2)z_e\right)}{\left(1-q_e(\gamma)\right)(1+\epsilon/2)z_e}\right) \]
so the Asadpour paper did almost all of the heavy lifting for this function. However, they were not very clear on how to calculate \(q_e(\gamma)\), other than that Kirchhoff’s Tree Matrix Theorem can be used.
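The update loop from the paper can be sketched in Python (hedged: this mirrors the procedure, not the NetworkX implementation; `q` is a stand-in callable for \(q_e(\gamma)\)):

```python
import math

def compute_gamma(edges, z, q, epsilon=0.2):
    """Sketch of the gamma-update loop; q(e, gamma) is a stand-in
    returning the marginal q_e(gamma), z holds the Held Karp marginals."""
    gamma = {e: 0.0 for e in edges}
    while True:
        # find an edge whose marginal exceeds (1 + epsilon) z_e
        e = next((f for f in edges if q(f, gamma) > (1 + epsilon) * z[f]), None)
        if e is None:
            return gamma
        q_e, z_e = q(e, gamma), (1 + epsilon / 2) * z[e]
        # delta is chosen so that q_e(gamma') = (1 + epsilon / 2) z_e
        delta = math.log(q_e * (1 - z_e) / ((1 - q_e) * z_e))
        gamma[e] -= delta

# Toy q: two edges "a" and "b", exactly one of which is in each "tree",
# so the marginal is just each edge's share of exp(gamma).
def q(e, gamma):
    la, lb = math.exp(gamma["a"]), math.exp(gamma["b"])
    return (la if e == "a" else lb) / (la + lb)

gamma = compute_gamma(["a", "b"], {"a": 0.3, "b": 0.7}, q)
```

After the loop, each marginal sits at or below \((1+\epsilon)z_e\); any edge that was updated lands exactly on \((1+\epsilon/2)z_e\).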
My original method for calculating \(q_e(\gamma)\) was to apply Kirchhoff’s Theorem to the original Laplacian matrix and to the Laplacian produced once the edge \(e\) is contracted from the graph. Testing quickly showed that once the edge is contracted from the graph, its weight cannot affect the value of the Laplacian, and thus after subtracting \(\delta\) the probability of that edge would increase rather than decrease. Multiplying my original value of \(q_e(\gamma)\) by \(\exp(\gamma_e)\) proved to be the solution here, for reasons extensively discussed in my blog post The Entropy Distribution, in particular the “Update! (28 July 2021)” section.
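A concrete sketch of this contraction-based computation (hedged: this mirrors the idea, not the NetworkX code, and stores the multiplicative weights \(\lambda_e = \exp(\gamma_e)\) under a hypothetical "lambda" key):

```python
import numpy as np
import networkx as nx

def tree_total(G, weight="lambda"):
    """Weighted spanning-tree count: any cofactor of the weighted
    Laplacian, by Kirchhoff's Tree Matrix Theorem."""
    L = nx.laplacian_matrix(G, weight=weight).toarray().astype(float)
    return float(np.linalg.det(L[1:, 1:]))

def q_edge(G, u, v, weight="lambda"):
    """q_e(gamma): marginal probability of edge (u, v) under the
    distribution with tree probability proportional to prod(lambda_e)."""
    lam = G[u][v][0][weight]  # MultiGraph access; single edge key assumed
    Ge = nx.contracted_nodes(G, u, v, self_loops=False)
    # trees containing e correspond to trees of G/e, each picking up
    # a factor of lambda_e (the correction discussed above)
    return lam * tree_total(Ge, weight) / tree_total(G, weight)

# triangle with gamma = 0 (all lambda = 1): every edge has marginal 2/3
T = nx.MultiGraph()
T.add_edges_from([(0, 1), (1, 2), (0, 2)], **{"lambda": 1.0})
assert abs(q_edge(T, 0, 1) - 2 / 3) < 1e-9
```

Without the \(\lambda_e\) factor, the ratio of the two determinants would be blind to the contracted edge's own weight, which is exactly the bug described above.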
Blog posts about spanning_tree_distribution
13 Jul 2021 - Entropy Distribution Setup
20 Jul 2021 - The Entropy Distribution
Commits about spanning_tree_distribution
Draft of spanning_tree_distribution
Changed HK to only report on the support of the answer - Needing to limit \(\gamma\) to only the support of the Held Karp relaxation is what caused this change
Fixed contraction bug by changing to MultiGraph. Problem with prob > 1 - Because the probability is only proportional to the product of the edge weights, this was not actually a problem
Black reformats - Rewrote the test and cleaned the code
Fixed pypi test error - The pypi tests do not have numpy or scipy and I forgot to flag the test to be skipped if they are not available
Further testing of dist fix - Fixed function to multiply \(q_e(\gamma)\) by \(\exp(\gamma_e)\) and implemented exception if \(\delta\) ever misbehaves
Can sample spanning trees - Streamlined finding \(q_e(\gamma)\) using new helper function
Review suggestions from dshult - Implemented code review suggestions from one of my mentors
Implement suggestions from boothby
sample_spanning_tree
What good is a spanning tree distribution if we can’t sample from it?
While the Asadpour paper [1] provides a rough outline of the sampling process, the bulk of their methodology comes from the Kulkarni paper, Generating random combinatorial objects [5]. That paper has a much more detailed explanation and even includes this pseudo code, from page 202.
\(U = \emptyset,\) \(V = E\)
Do \(i = 1\) to \(N\);
\(\qquad\)Let \(a = n(G(U, V))\)
\(\qquad\qquad a' = n(G(U \cup \{i\}, V))\)
\(\qquad\)Generate \(Z \sim U[0, 1]\)
\(\qquad\)If \(Z \leq \alpha_i \times \left(a' / a\right)\)
\(\qquad\qquad\)then \(U = U \cup \{i\}\),
\(\qquad\qquad\)else \(V = V - \{i\}\)
\(\qquad\)end.
Stop. \(U\) is the required spanning tree.
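A direct, hedged transcription of this pseudo code for the unweighted case (all \(\alpha_i = 1\)) might look like the following; `n_trees` is a small helper computing \(n(G(U, V))\) via Kirchhoff's theorem, and none of this is the final NetworkX implementation:

```python
import random

import networkx as nx
import numpy as np

def n_trees(nodes, edges, forced):
    """n(G(U, V)): the number of spanning trees using only `edges` that
    contain every edge in `forced`. The forced edges are contracted with
    a tiny merge-find, then Kirchhoff's theorem counts trees of the
    shrunken multigraph."""
    parent = {n: n for n in nodes}
    def find(n):
        while parent[n] != n:
            n = parent[n]
        return n
    for u, v in forced:
        ru, rv = find(u), find(v)
        if ru == rv:
            return 0.0  # forced edges contain a cycle: no such tree
        parent[rv] = ru
    H = nx.MultiGraph()
    H.add_nodes_from({find(n) for n in nodes})
    # drop self loops created by contraction, as directed by Kulkarni
    H.add_edges_from((find(u), find(v)) for u, v in edges if find(u) != find(v))
    if H.number_of_nodes() <= 1:
        return 1.0
    L = nx.laplacian_matrix(H).toarray().astype(float)
    return float(np.linalg.det(L[1:, 1:]))  # any cofactor of L

def sample_tree(G, rng):
    """Kulkarni's procedure for a uniform spanning tree (alpha_i = 1)."""
    U, V = [], list(G.edges())
    for e in list(G.edges()):
        a = n_trees(G.nodes, V, U)
        a1 = n_trees(G.nodes, V, U + [e])
        if a1 > 0 and rng.random() <= a1 / a:
            U.append(e)    # edge e is in the tree
        else:
            V.remove(e)    # edge e is excluded from now on
    return U

tree = sample_tree(nx.cycle_graph(3), random.Random(56))
assert len(tree) == 2 and nx.is_tree(nx.Graph(tree))
```

Each edge is decided exactly once, and the invariant that \(n(G(U, V)) > 0\) is preserved by both branches, so the loop always ends with \(U\) a spanning tree.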
The only real difficulty here was tracking how the nodes were being contracted.
My first attempt was a mess of if
statements and the like, but switching it to a merge-find data structure (or disjoint set data structure) proved to be a wise decision.
Of course, it is one thing to be able to sample a spanning tree and another entirely to know if the sampling technique matches the expected distribution.
My first iteration of the test for sample_spanning_tree just sampled a large number of trees (50,000) and then printed the percent error from the normalized distribution of spanning trees. With a sample size of 50,000 all of the errors were under 10%, but I still wanted to find a better test.
From my AP statistics class in high school I remembered the \(\chi^2\) (chi-squared) test and realized that it would be perfect here.
scipy
even had the ability to conduct one.
By converting to a chi-squared test I was able to reduce the sample size down to 1200 (near the minimum required sample size to have a valid chi-squared test) and use a proper hypothesis test at the \(\alpha = 0.01\) significance level.
Unfortunately, the test would still fail 1% of the time until I added the @py_random_state decorator to sample_spanning_tree; now the test can pass in a Random object to produce repeatable results.
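The shape of that test can be sketched like so (hedged: a toy three-outcome distribution stands in for the actual spanning tree distribution):

```python
import random

from scipy import stats

# three equally likely outcomes stand in for the spanning trees
expected_probs = [1 / 3, 1 / 3, 1 / 3]
n_samples = 1200  # near the minimum needed for a valid chi-squared test

rng = random.Random(42)  # a seeded Random object gives repeatable samples
counts = [0, 0, 0]
for _ in range(n_samples):
    counts[rng.choices(range(3), weights=expected_probs)[0]] += 1

f_exp = [p * n_samples for p in expected_probs]
_, p_value = stats.chisquare(counts, f_exp)
# the sampler passes when we fail to reject the null hypothesis
# (samples match the distribution) at the alpha = 0.01 level
sampler_ok = p_value > 0.01
```

With an unseeded generator this hypothesis test would spontaneously fail about 1% of the time, which is exactly why the seeded Random object matters.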
Blog posts about sample_spanning_tree
21 Jul 2021 - Preliminaries For Sampling A Spanning Tree
28 Jul 2021 - Sampling A Spanning Tree
Commits about sample_spanning_tree
Developing test for sampling spanning tree
Changed sample_spanning_tree test to Chi squared test
Adding test cases - Implemented @py_random_state
decorator
Review suggestions from dshult - Implemented code review suggestions from one of my mentors
asadpour_atsp
This function was the last piece of the puzzle, connecting all of the others together and producing the final result!
Implementation of this function was actually rather smooth.
The only technical difficulty I had was reading the support of the flow_dict
and the theoretical difficulties were adapting the min_cost_flow
function to solve the minimum circulation problem.
Oh, and if the flow along an edge is greater than 1, I need to add parallel edges to the graph so that it is still Eulerian.
A brief overview of the whole algorithm is given below:
Blog posts about asadpour_atsp
29 Jul 2021 - Looking At The Big Picture
10 Aug 2021 - Completing The Asadpour Algorithm
Commits about asadpour_atsp
untested implementation of asadpour_tsp
Fixed runtime errors in asadpour_tsp - The general traveling salesman problem function assumed graphs were undirected, which does not work with an ATSP algorithm
black reformats - Fixed parallel edges from flow support bug
Fixed rounding error with tests
Review suggestions from dshult - Implemented code review suggestions from one of my mentors
Implemented review suggestions from rossbar
Overall, I really enjoyed this Summer of Code. I was able to branch out, continue to learn Python, and learn more about graphs and graph algorithms, which is an area of interest for me.
Assuming that I have any amount of free time this coming fall semester, I’d love to stay involved with NetworkX. In fact, there are already some things that I have in mind even though my current code works as is.
Move sample_spanning_tree
to mst.py
and rename it to random_spanning_tree
.
The ability to sample random spanning trees is not a part of the greater NetworkX library and could be useful to others.
One of my mentors mentioned it being relevant to Steiner trees and if I can help other developers and users out, I will.
Adapt sample_spanning_tree
so that it can use both additive and multiplicative weight functions.
The Asadpour algorithm only needs the multiplicative weight, but the Kulkarni paper [5] does talk about using an additive weight function which may be more useful to other NetworkX users.
Move my Kirchhoff’s Tree Matrix Theorem helper function to laplacian_matrix.py
so that other NetworkX users can access it.
Investigate the following article about the Held Karp relaxation. While I have no definite evidence for this one, I do believe that the Held Karp relaxation is the slowest part of my implementation of the Asadpour algorithm and thus is the best place for improving it. The ascent method I am using comes from the original Held and Karp paper [3], but they did release a part II which may have better algorithms in it. The citation is given below.
M. Held, R.M. Karp, The traveling-salesman problem and minimum spanning trees: Part II. Mathematical Programming, 1971, 1(1), p. 6–25. https://doi.org/10.1007/BF01584070
Refactor the Edmonds
class in branchings.py
.
That class is the implementation for Edmonds’ branching algorithm but uses an iterative approach rather than the recursive one discussed in Edmonds’ paper [2].
I did also agree to work with another person, lkora, to help rework this class and possibly add a minimum_maximal_branching
function to find the minimum branching which still connects as many nodes as possible.
This would be analogous to a spanning forest in an undirected graph.
At the moment, neither of us have had time to start such work.
For more information please reference issue #4836.
While there are areas of this problem which I can improve upon, it is important for me to remember that this project was still a complete success. NetworkX now has an algorithm to approximate the traveling salesman problem in asymmetric or directed graphs.
My implementation of asadpour_atsp
is now working!
Recall that my pseudo code for this function from my last post was
def asadpour_tsp
Input: A complete graph G with weight being the attribute key for the edge weights.
Output: A list of edges which form the approximate ATSP solution.
z_star = held_karp(G)
# test to see if z_star is a graph or dict
if type(z_star) is nx.DiGraph
return z_star.edges
z_support = nx.MultiGraph()
for u, v in z_star
if not in z_support.edges
edge_weight = min(G[u][v][weight], G[v][u][weight])
z_support.add_edge(u, v, weight=edge_weight)
gamma = spanning_tree_distribution(z_support, z_star)
for u, v in z_support.edges
z_support[u][v][lambda] = exp(gamma[(u, v)])
for _ in range 1 to 2 ceil(log(n))
sampled_tree = sample_spanning_tree(G)
sampled_tree_weight = sampled_tree.size()
if sampled_tree_weight < minimum_sampled_tree_weight
minimum_sampled_tree = sampled_tree.copy()
minimum_sampled_tree_weight = sampled_tree_weight
t_star = nx.DiGraph
for u, v, d in minimum_sampled_tree.edges(data=weight)
if d == G[u][v][weight]
t_star.add_edge(u, v, weight=d)
else
t_star.add_edge(v, u, weight=d)
for n in t_star
node_demands[n] = t_star.out_degree(n) - t_star.in_degree(n)
nx.set_node_attributes(G, node_demands)
flow_dict = nx.min_cost_flow(G)
for u, v in flow_dict
if edge not in t_star.edges and flow_dict[u, v] > 0
t_star.add_edge(u, v)
eulerian_circuit = nx.eulerian_circuit(t_star)
return _shortcutting(eulerian_circuit)
And this was more or less correct. A few issues were present, as they always were going to be.
First, my largest issue came from a part of a word being in parentheses in the Asadpour paper on page 385.
This integral circulation \(f^*\) corresponds to a directed (multi)graph \(H\) which contains \(\vec{T}^*\).
Basically, if the minimum flow along an edge is ever larger than 1, I need to add that many parallel edges in order to ensure that everything is still Eulerian. This became a problem quickly while developing my test cases, as shown in the example below.
As you can see, for the incorrect circulation, vertices 2 and 3 are not Eulerian as their in- and out-degrees do not match.
All of the others were just minor points where the pseudo code didn’t directly translate into python (because, after all, it isn’t python).
The first thing I did once asadpour_atsp was working was to take the fractional, symmetric Held Karp relaxation test graph and run it through the general traveling_salesman_problem function.
Since there are random numbers involved here, the results were always within the \(O(\log n / \log \log n)\) approximation factor but varied from run to run.
Three examples are shown below.
The first thing we want to check is the approximation ratio.
We know that the minimum cost output of the traveling_salesman_problem function is 304 (this is actually lower than the optimal tour in the undirected version; more on this later).
Next we need to know what our maximum approximation factor is.
Now, the Asadpour algorithm is \(O(\log n / \log \log n)\) which for our six vertex graph would be \(\ln(6) / \ln(\ln(6)) \approx 3.0723\).
However, on page 386 they give the coefficients of the approximation as \((2 + 8 \log n / \log \log n)\) which would be \(2 + 8 \times \ln(6) / \ln(\ln(6)) \approx 26.5784\).
(Remember that all \(\log\)’s in the Asadpour paper refer to the natural logarithm.)
All of our examples are well below even the lower limit.
For example 1:
\[ \begin{array}{r l} \text{actual}: & 504 \\\ \text{expected}: & 304 \\\ \text{approx. factor}: & \frac{504}{304} \approx 1.6578 < 3.0723 \end{array} \]
Example 2:
\[ \begin{array}{r l} \text{actual}: & 404 \\\ \text{expected}: & 304 \\\ \text{approx. factor}: & \frac{404}{304} \approx 1.3289 < 3.0723 \end{array} \]
Example 3:
\[ \begin{array}{r l} \text{actual}: & 304 \\\ \text{expected}: & 304 \\\ \text{approx. factor}: & \frac{304}{304} = 1.0000 < 3.0723 \end{array} \]
At this point, you’ve probably noticed that the examples given are not, strictly speaking, Hamiltonian cycles: they visit some vertices multiple times.
This is because the graph we have is not complete.
The Asadpour algorithm only works on complete graphs, so the traveling_salesman_problem
function finds the shortest cost path between every pair of vertices and inserts the missing edges.
In fact, if the asadpour_atsp
function is given an incomplete graph, it will raise an exception.
Take example three, where there is only one repeated vertex, 5.
Behind the scenes, the graph is complete and the solution may contain the dashed edge in the below image.
But that edge is not in the original graph, so during the post-processing done by the traveling_salesman_problem
function, the red edges are inserted instead of the dashed edge.
Before I could write any tests, I needed to ensure that the tests were consistent from execution to execution.
At the time, this was not the case since there were random numbers being generated in order to sample the spanning trees.
So I had to learn how to use the @py_random_state
decorator.
When this decorator is added to the top of a function, we pass it either the position of the argument in the function signature or the name of the keyword for that argument. It then takes that argument and configures a python Random object based on the input parameter.

- If it is None, use a new Random object.
- If it is an int, use a new Random object with that seed.
- If it is a Random object, use that object as is.

So I changed the function signature of sample_spanning_tree to have random=None at the end.
For most use cases, the default value will not be changed and the results will be different every time the method is called, but if we give it an int
, the same tree will be sampled every time.
But, for my tests I can give it a seed to create repeatable behaviour.
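In miniature, the decorator works like this (hedged: `pick` is a made-up function, but the decorator is the real networkx.utils.py_random_state):

```python
from networkx.utils import py_random_state

@py_random_state("random")
def pick(seq, random=None):
    # by the time the body runs, `random` is a random.Random instance
    return random.choice(seq)

pick([1, 2, 3])  # fresh Random object, results differ per call
assert pick([1, 2, 3], random=56) == pick([1, 2, 3], random=56)  # seeded: repeatable
```

Passing an int builds a fresh seeded Random each call, which is why every call with the same seed returns the same value.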
Since the sample_spanning_tree function is not visible outside of the traveling_salesman file, I also had to create a pass-through parameter for asadpour_atsp so that my seed could have any effect.
Once this was done, I modified the test for sample_spanning_tree
so that it would not have a 1 in 100 chance of spontaneously failing.
At first I just passed it an int
, but that forced every tree sampled to be the same (since the edges were shuffled the same and sampled from the same sequence of numbers) and the test failed.
So I tweaked it to use a Random
object from the random package and this worked well.
From here, I wrap the asadpour_atsp call with the parameters I want in another function, fixed_asadpour, like this:

def fixed_asadpour(G, weight):
    return nx_app.asadpour_atsp(G, weight, 56)

path = nx_app.traveling_salesman_problem(
    G, weight="weight", cycle=False, method=fixed_asadpour
)
I tested using both traveling_salesman_problem
and asadpour_atsp
.
The tests included:
There is even a bonus feature!
The asadpour_atsp
function accepts a fourth argument, source
!
Since both of the return methods use eulerian_circuit
and the _shortcutting
functions, I can pass a source
vertex to the circuit function and ensure that the returned path starts and returns to the desired vertex.
Access it by wrapping the method, just be sure that the source vertex is in the graph to avoid an exception.
def fixed_asadpour(G, weight):
    return nx_app.asadpour_atsp(G, weight, source=0)

path = nx_app.traveling_salesman_problem(
    G, weight="weight", cycle=False, method=fixed_asadpour
)
A. Asadpour, M. X. Goemans, A. Madry, S. O. Gharan, and A. Saberi, An O (log n / log log n)-approximation algorithm for the asymmetric traveling salesman problem, SODA ’10, Society for Industrial and Applied Mathematics, 2010, https://dl.acm.org/doi/abs/10.5555/1873601.1873633.
Well, we’re finally at the point in this GSoC project where the end is glimmering on the horizon. I have completed the Held Karp relaxation, generating a spanning tree distribution and now sampling from that distribution. That means that it is time to start thinking about how to link these separate components into one algorithm.
Recall that from the Asadpour paper the overview of the algorithm is
Algorithm 1 An \(O(\log n / \log \log n)\)-approximation algorithm for the ATSP
Input: A set \(V\) consisting of \(n\) points and a cost function \(c\ :\ V \times V \rightarrow \mathbb{R}^+\) satisfying the triangle inequality.
Output: \(O(\log n / \log \log n)\)-approximation of the asymmetric traveling salesman problem instance described by \(V\) and \(c\).
Solve the Held-Karp LP relaxation of the ATSP instance to get an optimum extreme point solution \(x^*\). Define \(z^*\) as in (5), making it a symmetrized and scaled down version of \(x^*\). Vector \(z^*\) can be viewed as a point in the spanning tree polytope of the undirected graph on the support of \(x^*\) that one obtains after disregarding the directions of arcs (See Section 3.)
Let \(E\) be the support graph of \(z^*\) when the direction of the arcs are disregarded. Find weights \({\tilde{\gamma}}_{e \in E}\) such that the exponential distribution on the spanning trees, \(\tilde{p}(T) \propto \exp(\sum_{e \in T} \tilde{\gamma}_e)\) (approximately) preserves the marginals imposed by \(z^*\), i.e. for any edge \(e \in E\),
\(\sum_{T \in \mathcal{T} : T \ni e} \tilde{p}(T) \leq (1 + \epsilon) z^*_e\), for a small enough value of \(\epsilon\). (In this paper we show that \(\epsilon = 0.2\) suffices for our purpose. See Sections 7 and 8 for a description of how to compute such a distribution.)

Sample \(2\lceil \log n \rceil\) spanning trees \(T_1, \dots, T_{2\lceil \log n \rceil}\) from \(\tilde{p}(.)\). For each of these trees, orient all its edges so as to minimize its cost with respect to our (asymmetric) cost function \(c\). Let \(T^*\) be the tree whose resulting cost is minimal among all of the sampled trees.
Find a minimum cost integral circulation that contains the oriented tree \(\vec{T}^*\). Shortcut this circulation to a tour and output it. (See Section 4.)
We are now firmly in the steps 3 and 4 area.
Going all the way back to my post on 24 May 2021 titled Networkx Function Stubs, the only function left is asadpour_tsp, the main function which needs to accomplish this entire algorithm.
But before we get to creating pseudo code for it there is still step 4 which needs a thorough examination.
Once we have sampled enough spanning trees from the graph and converted the minimum one into \(\vec{T}^*\) we need to find the minimum cost integral circulation in the graph which contains \(\vec{T}^*\).
While NetworkX has a minimum cost flow function, namely min_cost_flow, it is not suitable for the Asadpour algorithm out of the box.
The problem here is that we do not have node demands, we have edge demands.
However, after some reading and discussion with one of my mentors Dan, we can convert the current problem into one which can be solved using the min_cost_flow
function.
The problem that we are trying to solve is called the minimum cost circulation problem and the one which min_cost_flow
is able to solve is the, well, minimum cost flow problem.
As it happens, these are equivalent problems, so I can convert the minimum cost circulation into a minimum cost flow problem by transforming the minimum edge demands into node demands.
Recall that at this point we have a directed minimum sampled spanning tree \(\vec{T}^*\) and that the flow through each of the edges in \(\vec{T}^*\) needs to be at least one. From the perspective of a flow problem, \(\vec{T}^*\) is moving some flow around the graph. However, in order to augment \(\vec{T}^*\) into an Eulerian graph so that we can walk it, we need to counteract this flow so that the net flow for each node is 0 (\(f(\delta^+(v)) = f(\delta^-(v))\) in the Asadpour paper).
So, we find the net flow of each node and then assign its demand to be the negative of that number so that the flow will balance at the node in question. If the total flow at any node \(i\) is \(\delta^+(i) - \delta^-(i)\) then the demand we assign to that node is \(\delta^-(i) - \delta^+(i)\). Once we assign the demands to the nodes we can temporarily ignore the edge lower capacities to find the minimum flow.
For more information on the conversion process, please see [2].
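On a toy graph, the conversion looks something like this (a hedged sketch, not the final implementation; the graph and the sampled tree here are made up):

```python
import networkx as nx

# toy digraph standing in for G, with edges in both directions
G = nx.DiGraph()
for u, v in [(0, 1), (1, 2), (2, 0), (1, 0), (2, 1), (0, 2)]:
    G.add_edge(u, v, weight=1)

t_star = nx.DiGraph([(0, 1), (1, 2)])  # a sampled, oriented tree

# node demand = out-degree minus in-degree in T*; with NetworkX's sign
# convention, min_cost_flow then routes flow that cancels the imbalance
for n in G:
    G.nodes[n]["demand"] = t_star.out_degree(n) - t_star.in_degree(n)

flow_dict = nx.min_cost_flow(G)

# add the support of the flow to T*, with parallel edges if flow > 1
H = nx.MultiDiGraph(t_star.edges())
for u, targets in flow_dict.items():
    for v, f in targets.items():
        for _ in range(f):
            H.add_edge(u, v)

assert nx.is_eulerian(H)
```

Here the flow simply adds the edge \(2 \rightarrow 0\), closing \(\vec{T}^*\) into an Eulerian multigraph.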
After the minimum flow is found, we take the support of the flow and add it to the \(\vec{T}^*\) to create a multigraph \(H\).
Now we know that \(H\) is weakly connected (it contains \(\vec{T^*}\)) and that it is Eulerian because for every node the in-degree is equal to the out-degree. A closed Eulerian walk, or Eulerian circuit, can be found in this graph with eulerian_circuit.
Here is an example of this process on a simple graph. I suspect that the flow will not always be the back edges from the spanning tree and that the only reason that is the case here is due to the small number of vertices.
Finally, we take the eulerian circuit and shortcut it.
On the plus side, the shortcutting process is the same as in the Christofides algorithm, so it is already available as the _shortcutting helper function in the traveling salesman file.
This is really where it is critical that the triangle inequality holds so that the shortcutting cannot increase the cost of the circulation.
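The circuit-plus-shortcut step can be sketched as follows (hedged: this reimplements the idea of the private _shortcutting helper on a toy Eulerian multigraph):

```python
import networkx as nx

# a toy Eulerian multigraph standing in for T* plus the flow support
H = nx.MultiDiGraph([(0, 1), (1, 2), (2, 1), (1, 0)])

circuit = nx.eulerian_circuit(H, source=0)
tour = [0]
for _, v in circuit:
    if v not in tour:  # shortcut: skip nodes we have already visited
        tour.append(v)
tour.append(0)  # close the tour back at the source
assert tour == [0, 1, 2, 0]
```

The triangle inequality is what guarantees that each skip, which replaces a detour through visited nodes by a direct edge, never increases the cost.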
Let’s start with the function signature.
def asadpour_tsp
Input: A complete graph G with weight being the attribute key for the edge weights.
Output: A list of edges which form the approximate ATSP solution.
This is exactly what we’d expect, take a complete graph \(G\) satisfying the triangle inequality and return the edges in the approximate solution to the asymmetric traveling salesman problem.
Recall from my post Networkx Function Stubs that the primary traveling salesman function, traveling_salesman_problem, will ensure that we are given a complete graph that follows the triangle inequality by using all-pairs shortest path calculations, and will handle whether we are expected to return a true cycle or only a path.
The first step in the Asadpour algorithm is the Held Karp relaxation. I am planning on editing the flow of the algorithm here a bit. If the Held Karp relaxation finds an integer solution, then we know that is one of the optimal TSP routes so there is no point in continuing the algorithm: we can just return that as an optimal solution. However, if the Held Karp relaxation finds a fractional solution we will press on with the algorithm.
z_star = held_karp(G)
# test to see if z_star is a graph or dict
if type(z_star) is nx.DiGraph
return z_star.edges
Once we have the Held Karp solution, we create the undirected support of z_star
for the next step of creating the exponential distribution of spanning trees.
z_support = nx.MultiGraph()
for u, v in z_star
if not in z_support.edges
edge_weight = min(G[u][v][weight], G[v][u][weight])
z_support.add_edge(u, v, weight=edge_weight)
gamma = spanning_tree_distribution(z_support, z_star)
This completes steps 1 and 2 in the Asadpour overview at the top of this post. Next we sample \(2 \lceil \log n \rceil\) spanning trees.
for u, v in z_support.edges
z_support[u][v][lambda] = exp(gamma[(u, v)])
for _ in range 1 to 2 ceil(log(n))
sampled_tree = sample_spanning_tree(G)
sampled_tree_weight = sampled_tree.size()
if sampled_tree_weight < minimum_sampled_tree_weight
minimum_sampled_tree = sampled_tree.copy()
minimum_sampled_tree_weight = sampled_tree_weight
Now that we have the minimum sampled tree, we need to orient the edge directions to keep the cost equal to that minimum tree.
We can do this by iterating over the edges in minimum_sampled_tree
and checking the edge weights in the original graph \(G\).
Using \(G\) is required here because we may not have recorded the minimum direction when we created z_support.
t_star = nx.DiGraph
for u, v, d in minimum_sampled_tree.edges(data=weight)
if d == G[u][v][weight]
t_star.add_edge(u, v, weight=d)
else
t_star.add_edge(v, u, weight=d)
Next we create a mapping of nodes to node demands for the minimum cost flow problem which was discussed earlier in this post.
I think that using a dict is the best option as it can be passed into set_node_attributes
all at once before finding the minimum cost flow.
for n in t_star
node_demands[n] = t_star.out_degree(n) - t_star.in_degree(n)
nx.set_node_attributes(G, node_demands)
flow_dict = nx.min_cost_flow(G)
Take the Eulerian circuit and shortcut it on the way out.
Here we can add the support of the flow directly to t_star
to simulate adding the two graphs together.
for u, v in flow_dict
if edge not in t_star.edges and flow_dict[u, v] > 0
t_star.add_edge(u, v)
eulerian_circuit = nx.eulerian_circuit(t_star)
return _shortcutting(eulerian_circuit)
That should be it.
Once the code for asadpour_tsp
is written it will need to be tested.
I’m not sure how I’m going to create the test cases yet, but I do plan on testing it using real world airline ticket prices, as that is my go-to example for the asymmetric traveling salesman problem.
A. Asadpour, M. X. Goemans, A. Madry, S. O. Gharan, and A. Saberi, An O(log n / log log n)-approximation algorithm for the asymmetric traveling salesman problem, Operations Research, 65 (2017), pp. 1043-1061.
D. Williamson, ORIE 633 Network Flows Lecture 11, 11 Oct 2007, https://people.orie.cornell.edu/dpw/orie633/LectureNotes/lecture11.pdf.
The heavy lifting I did in the preliminary post certainly paid off here!
In just one day I was able to implement sample_spanning_tree
and its two helper functions.
This was a very easy function to implement.
It followed exactly from the pseudo code and was working with spanning_tree_distribution
before I started on sample_spanning_tree
.
This function was more difficult than I originally anticipated.
The code for the main body of the function only needed minor tweaks to work with the specifics of Python, such as shuffle being in place and returning None, and some details about how sets work.
For example, I add edge \(e\) to \(U\) before calling prepare_graph on it, and then invert the if statement to remove \(e\) from \(U\). Those portions are functionally the same.
The issues I had with this function all stem back to contracting multiple nodes in a row and how that affects the graph.
As a side note, the contracted_edge
function in NetworkX is a wrapper for contracted_node
and the latter has a copy
keyword argument that is assumed to be True
by the former function.
It was a trivial change to extend this functionality to contracted_edge
but in the end I used contracted_node
so the whole thing is moot.
First recall how edge contraction, or in this case node contraction, works. Two nodes are merged into one, and the merged node is connected by all the edges which connected the original two nodes. Edges between those two nodes become self loops, but in this case I prevented the creation of self loops as directed by Kulkarni. If a node which is not contracted has edges to both of the contracted nodes, we insert a parallel edge between them. I struggled with NetworkX’s graph-class API in a past post titled The Entropy Distribution.
For NetworkX’s implementation, we would call nx.contracted_nodes(G, u, v)
and u
and v
would always be merged into u
, so v
is the node which is no longer in the graph.
Now imagine that we have three edges to contract because they are all in \(U\) which look like the following.
If we process this from left to right, we first contract nodes 0 and 1. At this point, the edge \(\{1, 2\}\) no longer exists in \(G\), as node 1 itself has been removed. However, we would still need to contract the new \(\{0, 2\}\) edge, which is equivalent to the old \(\{1, 2\}\) edge.
My first attempt to solve this was messy and didn’t work well.
I developed an if-elif
chain for whichever endpoints of the contracting edge no longer existed in the graph and tried to use a dict comprehension to keep a dict up to date with which vertices were equivalent to each other, but it never worked reliably.
Fortunately there was a better solution. This next bit of code I actually first used in my Graph Algorithms class from last semester. In particular it is the merge-find or disjoint set data structure from the components algorithm (code can be found here and more information about the data structure here).
Basically we create a mapping from a node to that node’s representative.
In this case a node’s representative is the node that is still in \(G\) but the input node has been merged into through a series of contractions.
In the above example, once node 1 is merged into node 0, 0 would become node 1’s representative.
We search recursively through the merged_nodes
dict until we find a node which is not in the dict, meaning that it is still its own representative and therefore in the graph.
This will let us handle a representative node later being merged into another node.
Finally, we take advantage of path compression so that lookup times remain good as the number of entries in merged_nodes
grows.
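As a sketch, the recursive representative lookup with path compression described above might look like this (the function and variable names here are my own, not the final NetworkX code):

```python
def find_representative(merged_nodes, node):
    """Return the node that `node` has ultimately been merged into.

    merged_nodes maps a removed node to the node it was merged into.
    A node absent from the dict is still its own representative,
    i.e. still present in the graph.
    """
    if node not in merged_nodes:
        return node
    root = find_representative(merged_nodes, merged_nodes[node])
    # Path compression: point node directly at its final representative
    # so later lookups stay fast as merged_nodes grows.
    merged_nodes[node] = root
    return root

# Node 1 was merged into node 0, then node 2 into node 1,
# so node 2's representative is node 0.
merged_nodes = {1: 0, 2: 1}
print(find_representative(merged_nodes, 2))  # → 0
```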
This worked well once I caught a bug where the prepare_graph
function tried to contract a node with itself.
However, the function was running and returning a result that could have one or two more edges than needed, which of course means it is not a tree.
I was testing on the symmetric fractional Held Karp graph by the way, so with six nodes it should have five edges per tree.
I seeded the random number generator for one of the seven-edge results and started to debug! Recall that once we generate a uniform decimal between 0 and 1 we compare it to
\[ \lambda_e \times \frac{K_{G \backslash \{e\}}}{K_G} \]
where \(K\) is the result of Kirchhoff’s Theorem on the subscripted graph. One probability that caught my eye had the fractional component equal to 1. This means that adding \(e\) to the set of contracted edges had no effect on whether that edge should be included in the final spanning tree. Closer inspection revealed that the edge \(e\) in question already could not be picked for the spanning tree: since it did not exist in \(G\), it could not exist in \(G \backslash \{e\}\) either.
Imagine the following situation. We have three edges to contract but they form a cycle of length three.
If we contract \(\{0, 1\}\) and then \(\{0, 2\}\), what does that mean for \(\{1, 2\}\)? Well, \(\{1, 2\}\) would become a self loop on vertex 0, but we are deleting self loops so it cannot exist. It has to have a probability of 0. Yet in the current implementation of the function, it would have a probability of \(\lambda_{\{1, 2\}}\). So, I have to check whether a representative edge exists for the edge we are considering in the current iteration of the main for loop.
The solution is to return the merge-find data structure along with the prepared graph for \(G\) and then check that an edge between the two representatives of the original edge’s endpoints is present.
If so, use the kirchhoff value as normal but if not make G_e_total_tree_weight
equal to zero so that this edge cannot be picked.
Finally I was able to sample trees from G
consistently, but did they match the expected probabilities?
The first test I was working with sampled one tree and checked that it was actually a tree. I first expanded it to sample 1000 trees and make sure that they were all trees. At this point, I was confident that the function would always return a tree, but I still needed to check the tree distribution.
So after a lot of difficulty writing the test itself to check which of the 75 possible spanning trees I had sampled I was ready to check the actual distribution. First, the test iterates over all the spanning trees, records the products of edge weights and normalizes the data. (Remember that the actual probability is only proportional to the product of edge weights). Then I sample 50000 trees and record the actual frequency. Next, it calculates the percent error from the expected probability to the actual frequency. The sample size is so large because at 1000 trees the percent error was all over the place but, as the Law of Large Numbers dictates, the larger sample shows the actual results converging to the expected results so I do believe that the function is working.
That being said, seeing the percent error converge to less than 15% for all 75 spanning trees is not a very rigorous test. I can either implement a formal test using the percent error or try to create a Chi squared test using scipy.
This morning I was able to get a Chi squared test working, and it was definitely the correct decision. I was able to reduce the sample size from 50,000 to 1200, which is a near minimum sample. In order to run a Chi squared test you need an expected frequency of at least 5 for each of the categories, so I had to find the number of samples which guarantees that for a tree with a probability of about 0.4%; that number was 1163, which I rounded to 1200.
I am testing at the 0.01 significance level, so this test may fail without reason 1% of the time, but it is still an overall good test for the distribution.
A. Asadpour, M. X. Goemans, A. Madry, S. Oveis Gharan, and A. Saberi, An O(log n / log log n)-approximation algorithm for the asymmetric traveling salesman problem, SODA ’10, Society for Industrial and Applied Mathematics, 2010, pp. 379-389, https://dl.acm.org/doi/abs/10.5555/1873601.1873633.
V. G. Kulkarni, Generating random combinatorial objects, Journal of algorithms, 11 (1990), pp. 185–207.
In order to test the exponential distribution that I generate using spanning_tree_distribution
, I need to be able to sample a tree from the distribution.
The primary citation used in the Asadpour paper is Generating Random Combinatorial Objects by V. G. Kulkarni (1989).
While I was not able to find an online copy of this article, the Michigan Tech library did have a copy that I was able to read.
Kulkarni gave a general overview of the algorithm in Section 2, but Section 5 is titled ‘Random Spanning Trees’ and starts on page 200. First, let’s check that the preliminaries for the Kulkarni paper on page 200 match the Asadpour algorithm.
Let \(G = (V, E)\) be an undirected network of \(M\) nodes and \(N\) arcs… Let \(\mathfrak{B}\) be the set of all spanning trees in \(G\). Let \(\alpha_i\) be the positive weight of arc \(i \in E\). Define the weight \(w(B)\) of a spanning tree \(B \in \mathfrak{B}\) as
\[w(B) = \prod_{i \in B} \alpha_i\]
Also define
\[n(G) = \sum_{B \in \mathfrak{B}} w(B)\]
In this section we describe an algorithm to generate \(B \in \mathfrak{B}\) so that
\[P\{B \text{ is generated}\} = \frac{w(B)}{n(G)}\]
Immediately we can see that \(\mathfrak{B}\) is the same as \(\mathcal{T}\) from the Asadpour paper, the set of all spanning trees. The weight of each edge is \(\alpha_i\) for Kulkarni and \(\lambda_e\) to Asadpour. As for the product of the weights of the graph being the probability, the Asadpour paper states on page 382
Given \(\lambda_e \geq 0\) for \(e \in E\), a \(\lambda\)-random tree \(T\) of \(G\) is a tree \(T\) chosen from the set of all spanning trees of \(G\) with probability proportional to \(\prod_{e \in T} \lambda_e\).
So this is not a concern. Finally, \(n(G)\) can be written as
\[\sum_{T \in \mathcal{T}} \prod_{e \in T} \lambda_e\]
which does appear several times throughout the Asadpour paper. Thus the preliminaries between the Kulkarni and Asadpour papers align.
The specialized version of the general algorithm which Kulkarni gives is Algorithm A8 on page 202.
\(U = \emptyset,\) \(V = E\)
Do \(i = 1\) to \(N\);
\(\qquad\)Let \(a = n(G(U, V))\)
\(\qquad\qquad a’ = n(G(U \cup \{i\}, V))\)
\(\qquad\)Generate \(Z \sim U[0, 1]\)
\(\qquad\)If \(Z \leq \alpha_i \times \left(a’ / a\right)\)
\(\qquad\qquad\)then \(U = U \cup \{i\}\),
\(\qquad\qquad\)else \(V = V - \{i\}\)
\(\qquad\)end.
Stop. \(U\) is the required spanning tree.
Now we have to understand this algorithm so we can create pseudo code for it.
First as a notational explanation, the statement “Generate \(Z \sim U[0, 1]\)” means picking a uniformly random variable over the interval \([0, 1]\) which is independent of all the random variables generated before it (See page 188 of Kulkarni for more information).
The built-in python module random
can be used here.
Looking at the real-valued distributions, I believe that using random.uniform(0, 1)
is preferable to random.random(),
since the latter can never generate a 1, and 1 is explicitly part of the interval discussed in the Kulkarni paper.
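A minimal illustration of the “Generate \(Z \sim U[0, 1]\)” step with the built-in random module (the seed is arbitrary, only there to make the run reproducible):

```python
import random

rng = random.Random(42)
# uniform(0, 1) draws from the closed interval [0, 1];
# random() would draw from the half-open interval [0, 1).
z = rng.uniform(0, 1)
assert 0 <= z <= 1
```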
The other notational oddity would be statements similar to \(G(U, V)\), which in this case does not refer to a graph with \(U\) as the vertex set and \(V\) as the edge set, as \(U\) and \(V\) are both subsets of the full edge set \(E\).
\(G(U, V)\) is defined in the Kulkarni paper on page 201 as
Let \(G(U, V)\) be a subgraph of \(G\) obtained by deleting arcs that are not in \(V\), and collapsing arcs that are in \(U\) (i.e., identifying the end nodes of arcs in \(U\)) and deleting all self-loops resulting from these deletions and collapsing.
This language seems a bit… clunky, especially for the edges in \(U\).
In this case, “collapsing arcs that are in \(U\)” means contracting those edges without self loops.
Fortunately, this functionality is part of NetworkX using networkx.algorithms.minors.contracted_edge
with the self_loops
keyword argument set to False.
As for the edges in \(E - V\), removing them can be easily accomplished using networkx.MultiGraph.remove_edges_from.
Once we have generated \(G(U, V)\), we need to find \(n(G(U, V))\).
This can be done with something we are already familiar with: Kirchhoff’s Tree Matrix Theorem.
All we need to do is create the Laplacian matrix and then find the determinant of the first cofactor.
This code will probably be taken directly from the spanning_tree_distribution
function.
Actually, this is a place to create a broader helper function called kirchhoffs
which will take a graph and return the total weight of its spanning trees; it would then be used as part of q
in spanning_tree_distribution
and in sample_spanning_tree.
From here we compare \(Z\) to \(\alpha_i \left(a’ / a\right)\) to see if that edge is added to the tree or discarded. Understanding the process of the algorithm gives context to the meaning of \(U\) and \(V\): \(U\) is the set of edges which we have decided to include in the spanning tree, while \(V\) is the set of edges still under consideration for \(U\) (roughly speaking).
Now there is still a bit of ambiguity in the algorithm that Kulkarni gives, mainly about \(i\). In the loop condition, \(i\) is an integer from 1 to \(N\), the number of arcs in the graph, but it is later added to \(U\), so it has to be an edge. Referencing the Asadpour paper, it starts its description of sampling the \(\lambda\)-random tree on page 383 by saying “The idea is to order the edges \(e_1, \dots, e_m\) of \(G\) arbitrarily and process them one by one”. So I believe that the edge interpretation is correct and the integer notation used in Kulkarni assumes that a mapping of the edges to \(\{1, 2, \dots, N\}\) has occurred.
Time to write some pseudo code! Starting with the function signature
def sample_spanning_tree
Input: A multigraph G whose edges contain a lambda value stored at lambda_key
Output: A new graph which is a spanning tree of G
Next up is a bit of initialization
U = set()
V = set(G.edges)
shuffled_edges = shuffle(G.edges)
Now the definitions of U
and V
come directly from Algorithm A8, but shuffled_edges
is new.
My thoughts are that this will be what we use for \(i\): we shuffle the edges of the graph, and then in the loop we iterate over the edges within shuffled_edges.
Next we have the loop.
for edge e in shuffled_edges
    G_total_tree_weight = kirchhoffs(prepare_graph(G, U, V))
    G_e_total_tree_weight = kirchhoffs(prepare_graph(G, U | {e}, V))
    z = uniform(0, 1)
    if z <= e[lambda_key] * G_e_total_tree_weight / G_total_tree_weight
        U.add(e)
        if len(U) == G.number_of_nodes - 1
            # Spanning tree complete, no need to continue to consider edges.
            spanning_tree = nx.Graph()
            spanning_tree.add_edges_from(U)
            return spanning_tree
    else
        V.remove(e)
The main loop body uses two other functions which are not part of the standard NetworkX library, kirchhoffs
and prepare_graph.
As I mentioned before, kirchhoffs
will apply Kirchhoff’s Theorem to the graph.
Pseudo code for this is below, based on the existing code in q
of spanning_tree_distribution
which will be updated to use this new helper.
def kirchhoffs
    Input: A multigraph G and weight key, weight
    Output: The total weight of the graph's spanning trees
    G_laplacian = laplacian_matrix(G, weight=weight)
    # Delete the first row and column to form the first cofactor
    G_laplacian = G_laplacian.delete(0, 0)
    G_laplacian = G_laplacian.delete(0, 1)
    return det(G_laplacian)
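A helper along these lines could be realized with NetworkX and NumPy roughly as follows; the rounding is my own addition to counteract floating point noise in the determinant, and the name is just the post's working name, not NetworkX API:

```python
import networkx as nx
import numpy as np

def kirchhoffs(G, weight=None):
    """Total weight of G's spanning trees via Kirchhoff's Theorem."""
    laplacian = nx.laplacian_matrix(G, weight=weight).toarray()
    # The first cofactor: delete row 0 and column 0
    cofactor = laplacian[1:, 1:]
    return round(np.linalg.det(cofactor))

# Sanity check: K4 has 4^{4-2} = 16 spanning trees by Cayley's formula
print(kirchhoffs(nx.complete_graph(4)))  # → 16
```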
The process for the other helper, prepare_graph,
is also given.
def prepare_graph
    Input: A graph G, the set of contracted edges U and the set V of edges which are not removed
    Output: A subgraph of G in which all edges in U are contracted and edges not in V are
            removed
    result = G.copy()
    edges_to_remove = set(result.edges).difference(V)
    result.remove_edges_from(edges_to_remove)
    for edge e in U
        result = nx.contracted_edge(result, e, self_loops=False)
    return result
There is one other change to the NetworkX API that I would like to make.
At the moment, networkx.algorithms.minors.contracted_edge
is programmed to always return a copy of the graph.
Since I need to contract multiple edges in sequence, it would make a lot more sense to do the contraction in place.
I would like to add an optional keyword argument to contracted_edge
called copy
which will default to True
so that the overall functionality will not change but I will be able to perform in place contractions.
The most obvious next step is to implement the functions that I have laid out in the pseudo code, but testing is still a concerning area. My best bet is to sample, say, 1000 trees and check that the frequency of each tree is proportional to the product of all of the lambdas on its edges.
That actually just caused me to think of a new test of spanning_tree_distribution
.
If I generate the distribution and then iterate over all of the spanning trees with a SpanningTreeIterator,
I can sum the total probability of each tree being sampled, and if that is not 1 (or very close to it) then I do not have a valid distribution over the spanning trees.
A. Asadpour, M. X. Goemans, A. Madry, S. Oveis Gharan, and A. Saberi, An O(log n / log log n)-approximation algorithm for the asymmetric traveling salesman problem, SODA ’10, Society for Industrial and Applied Mathematics, 2010, pp. 379-389, https://dl.acm.org/doi/abs/10.5555/1873601.1873633.
V. G. Kulkarni, Generating random combinatorial objects, Journal of algorithms, 11 (1990), pp. 185–207.
Implementing spanning_tree_distribution
proved to have some NetworkX difficulties and one algorithmic difficulty.
Recall that the algorithm for creating the distribution is given in the Asadpour paper as
- Set \(\gamma = \vec{0}\).
- While there exists an edge \(e\) with \(q_e(\gamma) > (1 + \epsilon) z_e\):
- Compute \(\delta\) such that if we define \(\gamma’\) as \(\gamma_e’ = \gamma_e - \delta\), and \(\gamma_f’ = \gamma_f\) for all \(f \in E \backslash \{e\}\), then \(q_e(\gamma’) = (1 + \epsilon/2)z_e\).
- Set \(\gamma \leftarrow \gamma’\).
- Output \(\tilde{\gamma} := \gamma\).
Now, the procedure that I laid out in my last post, titled Entropy Distribution Setup, worked well for the while loop portion.
All of my difficulties with the NetworkX API happened in the q
inner function.
After I programmed the function, I of course needed to run it and at first I was just printing the gamma
dict out so that I could see what the values for each edge were.
My first test uses the symmetric fractional Held Karp solution and to my surprise, every value of \(\gamma\) returned as 0.
I didn’t think that this was intended behavior because if it was, there would be no reason to include this step in the overall Asadpour algorithm, so I started to dig around the code with PyCharm’s debugger.
The results were, as I suspected, not correct.
I was running Kirchhoff’s tree matrix theorem on the original graph, so the returned probabilities were an order of magnitude smaller than the values of \(z_e\) that I was comparing them to.
Additionally, all of the values were the same so I knew that this was a problem and not that the first edge I checked had unusually small probabilities.
So, I returned to the Asadpour paper and started to ask myself questions like
It was pretty easy to dismiss the first question: if normalization were required it would be mentioned in the Asadpour paper, and without a description of how to normalize, the chances of me finding the ‘correct’ way to do so would be next to impossible. The second question did take some digging. The sections of the Asadpour paper which talk about using Kirchhoff’s theorem all discuss it using the graph \(G\), which is why I was originally using all edges in \(G\) rather than the edges in \(E\). A few hints pointed to the fact that I needed to only consider the edges in \(E\), the first being the algorithm overview which states
Find weights \({\tilde{\gamma}}_{e \in E}\)
In particular the \(e \in E\) statement says that I do not need to consider the edges which are not in \(E\). Secondly, Lemma 7.2 starts by stating
Let \(G = (V, E)\) be a graph with weights \(\gamma_e\) for \(e \in E\)
Based on the current state of the function and these hints, I decided to reduce the input graph to spanning_tree_distribution
to only edges with \(z_e > 0\).
Running the test on the symmetric fractional solution now, it still returned \(\gamma = \vec{0}\) but the probabilities it was comparing were much closer during that first iteration.
Due to the fact that I do not have an example graph and distribution to work with, this could be the correct answer, but the fact that every value was the same still confused me.
My next step was to determine the actual probability of an edge being in the spanning trees for the first iteration when \(\gamma = \vec{0}\).
This can be easily done with my SpanningTreeIterator
and exploits the fact that \(\gamma = \vec{0}\) is equivalent to \(\lambda_e = 1\ \forall\ e \in E\), so we can just iterate over the spanning trees and count how often each edge appears.
That script is listed below
import networkx as nx

edges = [
    (0, 1),
    (0, 2),
    (0, 5),
    (1, 2),
    (1, 4),
    (2, 3),
    (3, 4),
    (3, 5),
    (4, 5),
]
G = nx.from_edgelist(edges, create_using=nx.Graph)

edge_frequency = {}
sp_count = 0
for tree in nx.SpanningTreeIterator(G):
    sp_count += 1
    for e in tree.edges:
        if e in edge_frequency:
            edge_frequency[e] += 1
        else:
            edge_frequency[e] = 1

for u, v in edge_frequency:
    print(
        f"({u}, {v}): {edge_frequency[(u, v)]} / {sp_count} = {edge_frequency[(u, v)] / sp_count}"
    )
This output revealed that the probabilities returned by q
should vary from edge to edge and that the correct solution for \(\gamma\) is certainly not \(\vec{0}\).
(networkx-dev) mjs@mjs-ubuntu:~/Workspace$ python3 spanning_tree_frequency.py
(0, 1): 40 / 75 = 0.5333333333333333
(0, 2): 40 / 75 = 0.5333333333333333
(0, 5): 45 / 75 = 0.6
(1, 4): 45 / 75 = 0.6
(2, 3): 45 / 75 = 0.6
(1, 2): 40 / 75 = 0.5333333333333333
(5, 3): 40 / 75 = 0.5333333333333333
(5, 4): 40 / 75 = 0.5333333333333333
(4, 3): 40 / 75 = 0.5333333333333333
Let’s focus on that first edge, \((0, 1)\). My brute force script says that it appears in 40 of the 75 spanning trees of the below graph where each edge is labelled with its \(z_e\) value.
Yet q
was saying that the edge was in 24 of 75 spanning trees.
Since the denominator was correct, I decided to focus on the numerator which is the number of spanning trees in \(G\ \backslash\ \{(0, 1)\}\).
That graph would be the following.
An argument can be made that this graph should have a self-loop on vertex 0, but this does not affect the Laplacian matrix in any way so it is omitted here. Basically, the \([0, 0]\) entry of the adjacency matrix would be 1 and the degree of vertex 0 would be 5 and \(5 - 1 = 4\) which is what the entry would be without the self loop.
What was happening was that I was giving nx.contracted_edge
a graph of the Graph class (not a directed graph, since \(E\) is undirected) and was getting a graph of the Graph class back.
The Graph class does not support multiple edges between two nodes, so the returned graph had only one edge between node 0 and node 2, which was affecting the overall Laplacian matrix and thus the number of spanning trees.
Switching from a Graph to a MultiGraph did the trick, but this subtle change should be mentioned in the NetworkX documentation for the function, linked here.
I definitely believed that if I contracted an edge, the output should automatically include both of the \((0, 2)\) edges.
An argument can be made for changing the default behavior to match this, but at the very least the documentation should explain this problem.
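A small demonstration of the difference on a triangle graph (an example of my own choosing): contracting an edge of a Graph silently collapses the resulting parallel edges, while a MultiGraph keeps both.

```python
import networkx as nx

edges = [(0, 1), (0, 2), (1, 2)]
# Contract (0, 1); the edges (0, 2) and (1, 2) both become (0, 2)
simple = nx.contracted_edge(nx.Graph(edges), (0, 1), self_loops=False)
multi = nx.contracted_edge(nx.MultiGraph(edges), (0, 1), self_loops=False)

print(simple.number_of_edges())  # → 1, the parallel edges collapsed
print(multi.number_of_edges())   # → 2, both parallel edges survive
```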
Now the q
function was returning the correct \(40 / 75\) answer for \((0, 1)\) and correct values for the rest of the edges so long as all of the \(\gamma_e\)’s were 0.
But the test was erroring out with a ValueError
when I tried to compute \(\delta\).
q
was returning a probability of an edge being in a sampled spanning tree of more than 1, which is clearly impossible, but it also caused the denominator of \(\delta\) to become negative and violate the domain of the natural log.
During my investigation of this problem, I noticed that after computing \(\delta\) and subtracting it from \(\gamma_e\), it did not have the desired effect on \(q_e\). Recall that we define \(\delta\) so that \(\gamma_e - \delta\) yields a \(q_e\) of \((1 + \epsilon / 2) z_e\). In other words, the effect of \(\delta\) is to decrease an edge probability which is too high, but in my current implementation it was having the opposite effect. The value of \(q_{(0, 1)}\) was going from 0.5333 to just over 0.6. If I let this trend continue, the program would eventually hit one of those cases where \(q_e \geq 1\) and crash the program.
Here I can use edge \((0, 1)\) as an example to show the problem. The original Laplacian matrix for \(G\) with \(\gamma = \vec{0}\) is
\[ \begin{bmatrix} 3 & -1 & -1 & 0 & 0 & -1 \\\ -1 & 3 & -1 & 0 & -1 & 0 \\\ -1 & -1 & 3 & -1 & 0 & 0 \\\ 0 & 0 & -1 & 3 & -1 & -1 \\\ 0 & -1 & 0 & -1 & 3 & -1 \\\ -1 & 0 & 0 & -1 & -1 & 3 \end{bmatrix} \]
and the Laplacian for \(G\ \backslash\ \{(0, 1)\}\) is
\[ \begin{bmatrix} 4 & -2 & -1 & -1 & 0 \\\ -2 & 3 & 0 & 0 & -1 \\\ -1 & 0 & 3 & -1 & -1 \\\ -1 & 0 & -1 & 3 & -1 \\\ 0 & -1 & -1 & -1 & 3 \end{bmatrix} \]
The determinant of the first cofactor is how we get the \(40 / 75\). Now consider the Laplacian matrices after we updated \(\gamma_{(0, 1)}\) for the first time. The one for \(G\) becomes
\[ \begin{bmatrix} 2.74 & -0.74 & -1 & 0 & 0 & -1 \\\ -0.74 & 2.74 & -1 & 0 & -1 & 0 \\\ -1 & -1 & 3 & -1 & 0 & 0 \\\ 0 & 0 & -1 & 3 & -1 & -1 \\\ 0 & -1 & 0 & -1 & 3 & -1 \\\ -1 & 0 & 0 & -1 & -1 & 3 \end{bmatrix} \]
and its first cofactor determinant is reduced from 75 to 61.6. What do we expect the value of the matrix for \(G\ \backslash\ \{(0, 1)\}\) to be? Well, we know that the final value of \(q_e\) needs to be \((1 + \epsilon / 2) z_e\) or \(1.1 \times 0.41\overline{6}\) which is \(0.458\overline{3}\). So
\[ \begin{array}{r c l} \displaystyle\frac{x}{61.6} &=& 0.458\overline{3} \\\ x &=& 28.2\overline{3} \end{array} \]
and the value of the first cofactor determinant should be \(28.2\overline{3}\). However, the contracted Laplacian for \((0, 1)\) after the value of \(\gamma_e\) is updated is
\[ \begin{bmatrix} 4 & -2 & -1 & -1 & 0 \\\ -2 & 3 & 0 & 0 & -1 \\\ -1 & 0 & 3 & -1 & -1 \\\ -1 & 0 & -1 & 3 & -1 \\\ 0 & -1 & -1 & -1 & 3 \end{bmatrix} \]
the same as before! The only edge with a different \(\gamma_e\) than before is \((0, 1)\), but since it is the contracted edge it is no longer in the graph any more and thus cannot affect the value of the first cofactor’s determinant!
But if we change the algorithm to add \(\delta\) to \(\gamma_e\) rather than subtract it, the determinant of the first cofactor for \(G\ \backslash\ \{e\}\)’s Laplacian will not change but the determinant for the Laplacian of \(G\)’s first cofactor will increase. This reduces the overall probability of picking \(e\) in a spanning tree. And, if we happen to use the same formula for \(\delta\) as before for our example of \((0, 1)\) then \(q_{(0, 1)}\) becomes \(0.449307\). Recall our target value of \(0.458\overline{3}\). This answer has a \(-1.96\%\) error.
\[ \begin{array}{r c l} \text{error} &=& \frac{0.449307 - 0.458333}{0.458333} \times 100 \\\ &=& \frac{-0.009026}{0.458333} \times 100 \\\ &=& -0.019693 \times 100 \\\ &=& -1.9693\% \end{array} \]
Also, the test now completes without error.
Further research and discussion with my mentors revealed just how flawed my original analysis was. In the next step, sampling the spanning trees, adding anything to \(\gamma\) would directly increase the probability that the edge would be sampled. That being said, the original problem that I found was still an issue.
Going back to the notion that we want a graph on which every spanning tree maps to a spanning tree which contains the desired edge, this is still the key idea which lets us use Kirchhoff’s Tree Matrix Theorem. And contracting the edge will still give a graph in which every spanning tree can be mapped to a corresponding spanning tree which includes \(e\). However, the weights of those spanning trees in \(G \backslash \{e\}\) do not quite map between the two graphs.
Recall that we are dealing with a multiplicative weight function, so the final weight of a tree is the product of all the \(\lambda\)’s on its edges.
\[ c(T) = \prod_{e \in T} \lambda_e \]
The above statement can be expanded into
\[ c(T) = \lambda_1 \times \lambda_2 \times \dots \times \lambda_{|T|} \]
with some arbitrary ordering of the tree’s edges \(1, 2, \dots, |T|\). Because the ordering of the edges is arbitrary and due to the associative property of multiplication, we can assume without loss of generality that the desired edge \(e\) is the last one in the sequence.
Any spanning tree in \(G \backslash \{e\}\) cannot include that last \(\lambda\) because that edge does not exist in the graph. Therefore, in order to recover the weight of a tree in \(G\) from a tree in \(G \backslash \{e\}\), we need to multiply \(\lambda_e\) back into the weight of the contracted tree. So, we can now state that
\[ c(T \in \mathcal{T}: T \ni e) = \lambda_e \prod_{f \in T} \lambda_f\ \forall\ T \in G \backslash \{e\} \]
or that for all trees in \(G \backslash \{e\}\), the cost of the corresponding tree in \(G\) is the product of its edge \(\lambda\)’s times the weight of the desired edge. Now recall that \(q_e(\gamma)\) is
\[ \frac{\sum_{T \ni e} \exp(\gamma(T))}{\sum_{T \in \mathcal{T}} \exp(\gamma(T))} \]
In particular we are dealing with the numerator of the above fraction and using \(\lambda_e = \exp(\gamma_e)\) we can rewrite it as
\[ \sum_{T \ni e} \exp(\gamma(T)) = \sum_{T \ni e} \prod_{f \in T} \lambda_f \]
Since we now know that we are missing the \(\lambda_e\) term, we can add it into the expression.
\[ \sum_{T \ni e} \lambda_e \times \prod_{f \in T, f \not= e} \lambda_f \]
Using the rules of summation, we can pull the \(\lambda_e\) factor out of the summation to get
\[ \lambda_e \times \sum_{T \ni e} \prod_{f \in T, f \not= e} \lambda_f \]
And since applying Kirchhoff’s Theorem to \(G \backslash \{e\}\) yields everything except the factor of \(\lambda_e\), we can just multiply it back in manually.
This would let the pseudo code for q
become
def q
    Input: e, the edge of interest
    # Create the Laplacian matrices
    write lambda = exp(gamma) into the edges of G
    G_laplace = laplacian(G, lambda)
    G_e = nx.contracted_edge(G, e, self_loops=False)
    G_e_laplace = laplacian(G_e, lambda)
    # Delete a row and column from each matrix to make the first cofactor matrix
    G_laplace.delete((0, 0))
    G_e_laplace.delete((0, 0))
    # Calculate the determinants of the cofactor matrices
    det_G_laplace = G_laplace.det
    det_G_e_laplace = G_e_laplace.det
    # Return q_e, multiplying lambda_e back in as derived above
    return lambda_e * det_G_e_laplace / det_G_laplace
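As a runnable check of this formula, here is the \(q_{(0, 1)}\) computation for the example graph with \(\gamma = \vec{0}\), so every \(\lambda_e = \exp(0) = 1\); the cofactor helper is a throwaway of mine, not the real q:

```python
import networkx as nx
import numpy as np

def first_cofactor_det(H):
    laplacian = nx.laplacian_matrix(H).toarray()
    return round(np.linalg.det(laplacian[1:, 1:]))

# The same 6-node graph used by the brute-force script above
edges = [(0, 1), (0, 2), (0, 5), (1, 2), (1, 4), (2, 3), (3, 4), (3, 5), (4, 5)]
G = nx.MultiGraph(edges)

lambda_e = 1  # exp(gamma_e) with gamma_e = 0
G_e = nx.contracted_edge(G, (0, 1), self_loops=False)
q_e = lambda_e * first_cofactor_det(G_e) / first_cofactor_det(G)
print(q_e)  # → 0.5333..., i.e. 40 / 75
```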
Making this small change to q
worked very well.
I was able to change back to subtracting \(\delta\), as the Asadpour paper does, and even added a check to the code so that every time we update a value in \(\gamma\) we know that \(\delta\) has had the correct effect.
# Check that delta had the desired effect
new_q_e = q(e)
desired_q_e = (1 + EPSILON / 2) * z_e
if round(new_q_e, 8) != round(desired_q_e, 8):
    raise Exception
And the test passes without fail!
I technically do not know if this distribution is correct until I can start to sample from it. I have turned the check I have been working with into a proper test, but since my oracle is the program itself, the only way it can fail is if I change the function’s behavior without knowing it.
So I must press onwards to write sample_spanning_tree
and get a better test for both of those functions.
As for the tests of spanning_tree_distribution
, I would of course like to add more test cases.
However, if the Held Karp relaxation returns a cycle as an answer, then there will be \(n - 1\) path spanning trees and little point in creating this distribution in the first place, as we have already found a solution to the ATSP.
I really need more truly fractional Held Karp solutions to expand the test of these next two functions.
A. Asadpour, M. X. Goemans, A. Madry, S. Oveis Gharan, and A. Saberi, An O(log n / log log n)-approximation algorithm for the asymmetric traveling salesman problem, Operations Research, 65 (2017), pp. 1043-1061.
Finally moving on from the Held Karp relaxation, we arrive at the second step of the Asadpour asymmetric traveling salesman problem algorithm. Referencing Algorithm 1 from the Asadpour paper, we are now on step two.
Algorithm 1 An \(O(\log n / \log \log n)\)-approximation algorithm for the ATSP
Input: A set \(V\) consisting of \(n\) points and a cost function \(c\ :\ V \times V \rightarrow \mathbb{R}^+\) satisfying the triangle inequality.
Output: \(O(\log n / \log \log n)\)-approximation of the asymmetric traveling salesman problem instance described by \(V\) and \(c\).
Solve the Held-Karp LP relaxation of the ATSP instance to get an optimum extreme point solution \(x^*\). Define \(z^*\) as in (5), making it a symmetrized and scaled down version of \(x^*\). Vector \(z^*\) can be viewed as a point in the spanning tree polytope of the undirected graph on the support of \(x^*\) that one obtains after disregarding the directions of arcs (See Section 3.)
Let \(E\) be the support graph of \(z^*\) when the direction of the arcs are disregarded. Find weights \({\tilde{\gamma}}_{e \in E}\) such that the exponential distribution on the spanning trees, \(\tilde{p}(T) \propto \exp(\sum_{e \in T} \tilde{\gamma}_e)\) (approximately) preserves the marginals imposed by \(z^*\), i.e. for any edge \(e \in E\), \[\sum_{T \in \mathcal{T} : T \ni e} \tilde{p}(T) \leq (1 + \epsilon) z^*_e\] for a small enough value of \(\epsilon\). (In this paper we show that \(\epsilon = 0.2\) suffices for our purpose. See Section 7 and 8 for a description of how to compute such a distribution.)
Sample \(2\lceil \log n \rceil\) spanning trees \(T_1, \dots, T_{2\lceil \log n \rceil}\) from \(\tilde{p}(.)\). For each of these trees, orient all its edges so as to minimize its cost with respect to our (asymmetric) cost function \(c\). Let \(T^*\) be the tree whose resulting cost is minimal among all of the sampled trees.
Find a minimum cost integral circulation that contains the oriented tree \(\vec{T}^*\). Shortcut this circulation to a tour and output it. (See Section 4.)
Sections 7 and 8 provide two different methods to find the desired probability distribution, with section 7 using a combinatorial approach and section 8 the ellipsoid method. Considering that there is no ellipsoid solver in the scientific python ecosystem, and my mentors and I have already decided not to implement one within this project, I will be using the method in section 7.
The algorithm given in section 7 is as follows:
- Set \(\gamma = \vec{0}\).
- While there exists an edge \(e\) with \(q_e(\gamma) > (1 + \epsilon) z_e\):
- Compute \(\delta\) such that if we define \(\gamma'\) as \(\gamma_e' = \gamma_e - \delta\), and \(\gamma_f' = \gamma_f\) for all \(f \in E \setminus \{e\}\), then \(q_e(\gamma') = (1 + \epsilon/2)z_e\).
- Set \(\gamma \leftarrow \gamma'\).
- Output \(\tilde{\gamma} := \gamma\).
This structure is fairly straightforward, but we need to know what \(q_e(\gamma)\) is and how to calculate \(\delta\).
Finding \(\delta\) is very easy; the formula is given in the Asadpour paper. (Although I did not realize this at the time that I wrote my GSoC proposal and re-derived the equation for delta, fortunately my formula matches the one in the paper.)
\[ \delta = \ln \frac{q_e(\gamma)(1 - (1 + \epsilon / 2)z_e)}{(1 - q_e(\gamma))(1 + \epsilon / 2) z_e} \]
Notice that the formula for \(\delta\) is reliant on \(q_e(\gamma)\). The paper defines \(q_e(\gamma)\) as
\[ q_e(\gamma) = \frac{\sum_{T \ni e} \exp(\gamma(T))}{\sum_{T \in \mathcal{T}} \exp(\gamma(T))} \]
where \(\gamma(T) = \sum_{f \in T} \gamma_f\).
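In code, the update step for \(\delta\) is a one-liner once \(q_e\) is known. A minimal sketch (the function name and signature are mine, not from the paper or the implementation; \(\epsilon = 0.2\) is the value the paper says suffices):

```python
import math


def delta(q_e, z_e, epsilon=0.2):
    """Step size so that after gamma_e -= delta, q_e drops to (1 + epsilon/2) * z_e."""
    target = (1 + epsilon / 2) * z_e
    return math.log(q_e * (1 - target) / ((1 - q_e) * target))


# A positive delta shrinks gamma_e, pulling q_e down toward the target
print(delta(0.30, 0.20) > 0)  # True
```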
The first thing that I noticed is that in the denominator the summation is over all spanning trees in the graph, which for the complete graphs we will be working with is exponential in number, so a brute force approach here is useless. Fortunately, Asadpour and team realized we can use Kirchhoff’s matrix tree theorem to our advantage.
As an aside about Kirchhoff’s matrix tree theorem, I was not familiar with this theorem before this project so I had to do a bit of reading about it. Basically, if you take the Laplacian matrix of a graph (the degree matrix minus the adjacency matrix), the absolute value of any cofactor is the number of spanning trees in the graph. This was something completely unexpected to me, and I think that it is very cool that this type of connection exists.
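The theorem is easy to check numerically. For \(\mathcal{K}_4\), Cayley's formula says there are \(n^{n-2} = 4^2 = 16\) spanning trees, and a Laplacian cofactor agrees (this snippet is just an illustration, not part of the implementation):

```python
import networkx as nx
import numpy as np

# Laplacian of K4: degree matrix minus adjacency matrix
G = nx.complete_graph(4)
L = nx.laplacian_matrix(G).toarray()

# Any cofactor of L counts the spanning trees; delete row 0 and column 0
cofactor = np.delete(np.delete(L, 0, axis=0), 0, axis=1)
print(round(np.linalg.det(cofactor)))  # 16
```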
The details of using Kirchhoff’s theorem are given in section 5.3. We will be using a weighted Laplacian \(L\) defined by
\[ L_{i, j} = \left\{ \begin{array}{l l} -\lambda_e & e = (i, j) \in E \\\ \sum_{e \in \delta({i})} \lambda_e & i = j \\\ 0 & \text{otherwise} \end{array} \right. \]
where \(\lambda_e = \exp(\gamma_e)\).
Now, we know that applying Kirchhoff’s theorem to \(L\) will return
\[ \sum_{T \in \mathcal{T}} \prod_{e \in T} \lambda_e \]
but which part of \(q_e(\gamma)\) is that?
If we apply \(\lambda_e = \exp(\gamma_e)\), we find that
\[ \begin{array}{r c l} \sum_{T \in \mathcal{T}} \prod_{e \in T} \lambda_e &=& \sum_{T \in \mathcal{T}} \prod_{e \in T} \exp(\gamma_e) \\\ &=& \sum_{T \in \mathcal{T}} \exp\left(\sum_{e \in T} \gamma_e\right) \\\ &=& \sum_{T \in \mathcal{T}} \exp(\gamma(T)) \\\ \end{array} \]
Moving from the first row to the second row is the confusing step, but essentially we are exploiting the properties of exponents. Recall that \(\exp(x) = e^x\), so we could have written the product as \(\prod_{e \in T} e^{\gamma_e}\), although this introduces ambiguity since \(e\) would mean both an edge and the base of the natural logarithm. Now, for the edges \(e_1, e_2, \dots, e_{n-1}\) in the spanning tree \(T\), that product can be expanded as
\[ \prod_{e \in T} e^{\gamma_e} = e^{\gamma_{e_1}} \times e^{\gamma_{e_2}} \times \dots \times e^{\gamma_{e_{n-1}}} \]
Each exponential factor has the same base, so we can collapse that into
\[ e^{\gamma_{e_1} + \gamma_{e_2} + \dots + \gamma_{e_{n-1}}} \]
which is also
\[ e^{\sum_{e \in T} \gamma_e} \]
but we know that \(\sum_{e \in T} \gamma_e\) is \(\gamma(T)\), so it becomes
\[ e^{\gamma(T)} = \exp(\gamma(T)) \]
Once we put that back into the summation we arrive at the denominator in \(q_e(\gamma)\), \(\sum_{T \in \mathcal{T}} \exp(\gamma(T))\).
Next, we need to find the numerator of \(q_e(\gamma)\). Just as before, a brute force approach would be exponential in complexity, so we have to find a better way. The only difference between the numerator and denominator is the condition on the outer summation: \(T \in \mathcal{T}\) becomes \(T \ni e\), that is, the sum runs over every tree containing edge \(e\).
There is a way to use Kirchhoff’s matrix tree theorem here as well, if we can build a graph whose spanning trees map one-to-one onto the spanning trees of the original graph which contain the desired edge \(e\). In order for a spanning tree to contain edge \(e\), we know that the endpoints of \(e\), \((u, v)\), will be directly connected to each other. So we are then interested in every spanning tree in which we reach vertex \(u\) and then leave from vertex \(v\) (as opposed to the spanning trees where we reach vertex \(u\) and then leave from that same vertex). In a sense, we are treating vertices \(u\) and \(v\) as the same vertex. We can apply this literally by contracting \(e\) from the graph, creating \(G / \{e\}\). Every spanning tree in \(G / \{e\}\) can be uniquely mapped onto a spanning tree in \(G\) which contains the edge \(e\).
From here, the logic to show that a cofactor of the weighted Laplacian of \(G / \{e\}\) gives the numerator of \(q_e(\gamma)\) parallels the logic for the denominator, with one caveat: a tree containing \(e\) carries the factor \(\lambda_e\), which the contracted graph no longer has an edge for, so the cofactor must be multiplied by \(\lambda_e\).
At this point, we have all of the needed information to create some pseudocode for the next function in the Asadpour method, spanning_tree_distribution(). Here I will use an inner function q() to find \(q_e\).
def spanning_tree_distribution
    input: z, the symmetrized and scaled output of the Held Karp relaxation.
    output: gamma, the maximum entropy exponential distribution for sampling
            spanning trees from the graph.

    def q
        input: e, the edge of interest
        # Create the laplacian matrices
        write lambda = exp(gamma) into the edges of G
        G_laplace = laplacian(G, lambda)
        G_e = nx.contracted_edge(G, e)
        G_e_laplace = laplacian(G_e, lambda)
        # Delete a row and column from each matrix to make a cofactor matrix
        G_laplace.delete((0, 0))
        G_e_laplace.delete((0, 0))
        # Calculate the determinant of the cofactor matrices
        det_G_laplace = G_laplace.det
        det_G_e_laplace = G_e_laplace.det
        # Return q_e, restoring the weight of the contracted edge which the
        # cofactor of G / {e} does not include
        return lambda[e] * det_G_e_laplace / det_G_laplace

    # initialize the gamma vector
    gamma = 0 vector of length G.size

    while true
        # We will iterate over the edges in z until we complete the
        # for loop without changing a value in gamma. This will mean
        # that there is no edge with q_e > 1.2 * z_e
        valid_count = 0
        # Search for an edge with q_e > 1.2 * z_e
        for e in z
            q_e = q(e)
            z_e = z[e]
            if q_e > 1.2 * z_e
                delta = ln((q_e * (1 - 1.1 * z_e)) / ((1 - q_e) * 1.1 * z_e))
                gamma[e] -= delta
            else
                valid_count += 1
        if valid_count == number of edges in z
            break

    return gamma
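To make the inner function concrete, here is a rough Python sketch of q using NetworkX and NumPy. This is my own illustration, not the code going into NetworkX: it uses a MultiGraph so that contracting an edge keeps parallel edges and their weights, and it multiplies by \(\lambda_e\) to restore the factor for the contracted edge.

```python
import math

import networkx as nx
import numpy as np


def tree_weight_sum(G, weight="lambda"):
    # Kirchhoff: a cofactor of the weighted Laplacian equals the sum over
    # all spanning trees of the product of their edge weights
    L = nx.laplacian_matrix(G, weight=weight).toarray()
    return float(np.linalg.det(np.delete(np.delete(L, 0, axis=0), 0, axis=1)))


def q(G, e, gamma):
    # Write lambda = exp(gamma) into the edges of G
    for u, v, d in G.edges(data=True):
        d["lambda"] = math.exp(gamma[frozenset((u, v))])
    denominator = tree_weight_sum(G)
    # Contract e; a MultiGraph keeps parallel edges so no weight is lost
    G_e = nx.contracted_edge(nx.MultiGraph(G), e, self_loops=False)
    # Trees of G containing e map one-to-one onto trees of G / {e}, but each
    # also carries the factor lambda_e that the contraction drops
    numerator = math.exp(gamma[frozenset(e)]) * tree_weight_sum(G_e)
    return numerator / denominator


# Triangle with gamma = 0: 3 spanning trees, 2 of which contain any given edge
G = nx.cycle_graph(3)
gamma = {frozenset(e): 0.0 for e in G.edges()}
print(round(q(G, (0, 1), gamma), 4))  # 0.6667
```

On a triangle with \(\gamma = \vec{0}\) every edge should have \(q_e = 2/3\), since two of the three spanning trees contain any given edge, which is what the sketch reports.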
The clear next step is to implement the function spanning_tree_distribution using the pseudocode above as an outline. I will start by writing q and testing it with the same graphs which I am using to test the Held Karp relaxation. Once q is complete, the rest of the function seems fairly straightforward.
One thing that I am concerned about is my ability to test spanning_tree_distribution. There are no examples given in the Asadpour research paper and no other easy resources which I could turn to in order to find an oracle. The only method that I can think of right now would be to complete this function, then complete sample_spanning_tree.
Once both functions are complete, I can sample a large number of spanning trees to find an experimental probability for each tree, then run a statistical test (such as an h-test) to see if the probability of each tree is near its expected value under the desired distribution, \(\tilde{p}(T) \propto \exp(\gamma(T))\).
An alternative test would be to use the marginals of the distribution and manually check that
\[ \sum_{T \in \mathcal{T} : T \ni e} p(T) \leq (1 + \epsilon) z^*_e,\ \forall\ e \in E \]
where \(p(T)\) is the experimental data from the sampled trees.
Both methods seem very computationally intensive, and because they sample from a probability distribution they may fail randomly due to an unlikely sample.
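The marginal check itself is easy to script once sampled trees are available. A sketch, under my own assumptions that each sampled tree is represented as a set of edges and z is a dict of edge marginals:

```python
from collections import Counter


def check_marginals(sampled_trees, z, epsilon=0.2):
    # Experimental marginal of edge e: fraction of sampled trees containing e
    counts = Counter(e for T in sampled_trees for e in T)
    n = len(sampled_trees)
    return all(counts[e] / n <= (1 + epsilon) * z_e for e, z_e in z.items())


trees = [{("a", "b")}, {("a", "b")}, {("b", "c")}]
print(check_marginals(trees, {("a", "b"): 0.7, ("b", "c"): 0.4}))  # True
```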
A. Asadpour, M. X. Goemans, A. Madry, S. Oveis Gharan, and A. Saberi, An O(log n / log log n)-approximation algorithm for the asymmetric traveling salesman problem, Operations Research, 65 (2017), pp. 1043-1061.
]]>This should be my final post about the Held-Karp relaxation! Since my last post titled Implementing The Held Karp Relaxation, I have been testing both the ascent method as well as the branch and bound method.
My first test was to use a truly asymmetric graph rather than a directed graph where the cost in each direction happened to be the same.
In order to create such a test, I needed to know the solution to any such proposed graphs.
I wrote a Python script called brute_force_optimal_tour.py which will generate a random graph, print its adjacency matrix, and then check every possible combination of edges to find the optimal tour.
import networkx as nx
from itertools import combinations
import numpy as np
import math
import random


def is_1_arborescence(G):
    """
    Returns true if `G` is a 1-arborescence
    """
    return (
        G.number_of_edges() == G.order()
        and max(d for n, d in G.in_degree()) <= 1
        and nx.is_weakly_connected(G)
    )


# Generate a random adjacency matrix
size = (7, 7)
G_array = np.empty(size, dtype=int)
random.seed()
for r in range(size[0]):
    for c in range(size[1]):
        if r == c:
            G_array[r][c] = 0
            continue
        G_array[r][c] = random.randint(1, 100)

# Print that adjacency matrix
print(G_array)

G = nx.from_numpy_array(G_array, create_using=nx.DiGraph)
num_nodes = G.order()
combo_count = 0
min_weight_tour = None
min_tour_weight = math.inf
test_combo = nx.DiGraph()
for combo in combinations(G.edges(data="weight"), G.order()):
    combo_count += 1
    test_combo.clear()
    test_combo.add_weighted_edges_from(combo)
    # Test to see if test_combo is a tour.
    # This means first that it is a 1-arborescence
    if not is_1_arborescence(test_combo):
        continue
    # It also means that every vertex has a degree of 2
    arborescence_weight = test_combo.size("weight")
    if (
        len([n for n, deg in test_combo.degree if deg == 2]) == num_nodes
        and arborescence_weight < min_tour_weight
    ):
        # Tour found
        min_weight_tour = test_combo.copy()
        min_tour_weight = arborescence_weight

print(
    f"Minimum tour found with weight {min_tour_weight} from {combo_count} combinations of edges\n"
)
for u, v, d in min_weight_tour.edges(data="weight"):
    print(f"({u}, {v}, {d})")
This is useful information because, even though the ascent method returns a vector, if the ascent method returns this solution (a.k.a. \(f(\pi) = 0\)) we can calculate that vector from the edges in the solution without having to explicitly enumerate the dict returned by held_karp_ascent().
The first output from the program was a six vertex graph and is presented below.
~ time python3 brute_force_optimal_tour.py
[[ 0 45 39 92 29 31]
[72 0 4 12 21 60]
[81 6 0 98 70 53]
[49 71 59 0 98 94]
[74 95 24 43 0 47]
[56 43 3 65 22 0]]
Minimum tour found with weight 144.0 from 593775 combinations of edges
(0, 5, 31)
(5, 4, 22)
(1, 3, 12)
(3, 0, 49)
(2, 1, 6)
(4, 2, 24)
real 0m9.596s
user 0m9.689s
sys 0m0.241s
First I checked that the ascent method was returning a solution with the same weight, 144, which it was.
Also, every entry in the vector was \(0.8\overline{3}\), which is \(\frac{5}{6}\), the scaling factor from the Asadpour paper, so I know that it was finding the exact solution.
Because of this, my test in test_traveling_salesman.py checks that for all edges in the solution edge set both \((u, v)\) and \((v, u)\) are equal to \(\frac{5}{6}\).
For my next test, I created a \(7 \times 7\) matrix to test with, and as expected the running time of the python script was much slower.
~ time python3 brute_force_optimal_tour.py
[[ 0 26 63 59 69 31 41]
[62 0 91 53 75 87 47]
[47 82 0 90 15 9 18]
[68 19 5 0 58 34 93]
[11 58 53 55 0 61 79]
[88 75 13 76 98 0 40]
[41 61 55 88 46 45 0]]
Minimum tour found with weight 190.0 from 26978328 combinations of edges
(0, 1, 26)
(1, 3, 53)
(3, 2, 5)
(2, 5, 9)
(5, 6, 40)
(4, 0, 11)
(6, 4, 46)
real 7m28.979s
user 7m29.048s
sys 0m0.245s
Once again, the value of \(f(\pi)\) hit 0, so the ascent method returned an exact solution and my testing procedure was the same as for the six vertex graph.
The branch and bound method was not working well with the two example graphs I generated. First, on the seven vertex matrix, I programmed the test and let it run… and run… and run… until I stopped it at just over an hour of execution time. If it took one eighth of that time to brute force the solution, then the branch and bound method truly is not efficient.
I moved to the six vertex graph with high hopes: I already had a six vertex graph which was correctly executing in a reasonable amount of time. Yet the six vertex graph created a large number of exceptions and errors when I ran the tests. I was able to determine why the errors were being generated, but the context did not conform with my expectations for the branch and bound method.
Basically, direction_of_ascent_kilter() was finding a vertex which was out-of-kilter and returning the corresponding direction of ascent, but find_epsilon() was not finding any valid crossover edges and was returning a maximum direction of travel of \(\infty\). While I could change the default return value of find_epsilon() to zero, that would not solve the problem because the value of the vector \(\pi\) would get stuck and the program would enter an infinite loop.
I do have an analogy for this situation. Imagine that you are in an unfamiliar city and you have to meet somebody at the tallest building in that city. However, you don’t know the address and have no way to get a GPS route to that building. Instead of wandering around aimlessly, you decide to scan the skyline for the tallest building you can see and start walking down the street which comes closest to matching that direction. Additionally, you have the ability to tell, for any given direction, how far down the chosen street to go before you need to re-evaluate and pick a new street.
This hypothetical is a better approximation of the ascent method, but the problem can be demonstrated nonetheless.
After this procedure works for a while, you suddenly find yourself in an unusual situation. You can still see the tallest building, so you know you are not there yet. You know what street will take you closer to the building, but for some reason you cannot move down that street.
From my understanding of the ascent and branch and bound methods, if the direction of ascent exists, then we have to be able to move some amount in that direction without fail, but the branch and bound method was failing to provide an adequate distance to move.
Considering the trouble with the branch and bound method, and that it is not going to be used in the final Asadpour algorithm, I plan on removing it from the NetworkX pull request and moving onwards using only the ascent method for the rest of the project.
A. Asadpour, M. X. Goemans, A. Madry, S. Oveis Gharan, and A. Saberi, An O(log n / log log n)-approximation algorithm for the asymmetric traveling salesman problem, Operations Research, 65 (2017), pp. 1043-1061.
M. Held, R. M. Karp, The traveling-salesman problem and minimum spanning trees. Operations research, 1970-11-01, Vol.18 (6), p.1138-1162. https://www.jstor.org/stable/169411
]]>I have now completed my implementation of the ascent and the branch and bound methods detailed in the 1970 paper The Traveling-Salesman Problem and Minimum Spanning Trees by Michael Held and Richard M. Karp.
In my last post, titled Understanding the Ascent Method, I completed the first iteration of the ascent method, found an important bug in the find_epsilon() method, and found a more efficient way to determine substitutes in the graph.
However the solution being given was still not the optimal solution.
After discussing my options with my GSoC mentors, I decided to move onto the branch and bound method anyways with the hope that because the method is more human-computable and an example was given in the paper by Held and Karp that I would be able to find the remaining flaws. Fortunately, this was indeed the case and I was able to correctly implement the branch and bound method and fix the last problem with the ascent method.
The branch and bound method follows from the ascent method, but tweaks how we determine the direction of ascent and simplifies the expression used for \(\epsilon\). As a reminder, we use the notion of an out-of-kilter vertex to find directions of ascent which are unit vectors or negative unit vectors. An out-of-kilter vertex is a vertex which is consistently not connected enough or connected too much in the set of minimum 1-arborescences of a graph. The formal definition is given on page 1151 as
Vertex \(i\) is said to be out-of-kilter high at the point \(\pi\), if, for all \(k \in K(\pi), v_{ik} \geqq 1\); similarly, vertex \(i\) is out-of-kilter low at the point \(\pi\) if, for all \(k \in K(\pi), v_{ik} = -1\).
Where \(v_{ik}\) is the degree of the vertex minus two.
First, I created a function called direction_of_ascent_kilter() which returns a direction of ascent based on whether a vertex is out-of-kilter. However, I did not use the method mentioned in the paper by Held and Karp, which is to find a member of \(K(\pi, u_i)\), where \(u_i\) is the unit vector with 1 in the \(i\)th location, and check if vertex \(i\) has a degree of 1 or more than two. Instead, I knew that I could find the elements of \(K(\pi)\) with existing code, so I decided to check the value of \(v_{ik}\) for all \(k \in K(\pi)\) and, once it is determined that a vertex is out-of-kilter, simply move on to the next vertex. Once I have a mapping of all vertices to their kilter state, I find one which is out-of-kilter and return the corresponding direction of ascent.
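That scan can be sketched in a few lines. This is a simplification with my own names (min_1_arborescences stands in for the set \(K(\pi)\) produced by existing code, and the direction is returned as a dict keyed by vertex), not the actual NetworkX code:

```python
import networkx as nx


def direction_of_ascent_kilter(G, min_1_arborescences):
    """Return a unit direction of ascent from the first out-of-kilter vertex."""
    for n in G:
        # v_ik is the degree of vertex i in 1-arborescence k, minus two
        v = {k.degree(n) - 2 for k in min_1_arborescences}
        if min(v) >= 1:  # out-of-kilter high in every minimum 1-arborescence
            return {m: (1 if m == n else 0) for m in G}
        if v == {-1}:  # out-of-kilter low in every minimum 1-arborescence
            return {m: (-1 if m == n else 0) for m in G}
    return None  # no out-of-kilter vertex gives a unit direction


# A tour has every (in + out) degree equal to two, so nothing is out-of-kilter
tour = nx.DiGraph([(0, 1), (1, 2), (2, 0)])
print(direction_of_ascent_kilter(tour, [tour]))  # None
```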
The changes to find_epsilon() were very minor, basically removing the denominator from the formula for \(\epsilon\) and adding a check to see if we have a negative direction of ascent so that the crossover distances become positive and thus valid.
The brand new function which was needed was branch(), which, well… branches according to the Held and Karp paper. The first thing it does is run the linear program from the ascent method to determine if a direction of ascent exists. If the direction does exist, branch. If not, search the set of minimum 1-arborescences for a tour and then branch if one does not exist. The branch process itself is rather simple: find the first open edge (an edge not in the partition sets \(X\) and \(Y\)) and then create two new configurations where that edge is either included or excluded respectively.
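The branching step itself fits in a few lines. A sketch using my own tuple layout for the \((X, Y, \pi, \textrm{bound})\) configurations, not the code in the pull request:

```python
def branch(config, edges):
    """Split a configuration on the first open edge."""
    included, excluded, pi, bound = config
    # An open edge is one that is in neither partition set X nor Y
    open_edge = next(e for e in edges if e not in included and e not in excluded)
    return (
        (included | {open_edge}, excluded, pi, bound),  # include the edge
        (included, excluded | {open_edge}, pi, bound),  # exclude the edge
    )


children = branch((frozenset(), frozenset(), (0,) * 6, 196), [(0, 1), (0, 2)])
print(children[0][0], children[1][1])  # frozenset({(0, 1)}) frozenset({(0, 1)})
```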
Finally the overall structure of the algorithm, written in pseudocode is
Initialize pi to be the zero vector.
Add the configuration (∅, ∅, pi, w(0)) to the configuration priority queue.
while configuration_queue is not empty:
    config = configuration_queue.get()
    dir_ascent = direction_of_ascent_kilter()
    if dir_ascent is None:
        branch()
        if solution returned by branch is not None:
            return solution
    else:
        max_dist = find_epsilon()
        update pi
        update edge weights
        update config pi and bound value
My initial implementation of the branch and bound method returned the same, incorrect solution as the ascent method, but with different edge weights. As a reminder, I wanted a solution which looked like this:
and I now had two algorithms returning this solution:
As I mentioned before, the branch and bound method is more human-computable than the ascent method, so I decided to follow the execution of my implementation with the one given in [1]. Below, the left side is the data from the Held and Karp paper and on the right my program’s execution on the directed version.
| Undirected Graph | Directed Graph |
|---|---|
| Iteration 1: | |
| Starting configuration: \((\emptyset, \emptyset, \begin{bmatrix} 0 & 0 & 0 & 0 & 0 & 0 \end{bmatrix}, 196)\) | Starting configuration: \((\emptyset, \emptyset, \begin{bmatrix} 0 & 0 & 0 & 0 & 0 & 0 \end{bmatrix}, 196)\) |
| Minimum 1-Trees: | Minimum 1-Arborescences: |
| Vertex 3 out-of-kilter LOW | Vertex 3 out-of-kilter LOW |
| \(d = \begin{bmatrix} 0 & 0 & 0 & -1 & 0 & 0 \end{bmatrix}\) | \(d = \begin{bmatrix} 0 & 0 & 0 & -1 & 0 & 0 \end{bmatrix}\) |
| \(\epsilon(\pi, d) = 5\) | \(\epsilon(\pi, d) = 5\) |
| New configuration: \((\emptyset, \emptyset, \begin{bmatrix} 0 & 0 & 0 & -5 & 0 & 0 \end{bmatrix}, 201)\) | New configuration: \((\emptyset, \emptyset, \begin{bmatrix} 0 & 0 & 0 & -5 & 0 & 0 \end{bmatrix}, 212)\) |
| Iteration 2: | |
| Minimum 1-Trees: | Minimum 1-Arborescences: |
In order to get these results, I forbade the program from connecting vertex 0 to the same other vertex for both the incoming and outgoing edge. However, it is very clear from the start that iteration two was not going to be the same.
I noticed that in the first iteration there were twice as many 1-arborescences as 1-trees, and that the difference was that the cycle can be traversed in both directions. This creates a mapping between 1-trees and 1-arborescences. In the second iteration, there are not twice as many 1-arborescences and that mapping is not present. Vertex 0 always connects to vertex 3 in the arborescences and to vertex 5 in the trees. Additionally, the costs of the 1-arborescences are higher than the costs of the 1-trees.
From working on the ascent method, I knew that the choice of root node in the arborescences affects the total price. I now wondered if a minimum 1-arborescence could come from a non-minimum spanning arborescence. As it turns out, the answer is yes.
In order to test this hypothesis, I created a simple Python script using a modified version of k_pi(). The entire thing is longer than I’d like to put here, but the gist was simple: iterate over all of the spanning arborescences in the graph, tracking the minimum weight, and then print the minimum 1-arborescences that this program finds to compare to the ones that the unaltered one finds.
The output is below:
Adding arborescence with weight 212.0
Adding arborescence with weight 212.0
Adding arborescence with weight 212.0
Adding arborescence with weight 204.0
Adding arborescence with weight 204.0
Adding arborescence with weight 196.0
Adding arborescence with weight 196.0
Adding arborescence with weight 196.0
Adding arborescence with weight 196.0
Adding arborescence with weight 196.0
Adding arborescence with weight 196.0
Found 6 minimum 1-arborescences
(1, 5, 30)
(2, 1, 41)
(2, 3, 21)
(4, 2, 35)
(5, 0, 52)
(0, 4, 17)
(1, 2, 41)
(2, 3, 21)
(2, 4, 35)
(4, 0, 17)
(5, 1, 30)
(0, 5, 52)
(2, 3, 21)
(2, 4, 35)
(4, 0, 17)
(5, 1, 30)
(5, 2, 41)
(0, 5, 52)
(2, 4, 35)
(3, 2, 16)
(4, 0, 17)
(5, 1, 30)
(5, 3, 46)
(0, 5, 52)
(2, 3, 21)
(3, 5, 41)
(4, 2, 35)
(5, 1, 30)
(5, 0, 52)
(0, 4, 17)
(2, 3, 21)
(2, 5, 41)
(4, 2, 35)
(5, 1, 30)
(5, 0, 52)
(0, 4, 17)
This was very enlightening. The 1-arborescences of weight 212 were the ones that my branch and bound method was using in the second iteration, but not the true minimum ones. Graphically, those six 1-arborescences look like this:
And suddenly that mapping between the 1-trees and 1-arborescences is back! But why can minimum 1-arborescences come from non-minimum spanning arborescences? Remember that we create 1-arborescences by finding spanning arborescences on the vertex set \(\{2, 3, \dots, n\}\) and then connecting the missing vertex to the root of the spanning arborescence and choosing its minimum weight incoming edge.
This means that even among the true minimum spanning arborescences, the final weight of the 1-arborescence can vary based on the cost of connecting ‘vertex 1’ to the root of the arborescence. I already had to deal with this issue earlier in the implementation of the ascent method. Now suppose that not every vertex in the graph is a root of an arborescence in the set of minimum spanning arborescences. Let the minimum root be the root vertex of the arborescence which is the cheapest to connect to and the maximum root the root vertex which is the most expensive to connect to. If we needed to, we could order the roots from minimum to maximum based on the weight of the edge from ‘vertex 1’ to that root.
Finally, suppose that considering only the set of minimum spanning arborescences results in a set of minimum 1-arborescences which do not use the minimum root and have a total cost \(c\) more than the cost of the minimum spanning arborescence plus the cost of connecting to the minimum root.
Continue to consider spanning arborescences in increasing weight, such as the ones returned by the ArborescenceIterator. Eventually the ArborescenceIterator will return a spanning arborescence which has the minimum root. If the cost of the minimum spanning arborescence is \(c_{min}\) and the cost of this arborescence is less than \(c_{min} + c\), then a new minimum 1-arborescence has been found from a non-minimum spanning arborescence.
It is obviously impractical to consider all of the spanning arborescences in the graph, but because the ArborescenceIterator returns arborescences in order of increasing weight, there is a weight after which it is impossible to produce a minimum 1-arborescence. Let the cost of a minimum spanning arborescence be \(c_{min}\) and the costs of connecting to the roots range from \(r_{min}\) to \(r_{max}\). The worst case cost of the minimum 1-arborescence is \(c_{min} + r_{max}\), which would connect the minimum spanning arborescence to the most expensive root, and the best case minimum 1-arborescence would be \(c_{min} + r_{min}\). With regard to the weight of the spanning arborescence itself, once it exceeds \(c_{min} + r_{max} - r_{min}\) we know that, even if it uses the minimum root, the total weight will be greater than the worst case minimum 1-arborescence, so that is the bound we use with the ArborescenceIterator.
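To make the cutoff concrete with made-up numbers (my own, not from the paper): suppose \(c_{min} = 100\), \(r_{min} = 10\) and \(r_{max} = 25\). The minimum 1-arborescence then costs at most \(c_{min} + r_{max} = 125\). Any spanning arborescence with weight \(w\) greater than
\[ c_{min} + r_{max} - r_{min} = 100 + 25 - 10 = 115 \]
satisfies \(w + r_{min} > 125\), so even connecting it to the cheapest root cannot produce a new minimum 1-arborescence, and the iterator can stop there.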
After implementing this boundary for checking spanning arborescences to find minimum 1-arborescences, both methods executed successfully on the test graph.
Now that both the ascent and branch and bound methods are working, they must be tested both for accuracy and performance. Surprisingly, on the test graph I have been using, which is originally from the Held and Karp paper, the ascent method is between 2 and 3 times faster than the branch and bound method. However, this six vertex graph is small and the branch and bound method may yet have better performance on larger graphs. I will have to create larger test graphs and then select whichever method has better performance overall.
Additionally, this is an example where \(f(\pi)\), the gap between a tour and 1-arborescence, converges to 0. This is not always the case, so I will need to test on an example where the minimum gap is greater than 0.
Finally, the output of my Held Karp relaxation program is a tour. This is just one part of the Asadpour asymmetric traveling salesperson problem and that algorithm takes a modified vector which is produced based on the final result of the relaxation. I still need to convert the output to match the expectation of the overall algorithm I am seeking to implement this summer of code.
I hope to move onto the next step of the Asadpour algorithm on either June 30th or July 1st.
[1] Held, M., Karp, R.M. The traveling-salesman problem and minimum spanning trees. Operations research, 1970-11-01, Vol.18 (6), p.1138-1162. https://www.jstor.org/stable/169411
]]>It has been far longer than I would have preferred since I wrote a blog post. As I expected in my original GSoC proposal, the Held-Karp relaxation is proving to be quite difficult to implement.
My mentors and I agreed to implement the branch and bound method discussed in Held and Karp’s 1970 paper The Traveling-Salesman Problem and Minimum Spanning Trees, which first required the implementation of the ascent method because it is used in the branch and bound method. For the last week and a half I have been implementing and debugging the ascent method and wanted to take some time to reflect on what I have learned.
I will start by saying that as of the writing of this post, my version of the ascent method is not giving what I expect to be the optimal solution. For my testing, I took the graph which Held and Karp use in their example of the branch and bound method, a weighted \(\mathcal{K}_6\), and converted it to a directed but symmetric version given in the following adjacency matrix.
\[ \begin{bmatrix} 0 & 97 & 60 & 73 & 17 & 52 \\\ 97 & 0 & 41 & 52 & 90 & 30 \\\ 60 & 41 & 0 & 21 & 35 & 41 \\\ 73 & 52 & 21 & 0 & 95 & 46 \\\ 17 & 90 & 35 & 95 & 0 & 81 \\\ 52 & 30 & 41 & 46 & 81 & 0 \end{bmatrix} \]
The original solution is an undirected tour but in the directed version, the expected solutions depend on which way they are traversed. Both of these cycles have a total weight of 207.
This is the cycle returned by the program, which has a total weight of 246.
All of this code goes into the function _held_karp() within traveling_salesman.py in NetworkX, and I tried to follow the algorithm outlined in the paper as closely as I could. The _held_karp() function itself has three inner functions, k_pi(), direction_of_ascent() and find_epsilon(), which represent the three main steps used in each iteration of the ascent method.
k_pi()
k_pi() uses the ArborescenceIterator I implemented during the first week of coding for the Summer of Code to find all of the minimum 1-arborescences in the graph.
My original assessment of creating 1-arborescences was slightly incorrect.
I stated that
In order to connect vertex 1, we would choose the outgoing arc with the smallest cost and the incoming arc with the smallest cost.
In reality, this method would produce graphs which are almost arborescences, based solely on the fact that the outgoing arc would almost certainly create a vertex with two incoming arcs. Instead, we need to connect vertex 1 with the incoming edge of lowest cost and with the edge to the root node of the arborescence on nodes \(\{2, 3, \dots, n\}\), so that the in-degree constraint is not violated.
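A sketch of this corrected connection rule, with a helper name and plain DiGraph usage that are my own (the real k_pi() works through the ArborescenceIterator):

```python
import networkx as nx


def attach_vertex_one(G, arb, excluded):
    """Attach the excluded vertex back onto a spanning arborescence of the rest."""
    # Root of the spanning arborescence on the remaining nodes
    root = next(n for n in arb if arb.in_degree(n) == 0)
    one_arb = arb.copy()
    # The excluded vertex receives its cheapest incoming edge
    u = min(G.predecessors(excluded), key=lambda u: G[u][excluded]["weight"])
    one_arb.add_edge(u, excluded, weight=G[u][excluded]["weight"])
    # Its outgoing edge must go to the root so no in-degree exceeds one
    one_arb.add_edge(excluded, root, weight=G[excluded][root]["weight"])
    return one_arb


# Complete digraph on 3 nodes; arborescence on {1, 2} rooted at 1
G = nx.DiGraph()
G.add_weighted_edges_from(
    [(0, 1, 4), (0, 2, 7), (1, 0, 5), (1, 2, 2), (2, 0, 3), (2, 1, 9)]
)
T = attach_vertex_one(G, nx.DiGraph([(1, 2)]), 0)
print(sorted(T.edges()))  # [(0, 1), (1, 2), (2, 0)]
```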
For the test graph on the first iteration of the ascent method, k_pi() returned 10 1-arborescences, but the costs were not all the same.
Notice that because we have no agency in choosing the outgoing edge of vertex 1, the total cost of the 1-arborescence will vary by the difference between the cheapest root to connect to and the most expensive root to connect to.
My original writing of this function was not very efficient: it created 1-arborescences from all of the minimum spanning arborescences and then iterated over them to delete all of the non-minimum ones.
Yesterday I re-wrote this function so that once a 1-arborescence of lower weight was found, it would delete all of the current minimum ones in favor of the new one, and not add any 1-arborescences it found with greater weight to the set of minimum 1-arborescences.
The real reason that I rewrote the method was to try something new in hopes of pushing the program from a suboptimal solution to the optimal one. As I mentioned earlier, the forced choice of connecting to the root node created 1-arborescences of different weights. I suspected that different choices of vertex 1 would be able to create 1-arborescences of even lower weight than arbitrarily using the one returned by next(G.__iter__()).
So I wrapped all of k_pi() in a for loop over the vertices of the graph and found that the choice of vertex 1 made a difference.
Excluded node: 0, Total Weight: 161.0
Chosen incoming edge for node 0: (4, 0), chosen outgoing edge for node 0: (0, 4)
(2, 3, 21)
(2, 5, 41)
(4, 2, 35)
(4, 0, 17)
(5, 1, 30)
(0, 4, 17)
Excluded node: 0, Total Weight: 161.0
Chosen incoming edge for node 0: (4, 0), chosen outgoing edge for node 0: (0, 4)
(1, 5, 30)
(2, 1, 41)
(2, 3, 21)
(4, 2, 35)
(4, 0, 17)
(0, 4, 17)
Excluded node: 1, Total Weight: 174.0
Chosen incoming edge for node 1: (5, 1), chosen outgoing edge for node 1: (1, 5)
(2, 3, 21)
(2, 4, 35)
(4, 0, 17)
(5, 2, 41)
(5, 1, 30)
(1, 5, 30)
Excluded node: 2, Total Weight: 187.0
Chosen incoming edge for node 2: (3, 2), chosen outgoing edge for node 2: (2, 3)
(0, 4, 17)
(3, 5, 46)
(3, 2, 21)
(5, 0, 52)
(5, 1, 30)
(2, 3, 21)
Excluded node: 3, Total Weight: 165.0
Chosen incoming edge for node 3: (2, 3), chosen outgoing edge for node 3: (3, 2)
(1, 5, 30)
(2, 1, 41)
(2, 4, 35)
(2, 3, 21)
(4, 0, 17)
(3, 2, 21)
Excluded node: 3, Total Weight: 165.0
Chosen incoming edge for node 3: (2, 3), chosen outgoing edge for node 3: (3, 2)
(2, 4, 35)
(2, 5, 41)
(2, 3, 21)
(4, 0, 17)
(5, 1, 30)
(3, 2, 21)
Excluded node: 4, Total Weight: 178.0
Chosen incoming edge for node 4: (0, 4), chosen outgoing edge for node 4: (4, 0)
(0, 5, 52)
(0, 4, 17)
(1, 2, 41)
(2, 3, 21)
(5, 1, 30)
(4, 0, 17)
Excluded node: 4, Total Weight: 178.0
Chosen incoming edge for node 4: (0, 4), chosen outgoing edge for node 4: (4, 0)
(0, 5, 52)
(0, 4, 17)
(2, 3, 21)
(5, 1, 30)
(5, 2, 41)
(4, 0, 17)
Excluded node: 5, Total Weight: 174.0
Chosen incoming edge for node 5: (1, 5), chosen outgoing edge for node 5: (5, 1)
(1, 2, 41)
(1, 5, 30)
(2, 3, 21)
(2, 4, 35)
(4, 0, 17)
(5, 1, 30)
Note that because my test graph is symmetric it likes to make cycles with only two nodes. The weights of these 1-arborescences range from 161 to 178, so I tried to run the test which had been taking about 300 ms using the new approach… and the program was non-terminating. I created breakpoints in PyCharm after 200 iterations of the ascent method and found that the program was stuck in a loop where it alternated between two different minimum 1-arborescences. This was a long shot, but it did not work out so I reverted the code to always pick the same vertex for vertex 1.
Either way, the fact that I had almost entirely re-written this function without a change in output suggests that this function is not the source of the problem.
direction_of_ascent()
This was the one function which has pseudocode in the Held and Karp paper:
1. Set \(d\) equal to the zero \(n\)-vector.
2. Find a 1-tree \(T^k\) such that \(k \in K(\pi, d)\). [A method of executing Step 2 follows from the results of Section 6 (the greedy algorithm).]
3. If \(\sum_{i=1}^{i=n} d_i v_{i k} > 0\), STOP.
4. \(d_i \rightarrow d_i + v_{i k}\), for \(i = 2, 3, \dots, n\).
5. GO TO 2.
Using this as a guide, the implementation of this function was simple until I got to the terminating condition, which is a linear program discussed on page 1149 as
Thus, when failure to terminate is suspected, it is necessary to check whether no direction of ascent exists; by the Minkowski-Farkas lemma this is equivalent to the existence of nonnegative coefficients \(\alpha_k\) such that
\( \sum_{k \in K(\pi)} \alpha_k v_{i k} = 0, \quad i = 1, 2, \dots, n \)
This can be checked by linear programming.
While I was able to implement this without much issue, one very important constraint of the linear program was not mentioned here, but rather the page before during a proof. That constraint is
\[ \sum_{k \in K(\pi)} \alpha_k = 1 \]
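Putting the two constraints together, the feasibility check amounts to the following linear program in the \(\alpha_k\) (my own restatement of the conditions above):

\[ \begin{array}{l l} \text{find} & \alpha_k \geqslant 0, \quad k \in K(\pi) \\\ \text{s.t.} & \sum_{k \in K(\pi)} \alpha_k v_{i k} = 0, \quad i = 1, 2, \dots, n \\\ & \sum_{k \in K(\pi)} \alpha_k = 1 \end{array} \]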
Only after spending several hours trying to debug the original linear program did I notice the missing constraint. Once it was added, the linear program started to behave correctly, terminating the program when a tour is found.
find_epsilon()
This function requires a completely different implementation compared to the one described in the Held and Karp paper.
The basic idea in both my implementation for directed graphs and the description for undirected graphs is finding edges which are substitutes for each other, or an edge outside the 1-arborescence which can replace an edge in the arborescence and will result in a 1-arborescence.
The undirected version uses the idea of fundamental cycles in the tree to find the substitutes, and I tried to use this idea as well with the find_cycle() function in the NetworkX library.
I executed the first iteration of the ascent method by hand and noticed that what I computed for all of the possible values of \(\epsilon\) and what the program found did not match.
I had found several that it had missed and it found several that I missed.
For the example graph, I found that the following edge pairs are substitutes, where the first edge is not in the 1-arborescence and the second is the edge in the 1-arborescence which it can replace, using the below minimum 1-arborescence.
\[ \begin{array}{l} (0, 1) \rightarrow (2, 1) \text{ valid: } \epsilon = 56 \\\ (0, 2) \rightarrow (4, 2) \text{ valid: } \epsilon = 25 \\\ (0, 3) \rightarrow (2, 3) \text{ valid: } \epsilon = 52 \\\ (0, 5) \rightarrow (1, 5) \text{ valid: } \epsilon = \frac{30 - 52}{0 - 0} \text{, not valid} \\\ (1, 3) \rightarrow (2, 3) \text{ valid: } \epsilon = 15.5 \\\ (2, 5) \rightarrow (1, 5) \text{ valid: } \epsilon = 5.5 \\\ (3, 1) \rightarrow (2, 1) \text{ valid: } \epsilon = 5.5 \\\ (3, 5) \rightarrow (1, 5) \text{ valid: } \epsilon = \frac{30 - 46}{-1 + 1} \text{, not valid} \\\ (4, 1) \rightarrow (2, 1) \text{ valid: } \epsilon = \frac{41 - 90}{1 - 1} \text{, not valid} \\\ (4, 3) \rightarrow (2, 3) \text{ valid: } \epsilon = \frac{30 - 95}{1 - 1} \text{, not valid} \\\ (4, 5) \rightarrow (1, 5) \text{ valid: } \epsilon = -25.5 \text{, not valid (negative }\epsilon) \\\ (5, 3) \rightarrow (2, 3) \text{ valid: } \epsilon = 25 \\\ \end{array} \]
I missed the following substitutes which the program did find.
\[ \begin{array}{l} (1, 0) \rightarrow (4, 0) \text{ valid: } \epsilon = 80 \\\ (1, 4) \rightarrow (0, 4) \text{ valid: } \epsilon = 73 \\\ (2, 0) \rightarrow (4, 0) \text{ valid: } \epsilon = \frac{17 - 60}{1 - 1} \text{, not valid} \\\ (2, 4) \rightarrow (0, 4) \text{ valid: } \epsilon = -18 \text{, not valid (negative }\epsilon) \\\ (3, 0) \rightarrow (4, 0) \text{ valid: } \epsilon = 28 \\\ (3, 4) \rightarrow (0, 4) \text{ valid: } \epsilon = 78 \\\ (5, 0) \rightarrow (4, 0) \text{ valid: } \epsilon = 35 \\\ (5, 4) \rightarrow (0, 4) \text{ valid: } \epsilon = \frac{17 - 81}{0 - 0} \text{, not valid} \\\ \end{array} \]
Notice that some substitutions do not cross over if we move in the direction of ascent, which are the pairs which have a zero as the denominator. Additionally, \(\epsilon\) is a distance, and the concept of a negative distance does not make sense. Interpreting a negative distance as a positive distance in the opposite direction, if we needed to move in that direction, the direction of ascent vector would be pointing the other way.
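The screening logic can be sketched as a small helper. The name `epsilon_for_swap` is hypothetical, and it assumes \(\epsilon\) is the ratio of the cost difference to the difference of the direction-of-ascent components, matching the fractions above:

```python
def epsilon_for_swap(cost_in, cost_out, dir_in, dir_out):
    """Distance along the direction of ascent at which a substitute edge
    becomes as cheap as the edge it replaces, or None if the crossover
    never happens.

    cost_in/dir_in:   reduced cost and ascent-direction component of the
                      edge currently in the 1-arborescence.
    cost_out/dir_out: the same for the candidate edge outside it.
    """
    denominator = dir_in - dir_out
    if denominator == 0:
        # Both edges change cost at the same rate: they never cross over.
        return None
    epsilon = (cost_in - cost_out) / denominator
    if epsilon < 0:
        # A negative distance means the crossover lies behind us.
        return None
    return epsilon
```

For example, the pair \((1, 3) \rightarrow (2, 3)\) above works out to \((21 - 52)/(-1 - 1) = 15.5\) under this convention.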
The reason that my list did not match the program's was that find_cycle() did not always return the fundamental cycle containing the new edge. If I called find_cycle() on a vertex in the other cycle in the graph (in this case \({(0, 4), (4, 0)}\)), it would return that cycle rather than the true fundamental cycle.
This prompted me to think about what really determines whether edges in a 1-arborescence are substitutes for each other. In every case where a substitute was valid, both of those edges led to the same vertex. If they did not, then the degree constraint of the arborescence would be violated, because we did not replace the edge leading into a node with another edge leading into the same node. This is true regardless of whether the edges are part of the same fundamental cycle or not.
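That observation turns directly into a validity check. The sketch below (my own names, plain edge tuples rather than NetworkX graphs) swaps the candidate edge for the arborescence edge with the same head and then verifies the in-degree and connectivity conditions:

```python
from collections import defaultdict


def is_valid_substitute(arb, new_edge, nodes):
    """Check whether replacing the arborescence arc that shares a head
    with `new_edge` still leaves a 1-arborescence: every node keeps
    in-degree one and the edge set stays (weakly) connected."""
    head = new_edge[1]
    # The only arc that new_edge can replace is the one entering the same vertex.
    old_edge = next(e for e in arb if e[1] == head)
    candidate = (set(arb) - {old_edge}) | {new_edge}
    # In-degree must be exactly one everywhere for a 1-arborescence.
    in_deg = defaultdict(int)
    for _, v in candidate:
        in_deg[v] += 1
    if any(in_deg[v] != 1 for v in nodes):
        return False
    # Weak connectivity: BFS/DFS on the undirected version of the edges.
    adj = defaultdict(set)
    for u, v in candidate:
        adj[u].add(v)
        adj[v].add(u)
    seen = {next(iter(nodes))}
    stack = list(seen)
    while stack:
        u = stack.pop()
        for w in adj[u] - seen:
            seen.add(w)
            stack.append(w)
    return seen == set(nodes)
```

Because the swap replaces one arc into `head` with another, the edge count and in-degrees are preserved automatically; connectivity is the condition that can actually fail.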
Thus, find_epsilon() now takes every edge in the graph but not in the chosen 1-arborescence \(k \in K(\pi, d)\), finds the other edge in \(k\) pointing to the same vertex, swaps them, and then checks that the degree constraint is not violated, that the result has the correct number of edges, and that it is still connected. This is a more efficient method, and it found more valid substitutions as well, so I was hopeful that it would finally bring the returned solution down to the optimal one, perhaps because the old version had been missing the correct value of \(\epsilon\) on even just one of the iterations.
It did not.
At this point I have no real course forward, only two unappealing options. I have already verified find_epsilon() by executing the first iteration of the ascent method by hand, which took about 90 minutes. I could try to continue this process and hope that, while iteration 1 is executing correctly, I find some other bug in the code, but I doubt that I will ever reach the 9 iterations the program needs to find the faulty solution. I will be discussing the next steps with my GSoC mentors soon.
Held, M., Karp, R.M. The traveling-salesman problem and minimum spanning trees. Operations research, 1970-11-01, Vol.18 (6), p.1138-1162. https://www.jstor.org/stable/169411
We are coming into the end of the first week of coding for the Summer of Code, and I have implemented two new, but related, features in NetworkX. In this post, I will discuss how I implemented them, some of the challenges, and how I tested them. Those two new features are a spanning tree iterator and a spanning arborescence iterator.
The arborescence iterator is the feature that I will be using directly in my GSoC project, but I thought that it was a good idea to implement the spanning tree iterator first, as it would be easier and I could refer back to the research paper as needed. The partition schemes between the two are the same, so once I figured it out for the spanning trees, what I learned there would port directly into the arborescence iterator, where I could focus on modifying Edmonds' algorithm to respect the partition.
This was the first of the new features. It follows the algorithm detailed in a 2005 paper by Sörensen and Janssens titled An Algorithm to Generate all Spanning Trees of a Graph in Order of Increasing Cost, which can be found here [2].
Now, I needed to tweak the implementation of the algorithm because I wanted to implement a python iterator, so somebody can write
for tree in nx.SpanningTreeIterator(G):
    pass
and that loop would return spanning trees starting with the ones of minimum cost and climbing to the ones of maximum cost.
In order to implement this feature, my first step was to ensure that once I knew what the edge partition of the graph was, I could find a minimum spanning tree which respected the partition. As a brief reminder, the edge partition creates two disjoint sets of edges, of which one must appear in the resulting spanning tree and one cannot appear in it. Edges which are neither included in nor excluded from the spanning tree are called open.
The easiest algorithm for this is Kruskal's algorithm. The included edges are all added to the spanning tree first, and then the algorithm can join the components created by the included edges using the open edges.
This was easy to implement in NetworkX. Kruskal's algorithm in NetworkX is a generator which returns the edges in the minimum spanning tree one at a time using a sorted list of edges. All that I had to do was change the sorting process so that the included edges were always at the front of that list; then the algorithm would always select them for the spanning tree, regardless of weight.
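A minimal stand-alone sketch of that idea (a plain union-find Kruskal rather than the NetworkX generator; all names here are mine) sorts included edges ahead of open ones and drops excluded ones entirely:

```python
def kruskal_with_partition(nodes, edges, included, excluded):
    """Minimum spanning tree that must use the `included` edges and
    must not use the `excluded` ones.  Edges are (weight, u, v) triples;
    the trick from the post: sort included edges ahead of everything."""
    parent = {v: v for v in nodes}

    def find(v):
        # Union-find with path halving.
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v

    # Included edges first, then open edges by weight; excluded dropped.
    order = sorted(
        (e for e in edges if (e[1], e[2]) not in excluded),
        key=lambda e: ((e[1], e[2]) not in included, e[0]),
    )
    tree = []
    for w, u, v in order:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            tree.append((u, v))
    return tree
```

Since `False` sorts before `True`, the key tuple guarantees every included edge is considered before any open edge, regardless of weight.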
Additionally, since the general spanning tree of a graph is a partitioned tree where the partition has no included or excluded edges, I was able to convert the normal Kruskal implementation into a wrapper for my partition-respecting one in order to reduce redundant code.
As for the partitioning process itself, that proved to be a bit more tricky, mostly stemming from my own limited Python experience (I have only been working with Python since the start of the calendar year). In order to implement the partitioning scheme I needed an ordered data structure, and I chose the PriorityQueue class. This was convenient, but for elements whose minimum spanning trees had the same weight, it tried to compare the dictionaries holding the edge data, which is not a supported operation. Thus, I implemented a dataclass in which only the weight of the spanning tree is comparable. This means that for ties in spanning tree weight, the oldest partition with that weight is considered first.
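A minimal version of that dataclass (the names are my own, not the actual NetworkX ones) looks like this; `field(compare=False)` keeps the partition payload out of the generated comparison methods:

```python
from dataclasses import dataclass, field
from queue import PriorityQueue
from typing import Any


@dataclass(order=True)
class PartitionEntry:
    # Only the spanning-tree weight participates in ordering.
    mst_weight: float
    # The partition data (dicts of edges) is never compared.
    partition: Any = field(compare=False)


queue: PriorityQueue = PriorityQueue()
queue.put(PartitionEntry(19, {"included": [(0, 1)], "excluded": []}))
queue.put(PartitionEntry(17, {"included": [], "excluded": [(2, 3)]}))
```

Entries with equal weights no longer raise a TypeError, because the dictionaries are never compared.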
Once the implementation details were ironed out, I moved on to testing. At the time of this writing, I have tested the SpanningTreeIterator on the sample graph in the Sörensen and Janssens paper.
That graph is
It has eight spanning trees, ranging in weight from 17 to 23 which are all shown below.
Since this graph only has a few spanning trees, it was easy to explicitly test that each graph returned from the iterator was the next one in the sequence. The iterator also works backwards, so calling
for tree in nx.SpanningTreeIterator(G, minimum=False):
    pass
starts with the maximum spanning tree and works down to the minimum spanning tree.
The code for the spanning tree iterator can be found here starting around line 761.
The arborescence iterator is what I actually need for my GSoC project, and as expected was more complicated to implement.
In my original post titled Finding All Minimum Arborescences, I discussed cases that Edmonds' algorithm [1] would need to handle and proposed a change to the desired_edge method. These changes were easy to make, but contrary to what I originally thought, they were not the extent of the changes that needed to be made. The original graph from Edmonds' 1967 paper is below.
In my first test, which was limited to the minimum spanning arborescence of a random partition I created, the results were close. Below, the blue edges are included and the red one is excluded.
The minimum spanning arborescence initially is shown below.
While the \((3, 0)\) edge is properly excluded and the \((2, 3)\) edge is included, the \((6, 2)\) edge is not present in the arborescence (shown as a dashed edge). Tracking this problem down was a hassle. The way that Edmonds' algorithm works, a cycle, which would have been present if the \((6, 2)\) edge was included, is collapsed into a single vertex as the algorithm moves to the next iteration. Once that cycle is collapsed into a vertex, the algorithm still has to choose how to access that vertex, and the choice is based on the best edge as before (this is step I1 in [1]). Then, when the algorithm expands the cycle back out, it will remove one edge of the cycle, which in this case would be \((6, 2)\), shown in red in the next image. Represented visually, the cycle with incoming edges would look like
And that would be collapsed into a new vertex, \(N\) from which the incoming edge with weight 12 would be selected.
In this example we want to forbid the algorithm from picking the edge with weight 12, so that when the cycle is reconstructed the included edge \((6, 2)\) is still present. Once we make one of the incoming edges an included edge, we know from the definition of an arborescence that we cannot get to that vertex through any other edge. They are all effectively excluded, so once we find an included edge directed towards a vertex, we can mark all of the other incoming edges as excluded.
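As a tiny sketch (the function name and string labels are hypothetical), marking the other incoming arcs of a vertex once one of them is included could look like:

```python
def close_other_in_arcs(in_arcs, included_arc):
    """Given every arc entering a vertex and the one that is included,
    mark all of the others excluded: an arborescence reaches each
    vertex through exactly one arc."""
    return {
        arc: ("included" if arc == included_arc else "excluded")
        for arc in in_arcs
    }
```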
Returning to the example, the collapsed vertex \(N\) would have the edge of weight 12 excluded and would pick the edge of weight 13.
At this point the iterator would find 236 arborescences with costs ranging from 96 to 125. I thought that I was very close to being finished, since I knew that the cost of the minimum spanning arborescence was 96, until I checked the weight of the maximum spanning arborescence: 131. This means that I was removing partitions which contained a valid arborescence before they were added to the priority queue.
My check_partition method within the ArborescenceIterator ran several checks on each new partition and rejected it by returning False if any of them failed. Rather than try to debug what I thought was a good method, I decided to change my process.
I moved the last of those checks into the write_partition method and stopped using the check_partition method. If an edge partition does not have a spanning arborescence, the partition_spanning_arborescence function will return None, and I discard the partition. This approach is more computationally intensive, but it increased the number of returned spanning arborescences from 236 to 680, and the range expanded to the proper 96 to 131.
But how do I know that it isn’t skipping arborescences within that range? Since 680 arborescences is too many to explicitly check, I decided to write another test case. This one would check that the number of arborescences was correct and that the sequence never decreases.
In order to check the number of arborescences, I decided to take a brute force approach. There are
\[ \binom{18}{8} = 43,758 \]
possible combinations of edges which could be arborescences. That's a lot of combinations, more than I wanted to check by hand, so I wrote a short Python script.
from itertools import combinations

import networkx as nx

edgelist = [
    (0, 2),
    (0, 4),
    (1, 0),
    (1, 5),
    (2, 1),
    (2, 3),
    (2, 5),
    (3, 0),
    (3, 4),
    (3, 6),
    (4, 7),
    (5, 6),
    (5, 8),
    (6, 2),
    (6, 8),
    (7, 3),
    (7, 6),
    (8, 7),
]

combo_count = 0
arbor_count = 0
for combo in combinations(edgelist, 8):
    combo_count += 1
    combo_test = nx.DiGraph()
    combo_test.add_edges_from(combo)
    if nx.is_arborescence(combo_test):
        arbor_count += 1

print(
    f"There are {combo_count} possible combinations of eight edges which "
    f"could be an arborescence."
)
print(f"Of those {combo_count} combinations, {arbor_count} are arborescences.")
The output of this script is
There are 43758 possible combinations of eight edges which could be an arborescence.
Of those 43758 combinations, 680 are arborescences.
So now I know how many arborescences were in the graph, and it matched the number returned from the iterator. Thus, I believe that the iterator is working well.
The iterator code is here and starts around line 783. It can be used in the same way as the spanning tree iterator.
Attached is a sample output from the iterator detailing all 680 arborescences of the test graph. Since Jekyll will not let me put up the txt file, I had to convert it into a PDF, which runs to 127 pages, to show the 6800 lines of output from displaying all of the arborescences.
[1] J. Edmonds, Optimum Branchings, Journal of Research of the National Bureau of Standards, 1967, Vol. 71B, p.233-240, https://archive.org/details/jresv71Bn4p233
[2] G.K. Janssens, K. Sörensen, An algorithm to generate all spanning trees in order of increasing cost, Pesquisa Operacional, 2005-08, Vol. 25 (2), p. 219-229, https://www.scielo.br/j/pope/a/XHswBwRwJyrfL88dmMwYNWp/?lang=en
There is only one thing that I need to figure out before the first coding period for GSoC starts on Monday: how to find all of the minimum arborescences of a graph. This is the set \(K(\pi)\) in the Held and Karp paper from 1970 which can be refined down to \(K(\pi, d)\) or \(K_{X, Y}(\pi)\) as needed. For more information as to why I need to do this, please see my last post here.
This is a place where my contributions to NetworkX to implement the Asadpour algorithm [1] for the directed traveling salesman problem will be useful to the rest of the NetworkX community (I hope). The research paper that I am going to template this off of is this 2005 paper by Sörensen and Janssens titled An Algorithm to Generate all Spanning Trees of a Graph in Order of Increasing Cost [4].
The basic idea here is to implement their algorithm and then generate spanning trees until we find the first one with a cost greater than that of the first one generated, which we know is a minimum, so that we have found all of the minimum spanning trees. I know what you guys are saying: "Matt, this paper discusses spanning trees, not spanning arborescences, how is this helpful?". Well, the heart of this algorithm is to partition the edges into excluded edges, which cannot appear in the tree, included edges, which must appear in the tree, and open edges, which can be but are not required to be in the tree. Once we have a partition, we need to be able to find a minimum spanning tree or minimum spanning arborescence that respects the partitioned edges.
In NetworkX, the minimum spanning arborescences are generated using Chu-Liu/Edmonds’ Algorithm developed by Yoeng-Jin Chu and Tseng-Hong Liu in 1965 and independently by Jack Edmonds in 1967. I believe that Edmonds’ Algorithm [2] can be modified to require an arc to be either included or excluded from the resulting spanning arborescence, thus allowing me to implement Sörensen and Janssens’ algorithm for directed graphs.
First, let’s explore whether the partition scheme discussed in the Sörensen and Janssens paper [4] will work for a directed graph. The critical ideas for creating the partitions are given on pages 221 and 222 and are as follows:
Given an MST of a partition, this partition can be split into a set of resulting partitions in such a way that the following statements hold:
- the intersection of any two resulting partitions is the empty set,
- the MST of the original partition is not an element of any of the resulting partitions,
- the union of the resulting partitions is equal to the original partition, minus the MST of the original partition.
In order to achieve these conditions, they define the generation of the partitions using this definition for a minimum spanning tree
\[ s(P) = \{(i_1, j_1), \dots, (i_r, j_r), (t_1, v_1), \dots, (t_{n-r-1}, v_{n-r-1})\} \]
where the \((i, j)\) edges are the included edges of the original partition and the \((t, v)\) edges are from the open edges of the original partition. Now, to create the next set of partitions, take each of the \((t, v)\) edges sequentially and introduce them one at a time, making that edge an excluded edge in the first partition it appears in and an included edge in all subsequent partitions. This will produce something to the effect of
\[ \begin{array}{l} P_1 = {(i_1, j_1), \dots, (i_r, j_r), (\overline{m_1, p_1}), \dots, (\overline{m_l, p_l}), (\overline{t_1, v_1})} \\\ P_2 = {(i_1, j_1), \dots, (i_r, j_r), (t_1, v_1), (\overline{m_1, p_1}), \dots, (\overline{m_l, p_l}), (\overline{t_2, v_2})} \\\ P_3 = {(i_1, j_1), \dots, (i_r, j_r), (t_1, v_1), (t_2, v_2), (\overline{m_1, p_1}), \dots, (\overline{m_l, p_l}), (\overline{t_3, v_3})} \\\ \vdots \\\ \begin{multline*} P_{n-r-1} = {(i_1, j_1), \dots, (i_r, j_r), (t_1, v_1), \dots, (t_{n-r-2}, v_{n-r-2}), (\overline{m_1, p_1}), \dots, (\overline{m_l, p_l}), \\\ (\overline{t_{n-r-1}, v_{n-r-1}})} \end{multline*} \\\ \end{array} \]
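The same scheme is easy to express in code. This sketch (my own function name; partitions represented as (included, excluded) pairs of edge sets) walks the open edges of the minimum spanning tree, excluding the current one and including all earlier ones:

```python
def split_partition(included, excluded, tree_open_edges):
    """Generate child partitions from a minimum spanning tree, following
    Sörensen and Janssens: take the tree's open edges in turn, excluding
    the current one and including all of the earlier ones."""
    children = []
    for i, edge in enumerate(tree_open_edges):
        children.append(
            (
                included | set(tree_open_edges[:i]),  # earlier open edges become included
                excluded | {edge},                    # current open edge becomes excluded
            )
        )
    return children
```

Each child excludes a different edge of the tree, which is what keeps the resulting partitions pairwise disjoint and keeps the tree itself out of all of them.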
Now, if we extend this to a directed graph, our included and excluded edges become included and excluded arcs, but the definition of the spanning arborescence of a partition does not change. Let \(s_a(P)\) be the minimum spanning arborescence of a partition \(P\). Then
\[ s_a(P) = \{(i_1, j_1), \dots, (i_r, j_r), (t_1, v_1), \dots, (t_{n-r-1}, v_{n-r-1})\} \]
\(s_a(P)\) is still constructed of all of the included arcs of the partition and a subset of the open arcs of that partition. If we partition in the same manner as the Sörensen and Janssens paper [4], then no spanning tree can both include and exclude a given edge, and such a conflict exists for every pair of partitions, so the intersection of any two resulting partitions is empty.
Clearly the original arborescence, which includes all of the arcs \((t_1, v_1), \dots, (t_{n-r-1}, v_{n-r-1})\), cannot be an element of any of the resulting partitions.
Finally, there is the claim that the union of the resulting partitions is the original partition minus the original minimum spanning tree. Being honest here, this claim took a while for me to understand. In fact, I had a whole paragraph talking about how this claim doesn’t make sense before all of a sudden I realized that it does. The important thing to remember here is that the union of all of the partitions isn’t the union of the sets of included and excluded edges (which is where I went wrong the first time), it is a subset of spanning trees. The original partition contains many spanning trees, one or more of which are minimum, but each tree in the partition is a unique subset of the edges of the original graph. Now, because each of the resulting partitions cannot include one of the edges of the original partition’s minimum spanning tree we know that the original minimum spanning tree is not an element of the union of the resulting partitions. However, because every other spanning tree in the original partition which was not the selected minimum one is different by at least one edge it is a member of at least one of the resulting partitions, specifically the one where that one edge of the selected minimum spanning tree which it does not contain is the excluded edge.
So now we know that this same partition scheme which works for undirected graphs will work for directed ones. We need to modify Edmonds’ algorithm to mandate that certain arcs be included and others excluded. To start, a review of this algorithm is in order. The original description of the algorithm is given on pages 234 and 235 of Jack Edmonds’ 1967 paper Optimum Branchings [2] and roughly speaking it has three major steps.
Now that we are familiar with the minimum arborescence algorithm, we can discuss modifying it to force it to include certain edges or reject others. The changes will be primarily located in step 1. Under the normal operation of the algorithm, the consideration which happens at each vertex might look like this.
Where the bolded arrow is chosen by the algorithm as it is the incoming arc with minimum weight. Now, if we were required to include a different edge, say the weight 6 arc, we would want this behavior even though it is strictly speaking not optimal. In a similar case, if the arc of weight 2 was excluded we would also want to pick the arc of weight 6. Below the excluded arc is a dashed line.
But realistically, these are routine cases that would not be difficult to implement. A more interesting case would be if all of the arcs were excluded or if more than one are included.
Under this case, there is no spanning arborescence for the partition because the graph is not connected. The Sörensen and Janssens paper characterize these as empty partitions and they are ignored.
In this case, things start to get a bit tricky. With two (or more) included arcs leading to this vertex, it is by definition not an arborescence, as according to Edmonds on page 233
A branching is a forest whose edges are directed so that each is directed toward a different node. An arborescence is a connected branching.
At first I thought that because this case could result in the creation of a cycle it might be valid, but I realize now that in step 3 of Edmonds' algorithm one of those arcs would be removed anyway. Thus, any partition with multiple included arcs leading to a single vertex is empty by definition. While there are ways in which the algorithm could handle the inclusion of multiple arcs, one (or more) of them, by the definition of an arborescence, will be deleted by the end of the algorithm.
I propose that these partitions are screened out before we hand off to Edmonds’ algorithm to find the arborescences.
As such, Edmonds’ algorithm will need to be modified for the cases of at most one included edge per vertex and any number of excluded edges per vertex.
The critical part of altering Edmonds' algorithm is contained within the desired_edge function in the NetworkX implementation, starting on line 391 in algorithms.tree.branchings. The whole function is as follows.
def desired_edge(v):
    """
    Find the edge directed toward v with maximal weight.
    """
    edge = None
    weight = -INF
    for u, _, key, data in G.in_edges(v, data=True, keys=True):
        new_weight = data[attr]
        if new_weight > weight:
            weight = new_weight
            edge = (u, v, key, new_weight)
    return edge, weight
The function would be changed to automatically return an included arc and to skip considering any excluded arcs. Because this is an inner function, we can access parameters passed to the parent function, such as something along the lines of partition=None, where the value of partition is the edge attribute detailing true if the arc is included and false if it is excluded. Open edges would not need this attribute, or could use None. The creation of an enum is also possible, which would unify the language; I will talk to my GSoC mentors about how it would fit into the NetworkX ecosystem.
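Such an enum might look like the following sketch (the actual name and values would be settled with my mentors):

```python
from enum import Enum


class Partition(Enum):
    """State of an edge with respect to an edge partition."""
    OPEN = 0
    INCLUDED = 1
    EXCLUDED = 2
```

Using enum members instead of bare booleans makes the three states explicit and avoids confusing an open edge (no opinion) with an excluded one (forbidden).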
A revised version of desired_edge using the true and false scheme would then look like this:
def desired_edge(v):
    """
    Find the edge directed toward v with maximal weight, preferring
    included arcs and skipping excluded ones.
    """
    edge = None
    weight = -INF
    for u, _, key, data in G.in_edges(v, data=True, keys=True):
        new_weight = data[attr]
        # An included arc must be chosen, regardless of its weight.
        if data.get(partition) is True:
            return (u, v, key, new_weight), new_weight
        # Skip excluded arcs; open arcs carry None or no attribute at all.
        if new_weight > weight and data.get(partition) is not False:
            weight = new_weight
            edge = (u, v, key, new_weight)
    return edge, weight
And a version using the enum might look like
def desired_edge(v):
    """
    Find the edge directed toward v with maximal weight, preferring
    included arcs and skipping excluded ones.
    """
    edge = None
    weight = -INF
    for u, _, key, data in G.in_edges(v, data=True, keys=True):
        new_weight = data[attr]
        # An included arc must be chosen, regardless of its weight.
        if data.get(partition) is Partition.INCLUDED:
            return (u, v, key, new_weight), new_weight
        # Skip excluded arcs; open arcs are Partition.OPEN or unlabeled.
        if new_weight > weight and data.get(partition) is not Partition.EXCLUDED:
            weight = new_weight
            edge = (u, v, key, new_weight)
    return edge, weight
Once Edmonds’ algorithm has been modified to be able to use partitions, the pseudocode from the Sörensen and Janssens paper would be applicable.
Input:  Graph G(V, E) and weight function w
Output: Output_File (all spanning trees of G, sorted in order of increasing cost)

List = {A}
Calculate_MST(A)
while MST ≠ ∅ do
    Get partition Ps in List that contains the smallest spanning tree
    Write MST of Ps to Output_File
    Remove Ps from List
    Partition(Ps)
And the corresponding Partition function is:
P1 = P2 = P
for each edge i in P do
    if i not included in P and not excluded from P then
        make i excluded from P1
        make i included in P2
        Calculate_MST(P1)
        if Connected(P1) then
            add P1 to List
        P1 = P2
I would need to change the format of the first code block, as I would like it to be a Python iterator so that a for loop would be able to iterate through all of the spanning arborescences and then stop once the cost increases, in order to limit it to only the minimum spanning arborescences.
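That stopping rule is nearly a one-liner with itertools.takewhile. The sketch below assumes an iterator that yields arborescences in order of increasing cost and a weight function for them (both hypothetical stand-ins here):

```python
from itertools import takewhile


def minimum_arborescences(arborescence_iter, weight):
    """Yield only the minimum-weight arborescences from an iterator
    that returns them in order of increasing cost; stop as soon as the
    cost rises above that of the first (minimum) one."""
    iterator = iter(arborescence_iter)
    first = next(iterator)
    min_weight = weight(first)
    yield first
    # Everything after the first weight increase can be ignored.
    yield from takewhile(lambda a: weight(a) == min_weight, iterator)
```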
[1] A. Asadpour, M. X. Goemans, A. Madry, S. Oveis Gharan, and A. Saberi, An o(log n / log log n)-approximation algorithm for the asymmetric traveling salesman problem, Operations Research, 65 (2017), p. 1043-1061, https://homes.cs.washington.edu/~shayan/atsp.pdf.
[2] J. Edmonds, Optimum Branchings, Journal of Research of the National Bureau of Standards, 1967, Vol. 71B, p.233-240, https://archive.org/details/jresv71Bn4p233
[3] M. Held, R.M. Karp, The traveling-salesman problem and minimum spanning trees, Operations research, 1970-11-01, Vol.18 (6), p.1138-1162, https://www.jstor.org/stable/169411
[4] G.K. Janssens, K. Sörensen, An algorithm to generate all spanning trees in order of increasing cost, Pesquisa Operacional, 2005-08, Vol. 25 (2), p. 219-229, https://www.scielo.br/j/pope/a/XHswBwRwJyrfL88dmMwYNWp/?lang=en
After talking with my GSoC mentors about what we all believe to be the most difficult part of the Asadpour algorithm, the Held-Karp relaxation, we came to several conclusions:
Thus, alternative methods for solving the Held-Karp relaxation needed to be investigated. To this end, we turned to the original 1970 paper by Held and Karp, The Traveling Salesman Problem and Minimum Spanning Trees to see how they proposed solving the relaxation (Note that this paper was published before the ellipsoid algorithm was applied to linear programming in 1979). The Held and Karp paper discusses three methods for solving the relaxation:
But before we explore the methods that Held and Karp discuss, we need to ensure that these methods still apply to solving the Held-Karp relaxation within the context of the Asadpour paper. The definition of the Held-Karp relaxation that I have been using on this blog comes from the Asadpour paper, section 3 and is listed below.
\[ \begin{array}{c l l} \text{min} & \sum_{a} c(a)x_a \\\ \text{s.t.} & x(\delta^+(U)) \geqslant 1 & \forall\ U \subset V \text{ and } U \not= \emptyset \\\ & x(\delta^+(v)) = x(\delta^-(v)) = 1 & \forall\ v \in V \\\ & x_a \geqslant 0 & \forall\ a \end{array} \]
The closest match to this program in the Held Karp paper is their linear program 3, which is a linear programming representation of the entire traveling salesman problem, not solely the relaxed version. Note that Held and Karp were dealing with the symmetric TSP (STSP) while Asadpour is addressing the asymmetric or directed TSP (ATSP).
\[ \begin{array}{c l l} \text{min} & \sum_{1 \leq i < j \leq n} c_{i j}x_{i j} \\\ \text{s.t.} & \sum_{j > i} x_{i j} + \sum_{j < i} x_{j i} = 2 & (i = 1, 2, \dots, n) \\\ & \sum_{i \in S,\ j \in S,\ i < j} x_{i j} \leq |S| - 1 & \text{for any proper subset } S \subset \{2, 3, \dots, n\} \\\ & 0 \leq x_{i j} \leq 1 & (1 \leq i < j \leq n) \\\ & x_{i j} \text{ integer} \\\ \end{array} \]
The last two constraints of the second linear program are correctly bounded and fit within the scope of the original problem, while the first two constraints do most of the work in finding a TSP tour. Additionally, relaxing the last two constraints to \(x_{i j} \geq 0\) gives the Held-Karp relaxation. The first constraint, \(\sum_{j > i} x_{i j} + \sum_{j < i} x_{j i} = 2\), ensures that for every vertex in the resulting tour there is one edge to get there and one edge to leave by. This matches the second constraint in the Asadpour ATSP relaxation. The second constraint in the Held and Karp formulation is another form of the subtour elimination constraint seen in the Asadpour linear program.
Held and Karp also state that
In this section, we show that minimizing the gap \(f(\pi)\) is equivalent to solving this program without the integer constraints.
on page 1141, so it would appear that solving one of the equivalent programs that Held and Karp formulate should work here.
The Column Generation technique seeks to solve linear program 2 from the Held and Karp paper, stated as
\[ \begin{array}{c l} \text{min} & \sum_{k} c_ky_k \\\ \text{s.t.} & y_k \geq 0 \\\ & \sum_k y_k = 1 \\\ & \sum_{i = 2}^{n - 1} (-v_{i k})y_k = 0 \\\ \end{array} \]
Where \(v_{i k}\) is the degree of vertex \(i\) in 1-Tree \(k\) minus two, or \(v_{i k} = d_{i k} - 2\) and each variable \(y_k\) corresponds to a 1-Tree \(T^k\). The associated cost \(c_k\) for each tree is the weight of \(T^k\).
The rest of this method uses a simplex algorithm to solve the linear program. We only focus on the edges which are in each of the 1-Trees, giving each column the form
\[ \begin{bmatrix} 1 & -v_{2k} & -v_{3k} & \dots & -v_{n-1,k} \end{bmatrix}^T \]
and the column which enters the basis corresponds to the 1-Tree for which \(c_k + \theta + \sum_{j=2}^{n-1} \pi_jv_{j k}\) is a minimum, where \(\theta\) and \(\pi_j\) come from the vector of 'shadow prices' given by \((\theta, \pi_2, \pi_3, \dots, \pi_{n-1})\). Now the basis is \((n - 1) \times (n - 1)\) and we can find the 1-Tree to add to the basis using a minimum 1-Tree algorithm which Held and Karp say can be done in \(O(n^2)\) steps.
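To make the entering-column rule concrete, here is a small sketch that prices a candidate 1-Tree against the shadow prices (the function name and the dict-based representation of \(\pi\) are my own, not from the paper):

```python
import networkx as nx

def reduced_cost(T, theta, pi, weight="weight"):
    """Price a 1-Tree T against the shadow prices (theta, pi):
    c_k + theta + sum_j pi_j * v_{jk}, where v_{jk} = deg_T(j) - 2."""
    c_k = T.size(weight=weight)  # total weight of the 1-Tree
    return c_k + theta + sum(p * (T.degree(j) - 2) for j, p in pi.items())
```

The 1-Tree whose reduced cost is smallest is the one that enters the basis on the next simplex pivot.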
I am already familiar with the simplex method, so I will not detail its implementation here.
This technique is slow to converge. Held and Karp programmed it on an IBM/360 and were able to solve problems consistently for up to \(n = 12\). Now, on a modern computer the clock rate is somewhere between 210 and 101,500 times faster (depending on the model of IBM/360 used), so we expect better performance, but cannot say at this time how much of an improvement.
They also talk about a heuristic procedure in which a vertex is eliminated from the program whenever the choice of its adjacent vertices was 'evident'. Technical details for the heuristic were essentially non-existent, but
The procedure showed promise on examples up to \(n = 48\), but was not explored systematically
This paper from Held and Karp is about minimizing \(f(\pi)\) where \(f(\pi)\) is the gap between the permuted 1-Trees and a TSP tour. One way to do this is to maximize the dual of \(f(\pi)\) which is written as \(\text{max}_{\pi}\ w(\pi)\) where
\[ w(\pi) = \text{min}_k\ (c_k + \sum_{i=1}^{i=n} \pi_iv_{i k}) \]
This method uses the set of indices of 1-Trees that are of minimum weight with respect to the weights \(\overline{c}_{i j} = c_{i j} + \pi_i + \pi_j\).
\[ K(\pi) = \{ k\ |\ w(\pi) = c_k + \sum_{i=1}^{i=n} \pi_i v_{i k} \} \]
If \(\pi\) is not a maximum point of \(w\), then there will be a vector \(d\) called the direction of ascent at \(\pi\). This is theorem 3 and a proof is given on page 1148. Let the functions \(\Delta(\pi, d)\) and \(K(\pi, d)\) be defined as below.
\[ \Delta(\pi, d) = \text{min}_{k \in K(\pi)}\ \sum_{i=1}^{i=n} d_iv_{i k} \\\ K(\pi, d) = \{ k\ |\ k \in K(\pi) \text{ and } \sum_{i=1}^{i=n} d_iv_{i k} = \Delta(\pi, d) \} \]
Now for a sufficiently small \(\epsilon\), \(K(\pi + \epsilon d) = K(\pi, d)\) and \(w(\pi + \epsilon d) = w(\pi) + \epsilon \Delta(\pi, d)\), or the value of \(w(\pi)\) increases and the growth rate of the minimum 1-Trees is at its smallest so we maintain the low weight 1-Trees and progress farther towards the optimal value. Finally, let \(\epsilon(\pi, d)\) be the following quantity
\[ \epsilon(\pi, d) = \text{max}\ \{ \epsilon\ |\text{ for } \epsilon' < \epsilon,\ K(\pi + \epsilon'd) = K(\pi, d) \} \]
So in other words, \(\epsilon(\pi, d)\) is the maximum distance in the direction of \(d\) that we can travel to maintain the desired behavior.
If we can find \(d\) and \(\epsilon\) then we can set \(\pi = \pi + \epsilon d\) and move to the next iteration of the ascent method. Held and Karp did give a protocol for finding \(d\) on page 1149.
There are two things which must be refined about this procedure in order to make it implementable in Python.
Held and Karp have provided guidance on both of these points.
In section 6 on matroids, we are told to use a method developed by Dijkstra in A Note on Two Problems in Connexion with Graphs, but in this particular case that is not the most helpful.
I have found this document, but there is a function called minimum_spanning_arborescence already within NetworkX which we can use to create a minimum 1-Arborescence.
That process would be to find a minimum spanning arborescence on only the vertices in \({2, 3, \dots, n}\) and then connect vertex 1 to create the cycle.
In order to connect vertex 1, we would choose the outgoing arc with the smallest cost and the incoming arc with the smallest cost.
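That process could be sketched with the existing NetworkX routine as follows (the helper name and the exact attribute handling are mine):

```python
import networkx as nx

def minimum_1_arborescence(G, root, weight="weight"):
    """Build a minimum 1-arborescence: a minimum spanning arborescence on
    V - {root}, plus the cheapest arcs leaving and entering the root."""
    H = G.copy()
    H.remove_node(root)
    arb = nx.minimum_spanning_arborescence(H, attr=weight)
    # Reconnect the root with its cheapest outgoing and incoming arcs
    u, v, w = min(G.out_edges(root, data=weight, default=1), key=lambda e: e[2])
    arb.add_edge(u, v, **{weight: w})
    u, v, w = min(G.in_edges(root, data=weight, default=1), key=lambda e: e[2])
    arb.add_edge(u, v, **{weight: w})
    return arb
```

The result has exactly \(n\) arcs: \(n - 2\) from the arborescence on the remaining vertices, plus the two arcs through the root.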
Finally, at the maximum value of \(w(\pi)\), there is no direction of ascent and the procedure outlined by Held and Karp will not terminate. Their article states on page 1149 that
Thus, when failure to terminate is suspected, it is necessary to check whether no direction of ascent exists; by the Minkowski-Farkas lemma this is equivalent to the existence of nonnegative coefficients \(\alpha_k\) such that
\( \sum_{k \in K(\pi)} \alpha_kv_{i k} = 0, \quad i = 1, 2, \dots, n \)
This can be checked by linear programming.
While it is nice that they gave that summation, the rest of the linear program would have been useful too. The entire linear program would be written as follows
\[ \begin{array}{c l l} \text{max} & \sum_k \alpha_k \\\ \text{s.t.} & \sum_{k \in K(\pi)} \alpha_k v_{i k} = 0 & \forall\ i \in \{1, 2, \dots, n\} \\\ & \alpha_k \geq 0 & \forall\ k \\\ \end{array} \]
This linear program is not in standard form, but it is not difficult to convert it. First, change the maximization to a minimization by minimizing the negative.
\[ \begin{array}{c l l} \text{min} & \sum_k -\alpha_k \\\ \text{s.t.} & \sum_{k \in K(\pi)} \alpha_k v_{i k} = 0 & \forall\ i \in \{1, 2, \dots, n\} \\\ & \alpha_k \geq 0 & \forall\ k \\\ \end{array} \]
While the constraint is not intuitively in standard form, a closer look reveals that it is. Each column in the matrix form will be for one entry of \(\alpha_k\), and each row will represent a different value of \(i\), or a different vertex. The one constraint is actually a collection of very similar ones, which could be written as
\[ \begin{array}{c l} \text{min} & \sum_k -\alpha_k \\\ \text{s.t.} & \sum_{k \in K(\pi)} \alpha_k v_{1 k} = 0 \\\ & \sum_{k \in K(\pi)} \alpha_k v_{2 k} = 0 \\\ & \vdots \\\ & \sum_{k \in K(\pi)} \alpha_k v_{n k} = 0 \\\ & \alpha_k \geq 0 & \forall\ k \\\ \end{array} \]
Because all of the summations must equal zero, no slack or surplus variables are required, so the constraint matrix for this program is \(n \times k\).
The \(n\) obviously has a linear growth rate, but I'm not sure how big to expect \(k\) to become.
\(k\) ranges over the set of minimum 1-Trees, so I believe that it will be manageable.
This linear program can be solved using the built-in linprog function in the SciPy library.
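A sketch of that check with scipy.optimize.linprog; note that the \((0, 1)\) bounds on each \(\alpha_k\) are my addition to keep the program bounded (any feasible nonzero \(\alpha\) can be rescaled), so only the constraint matrix itself comes from the paper:

```python
import numpy as np
from scipy.optimize import linprog

def no_ascent_direction_exists(V):
    """V is the n x k matrix with V[i, k] = v_{ik} over the minimum 1-Trees.
    By the Minkowski-Farkas lemma, no direction of ascent exists iff some
    nonnegative, nonzero alpha satisfies V @ alpha = 0."""
    n, k = V.shape
    res = linprog(
        c=-np.ones(k),            # maximize sum(alpha) as a min of its negative
        A_eq=V, b_eq=np.zeros(n),
        bounds=[(0, 1)] * k,      # assumed bounds so the LP stays bounded
    )
    return res.status == 0 and res.fun < -1e-9  # some alpha_k > 0 was found
```

If the function returns True, the ascent method has reached the maximum of \(w(\pi)\) and should terminate.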
As an implementation note, to start with I would probably check the terminating condition every iteration, but eventually we can find a number of iterations it has to execute before it starts to check for the terminating condition to save computational power.
One possible difficulty with the terminating condition is that we need to run the linear program with data from every minimum 1-Tree or 1-Arborescence, which means that we need to be able to generate all of the minimum 1-Trees. There does not seem to be an easy way to do this within NetworkX at the moment. Looking through the tree algorithms here, they seem exclusively focused on finding one minimum branching of the required type, not all of those branchings.
Now we have to find \(\epsilon\). Theorem 4 on page 1150 states that
Let \(k\) be any element of \(K(\pi, d)\), where \(d\) is a direction of ascent at \(\pi\). Then \(\epsilon(\pi, d) = \text{min}\ \{ \epsilon\ |\text{ for some pair } (e, e'),\ e' \text{ is a substitute for } e \text{ in } T^k \text{ and } e \text{ and } e' \text{ cross over at } \epsilon \}\)
The first step then is to determine if \(e\) and \(e'\) are substitutes. \(e'\) is a substitute for \(e\) if, for a 1-Tree \(T^k\), \((T^k - \{e\}) \cup \{e'\}\) is also a 1-Tree. The edges \(e = \{r, s\}\) and \(e' = \{i, j\}\) cross over at \(\epsilon\) if the pairs \((\overline{c}_{i j}, d_i + d_j)\) and \((\overline{c}_{r s}, d_r + d_s)\) are different but
\[ \overline{c}_{i j} + \epsilon(d_i + d_j) = \overline{c}_{r s} + \epsilon(d_r + d_s) \]
From that equation, we can derive a formula for \(\epsilon\).
\[ \begin{array}{r c l} \overline{c}_{i j} + \epsilon(d_i + d_j) &=& \overline{c}_{r s} + \epsilon(d_r + d_s) \\\ \epsilon(d_i + d_j) &=& \overline{c}_{r s} + \epsilon(d_r + d_s) - \overline{c}_{i j} \\\ \epsilon(d_i + d_j) - \epsilon(d_r + d_s) &=& \overline{c}_{r s} - \overline{c}_{i j} \\\ \epsilon\left((d_i + d_j) - (d_r + d_s)\right) &=& \overline{c}_{r s} - \overline{c}_{i j} \\\ \epsilon(d_i + d_j - d_r - d_s) &=& \overline{c}_{r s} - \overline{c}_{i j} \\\ \epsilon &=& \displaystyle \frac{\overline{c}_{r s} - \overline{c}_{i j}}{d_i + d_j - d_r - d_s} \end{array} \]
So we can now find \(\epsilon\) for any pair of edges which are substitutes for each other, but we need to be able to find substitutes in the 1-Tree.
We know that \(e'\) is a substitute for \(e\) if and only if \(e\) and \(e'\) are both incident to vertex 1 or \(e\) is in a cycle of \(T^k \cup \{e'\}\) that does not pass through vertex 1.
In a more formal sense, we are trying to find edges in the same fundamental cycle as \(e’\).
A fundamental cycle is created when any edge not in a spanning tree is added to that spanning tree.
Because the endpoints of this edge are connected by one, unique path this creates a unique cycle.
In order to find this cycle, we will take advantage of find_cycle within the NetworkX library.
Below is a pseudocode procedure that uses Theorem 4 to find \(\epsilon(\pi, d)\) that I sketched out. It is not well optimized, but will find \(\epsilon(\pi, d)\).
# Input: An element k of K(pi, d), the vector pi and the vector d.
# Output: epsilon(pi, d) using Theorem 4 on page 1150.
min_epsilon = infinity
for each edge e = (i, j) in the graph G:
    if e is in k:
        continue
    add e to k
    let v be the terminating end of e
    c = find_cycle(k, v)
    for each edge a = (r, s) in c other than e:
        if d[i] + d[j] = d[r] + d[s]:
            continue  # identical or parallel pairs never cross over
        epsilon = (a[cost] - e[cost]) / (d[i] + d[j] - d[r] - d[s])
        min_epsilon = min(min_epsilon, epsilon)
    remove e from k
return min_epsilon
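Translating that listing into NetworkX terms gives a rough sketch (the helper name is mine, and for brevity this ignores the special case of substitutes that are both incident to vertex 1):

```python
import networkx as nx

def find_epsilon(G, T, d, weight="weight"):
    """Minimum crossover epsilon over all substitute pairs (Theorem 4).
    G: the underlying graph, T: a minimum 1-tree, d: dict of d_i values."""
    min_epsilon = float("inf")
    for i, j, c_ij in G.edges(data=weight, default=1):
        if T.has_edge(i, j):
            continue
        T.add_edge(i, j, **{weight: c_ij})
        cycle = nx.find_cycle(T, source=i)  # the fundamental cycle of (i, j)
        for r, s in ((u, v) for u, v in cycle if {u, v} != {i, j}):
            denominator = d[i] + d[j] - d[r] - d[s]
            if denominator == 0:
                continue  # identical or parallel pairs never cross over
            epsilon = (T[r][s][weight] - c_ij) / denominator
            if 0 < epsilon < min_epsilon:  # only positive step sizes matter
                min_epsilon = epsilon
        T.remove_edge(i, j)
    return min_epsilon
```

Here `d` is a dict mapping each vertex to its component of the direction of ascent, and `T` is restored to the original tree before the function returns.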
The ascent method is also slow, but would run better on a modern computer. When Held and Karp programmed it, they tested it on some small problems of up to 25 vertices; while the time per iteration was small, the number of iterations grew quickly. They do not comment on whether this is a better method than the Column Generation technique, but do point out that they did not determine if this method always converges to a maximum point of \(w(\pi)\).
After talking with my GSoC mentors, we believe that this is the best method we can implement for the Held-Karp relaxation as needed by the Asadpour algorithm. The ascent method is embedded within this method, so the in depth exploration of the previous method is required to implement this one. Most of the notation in this method is reused from the ascent method.
The branch and bound method utilizes the concept that a vertex can be out-of-kilter. A vertex \(i\) is out-of-kilter high if
\[ \forall\ k \in K(\pi),\ v_{i k} \geq 1 \]
Similarly, vertex \(i\) is out-of-kilter low if
\[ \forall\ k \in K(\pi),\ v_{i k} = -1 \]
Remember that \(v_{i k}\) is the degree of the vertex minus 2. We know that all the vertices have a degree of at least one, otherwise the 1-Tree \(T^k\) would not be connected. An out-of-kilter high vertex has a degree of 3 or higher in every minimum 1-Tree and an out-of-kilter low vertex has a degree of only one in all of the minimum 1-Trees. Our goal is a minimum 1-Tree where every vertex has a degree of 2.
If we know that a vertex is out-of-kilter in either direction, we know the direction of ascent and that direction is a unit vector. Let \(u_i\) be an \(n\)-dimensional unit vector with 1 in the \(i\)-th coordinate. \(u_i\) is the direction of ascent if vertex \(i\) is out-of-kilter high and \(-u_i\) is the direction of ascent if vertex \(i\) is out-of-kilter low.
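Given the matrix of \(v_{i k}\) values over every minimum 1-Tree, detecting out-of-kilter vertices is a direct check; a small sketch (the function name is mine):

```python
def out_of_kilter_vertices(V):
    """V[i][k] = v_{ik} = (degree of vertex i in minimum 1-Tree k) - 2.
    Returns the out-of-kilter high and out-of-kilter low vertices."""
    high = [i for i, row in enumerate(V) if all(v >= 1 for v in row)]
    low = [i for i, row in enumerate(V) if all(v == -1 for v in row)]
    return high, low
```

A vertex in `high` has degree 3 or more in every minimum 1-Tree, and a vertex in `low` has degree 1 in every minimum 1-Tree.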
Corollaries 3 and 4 from page 1151 also show that finding \(\epsilon(\pi, d)\) is simpler when a vertex is out-of-kilter as well.
Corollary 3. Assume vertex \(i\) is out-of-kilter low and let \(k\) be an element of \(K(\pi, -u_i)\). Then \(\epsilon(\pi, -u_i) = \text{min} (\overline{c}_{i j} - \overline{c}_{r s})\) such that \(\{i, j\}\) is a substitute for \(\{r, s\}\) in \(T^k\) and \(i \not\in \{r, s\}\).
Corollary 4. Assume vertex \(r\) is out-of-kilter high. Then \(\epsilon(\pi, u_r) = \text{min} (\overline{c}_{i j} - \overline{c}_{r s})\) such that \(\{i, j\}\) is a substitute for \(\{r, s\}\) in \(T^k\) and \(r \not\in \{i, j\}\).
These corollaries can be implemented with a modified version of the pseudocode listing above for finding \(\epsilon\) in the ascent method section.
Once there are no more out-of-kilter vertices, the direction of ascent is not a unit vector and fractional weights are introduced. This is the cause of a major slow down in the convergence of the ascent method to the optimal solution, so it should be avoided if possible.
Before we can discuss implementation details, there are still some more preliminaries to review. Let \(X\) and \(Y\) be disjoint sets of edges in the graph. Then let \(\mathsf{T}(X, Y)\) denote the set of 1-Trees which include all edges in \(X\) but none of the edges in \(Y\). Finally, define \(w_{X, Y}(\pi)\) and \(K_{X, Y}(\pi)\) as follows.
\[ w_{X, Y}(\pi) = \text{min}_{k \in \mathsf{T}(X, Y)} (c_k + \sum_{i=1}^{i=n} \pi_i v_{i k}) \\\ K_{X, Y}(\pi) = \{ k\ |\ c_k + \sum \pi_i v_{i k} = w_{X, Y}(\pi) \} \]
From these functions, a revised definition of out-of-kilter high and low arise, allowing a vertex to be out-of-kilter relative to \(X\) and \(Y\).
During the branch and bound method, the branches are tracked in a list where each entry has the following format.
\[[X, Y, \pi, w_{X, Y}(\pi)]\]
Where \(X\) and \(Y\) are the disjoint sets discussed earlier, \(\pi\) is the vector we are using to perturb the edge weights and \(w_{X, Y}(\pi)\) is the bound of the entry.
At each iteration of the method, we consider the list entry with the minimum bound and try to find an out-of-kilter vertex. If we find one, we apply one iteration of the ascent method using the simplified unit vector as the direction of ascent. Here we can take advantage of integral weights if they exist. Perhaps the documentation for the Asadpour implementation in NetworkX should state that integral edge weights will perform better but that claim will have to be supported by our testing.
If there is not an out-of-kilter vertex, we still need to find the direction of ascent in order to determine if we are at the maximum of \(w(\pi)\). If the direction of ascent exists, we branch. If there is no direction of ascent, we search for a tour among \(K_{X, Y}(\pi)\) and if none is found, we also branch.
The branching process is as follows. From entry \([X, Y, \pi, w_{X, Y}(\pi)]\) an edge \(e \not\in X \cup Y\) is chosen (Held and Karp do not give any criteria to branch on, so I believe the choice can be arbitrary) and the parent entry is replaced with two other entries of the forms
\[ [X \cup \{e\}, Y^*, \pi, w_{X \cup \{e\}, Y^*}(\pi)] \quad \text{and} \quad [X^*, Y \cup \{e\}, \pi, w_{X^*, Y \cup \{e\}}(\pi)] \]
An example of the branch and bound method is given on pages 1153 through 1156 in the Held and Karp paper.
In order to implement this method, we need to be able to determine whether a vertex is out-of-kilter, in addition to modifying some of the details of the ascent method.
The Held and Karp paper states that in order to find an out-of-kilter vertex, all we need to do is test the unit vectors: if, for an arbitrary member \(k\) of \(K(\pi, u_i)\), \(v_{i k} \geq 1\), then vertex \(i\) is out-of-kilter high, and the appropriate inverse holds for out-of-kilter low. From this process we can find out-of-kilter vertices by sequentially checking the \(u_i\)'s in an \(O(n^2)\) procedure.
Searching \(K_{X, Y}(\pi)\) for a tour would be easy if we could enumerate that set of minimum 1-Trees. While I know how to find one of the minimum 1-Trees, or a member of \(K(\pi)\), I am not sure how to find elements of \(K(\pi, d)\) or even all of the members of \(K(\pi)\). Using the properties in the Held and Karp paper, I do know how to refine \(K(\pi)\) into \(K(\pi, d)\) and \(K(\pi)\) into \(K_{X, Y}(\pi)\). This will have to be a blog post for another time.
The most promising research paper I have been able to find on this problem is this 2005 paper by Sörensen and Janssens titled An Algorithm to Generate all Spanning Trees of a Graph in Order of Increasing Cost. From here we generate spanning trees or arborescences until the cost moves upward at which point we have found all elements of \(K(\pi)\).
Held and Karp did not program this method. We have some reason to believe that its performance will be the best: it is designed as an improvement over the ascent method, which was tested (somewhat) up to \(n = 25\), already better than the column generation technique, which was only consistently able to solve up to \(n = 12\).
[1] A. Asadpour, M. X. Goemans, A. Madry, S. O. Gharan, and A. Saberi, An o(log n / log log n)-approximation algorithm for the asymmetric traveling salesman problem, Operations Research, 65 (2017), pp. 1043-1061, https://homes.cs.washington.edu/~shayan/atsp.pdf.
[2] M. Held, R.M. Karp, The traveling-salesman problem and minimum spanning trees, Operations Research, 1970-11-01, Vol. 18 (6), pp. 1138-1162, https://www.jstor.org/stable/169411
Now that my proposal was accepted by NetworkX for the 2021 Google Summer of Code (GSoC), I can get more into the technical details of how I plan to implement the Asadpour algorithm within NetworkX.
In this post I am going to outline my thought process for the control scheme of my implementation and create function stubs according to my GSoC proposal.
Most of the work for this project will happen in networkx.algorithms.approximation.traveling_salesman.py, where I will finish the last algorithm for the Traveling Salesman Problem so it can be merged into the project. The main function in traveling_salesman.py is
def traveling_salesman_problem(G, weight="weight", nodes=None, cycle=True, method=None):
"""
...
Parameters
----------
G : NetworkX graph
Undirected possibly weighted graph
nodes : collection of nodes (default=G.nodes)
collection (list, set, etc.) of nodes to visit
weight : string, optional (default="weight")
Edge data key corresponding to the edge weight.
If any edge does not have this attribute the weight is set to 1.
cycle : bool (default: True)
Indicates whether a cycle should be returned, or a path.
Note: the cycle is the approximate minimal cycle.
The path simply removes the biggest edge in that cycle.
method : function (default: None)
A function that returns a cycle on all nodes and approximates
the solution to the traveling salesman problem on a complete
graph. The returned cycle is then used to find a corresponding
solution on `G`. `method` should be callable; take inputs
`G`, and `weight`; and return a list of nodes along the cycle.
Provided options include :func:`christofides`, :func:`greedy_tsp`,
:func:`simulated_annealing_tsp` and :func:`threshold_accepting_tsp`.
If `method is None`: use :func:`christofides` for undirected `G` and
:func:`threshold_accepting_tsp` for directed `G`.
To specify parameters for these provided functions, construct lambda
functions that state the specific value. `method` must have 2 inputs.
(See examples).
...
"""
All user calls to find an approximation to the traveling salesman problem will go through this function.
My implementation of the Asadpour algorithm will also need to be compatible with this function.
traveling_salesman_problem will handle creating a new, complete graph using the weight of the shortest path between nodes \(u\) and \(v\) as the weight of that arc, so we know that by the time the graph is passed to the Asadpour algorithm it is a complete digraph which satisfies the triangle inequality.
The main function also handles the nodes and cycle parameters by only copying the necessary nodes into the complete digraph before calling the requested method, and afterwards searching for and removing the largest arc within the returned cycle.
Thus, the parent function for the Asadpour algorithm only needs to deal with the graph itself and the weights or costs of the arcs in the graph.
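For reference, this is how a caller already routes a request through that function today, here with the default christofides method on a small undirected graph (the weights are illustrative):

```python
import networkx as nx
from networkx.algorithms.approximation import traveling_salesman_problem

# Build a small complete weighted graph
G = nx.complete_graph(5)
for u, v in G.edges:
    G[u][v]["weight"] = abs(u - v)

# With no method given, undirected graphs default to christofides
cycle = traveling_salesman_problem(G)
assert set(cycle) == set(G.nodes)  # the tour visits every node
```

Once asadpour_tsp is merged, a directed complete graph would flow through the same entry point.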
My controlling function will have the following signature and I have included a draft of the docstring as well.
def asadpour_tsp(G, weight="weight"):
"""
Returns an O( log n / log log n ) approximate solution to the traveling
salesman problem.
This approximate solution is one of the best known approximations for
the asymmetric traveling salesman problem developed by Asadpour et al,
[1]_. The algorithm first solves the Held-Karp relaxation to find a
lower bound for the weight of the cycle. Next, it constructs an
exponential distribution of undirected spanning trees where the
probability of an edge being in the tree corresponds to the weight of
that edge using a maximum entropy rounding scheme. Next we sample that
distribution $2 \\log n$ times and save the minimum sampled tree once
the direction of the arcs is added back to the edges. Finally,
we augment and then short circuit that graph to find the approximate tour
for the salesman.
Parameters
----------
G : nx.DiGraph
The graph should be a complete weighted directed graph.
The distance between all pairs of nodes should be included.
weight : string, optional (default="weight")
Edge data key corresponding to the edge weight.
If any edge does not have this attribute the weight is set to 1.
Returns
-------
cycle : list of nodes
Returns the cycle (list of nodes) that a salesman can follow to minimize
the total weight of the trip.
Raises
------
NetworkXError
If `G` is not complete, the algorithm raises an exception.
References
----------
.. [1] A. Asadpour, M. X. Goemans, A. Madry, S. O. Gharan, and A. Saberi,
An o(log n/log log n)-approximation algorithm for the asymmetric
traveling salesman problem, Operations research, 65 (2017),
pp. 1043–1061
"""
pass
Following my GSoC proposal, the next function is held_karp, which will solve the Held-Karp relaxation on the complete digraph using the ellipsoid method (see my last two posts here and here for my thoughts on why and how to accomplish this).
Solving the Held-Karp relaxation is the first step in the algorithm.
Recall that the Held-Karp relaxation is defined as the following linear program:
\[ \begin{array}{c l l} \text{min} & \sum_{a} c(a)x_a \\\ \text{s.t.} & x(\delta^+(U)) \geqslant 1 & \forall\ U \subset V \text{ and } U \not= \emptyset \\\ & x(\delta^+(v)) = x(\delta^-(v)) = 1 & \forall\ v \in V \\\ & x_a \geqslant 0 & \forall\ a \end{array} \]
and that it is a semi-infinite program so it is too large to be solved in conventional forms. The algorithm uses the solution to the Held-Karp relaxation to create a vector \(z^*\) which is a symmetrized and slightly scaled down version of the true Held-Karp solution \(x^*\). \(z^*\) is defined as
\[ z^*_{{u, v}} = \frac{n - 1}{n} \left(x^*_{uv} + x^*_{vu}\right) \]
and since this is what the algorithm uses to build the rest of the approximation, this should be one of the return values from held_karp.
I will also return the value of the cost of \(x^*\), which is denoted as \(c(x^*)\) or \(OPT_{HK}\) in the Asadpour paper [1].
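With \(x^*\) stored as a dense \(n \times n\) array, the symmetrization into \(z^*\) is mechanical; a sketch (the symmetric-matrix representation of the unordered pairs is my choice):

```python
import numpy as np

def z_star_from_x_star(x_star):
    """z*_{u,v} = ((n - 1) / n) * (x*_{uv} + x*_{vu}), stored symmetrically."""
    n = x_star.shape[0]
    return (n - 1) / n * (x_star + x_star.T)
```

Since \(x^*_{uv} + x^*_{vu}\) is the same for both orderings of the pair, the returned matrix is symmetric by construction.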
Additionally, the separation oracle will be defined as an inner function within held_karp.
At the present moment I am not sure what the exact parameters for the separation oracle, sep_oracle, will be, but it should take the point the algorithm wishes to test and will need access to the graph the algorithm is relaxing.
In particular, I’m not sure yet how I will represent the hyperplane which is returned by the separation oracle.
def _held_karp(G, weight="weight"):
"""
Solves the Held-Karp relaxation of the input complete digraph and scales
the output solution for use in the Asadpour [1]_ ATSP algorithm.
The Held-Karp relaxation defines the lower bound for solutions to the
ATSP, although it does return a fractional solution. This is used in the
Asadpour algorithm as an initial solution which is later rounded to an
integral tree within the spanning tree polytope. This function solves
the relaxation with the ellipsoid method for linear programs.
Parameters
----------
G : nx.DiGraph
The graph should be a complete weighted directed graph.
The distance between all pairs of nodes should be included.
weight : string, optional (default="weight")
Edge data key corresponding to the edge weight.
If any edge does not have this attribute the weight is set to 1.
Returns
-------
OPT : float
The cost for the optimal solution to the Held-Karp relaxation
z_star : numpy array
A symmetrized and scaled version of the optimal solution to the
Held-Karp relaxation for use in the Asadpour algorithm
References
----------
.. [1] A. Asadpour, M. X. Goemans, A. Madry, S. O. Gharan, and A. Saberi,
An o(log n/log log n)-approximation algorithm for the asymmetric
traveling salesman problem, Operations research, 65 (2017),
pp. 1043–1061
"""
def sep_oracle(point):
"""
The separation oracle used in the ellipsoid algorithm to solve the
Held-Karp relaxation.
This 'black-box' takes a point and checks to see if it violates any
of the Held-Karp constraints, which are defined as
- The out-degree of all non-empty subsets of $V$ is at least one.
- The in-degree and out-degree of each vertex in $V$ is equal to
one. Note that if a vertex has more than one incoming or
outgoing arc, the values of each could be less than one so long
as they sum to one.
- The current value for each arc is non-negative.
Parameters
----------
point : numpy array
The point in n dimensional space we wish to test to see if it
violates any of the Held-Karp constraints.
Returns
-------
numpy array
The hyperplane which was the most violated by `point`, i.e. the
hyperplane defining the polytope of spanning trees which `point`
was farthest from, None if no constraints are violated.
"""
pass
pass
Next the algorithm uses the symmetrized and scaled version of the Held-Karp solution to construct an exponential distribution of undirected spanning trees which preserves the marginal probabilities.
def _spanning_tree_distribution(z_star):
"""
Solves the Maximum Entropy Convex Program in the Asadpour algorithm [1]_
using the approach in section 7 to build an exponential distribution of
undirected spanning trees.
This algorithm ensures that the probability of any edge in a spanning
tree is proportional to the sum of the probabilities of the trees
containing that edge over the sum of the probabilities of all spanning
trees of the graph.
Parameters
----------
z_star : numpy array
The output of `_held_karp()`, a scaled version of the Held-Karp
solution.
Returns
-------
gamma : numpy array
The probability distribution which approximately preserves the marginal
probabilities of `z_star`.
"""
pass
Now that the algorithm has the distribution of spanning trees, we need to sample them. Each sampled tree is a \(\lambda\)-random tree and can be sampled using algorithm A8 in [2].
def _sample_spanning_tree(G, gamma):
"""
Sample one spanning tree from the distribution defined by `gamma`,
roughly using algorithm A8 in [1]_ .
We 'shuffle' the edges in the graph, and then probabilistically
determine whether to add the edge conditioned on all of the previous
edges which were added to the tree. Probabilities are calculated using
Kirchhoff's Matrix Tree Theorem and a weighted Laplacian matrix.
Parameters
----------
G : nx.Graph
An undirected version of the original graph.
gamma : numpy array
The probabilities associated with each of the edges in the undirected
graph `G`.
Returns
-------
nx.Graph
A spanning tree using the distribution defined by `gamma`.
References
----------
.. [1] V. Kulkarni, Generating random combinatorial objects, Journal of
algorithms, 11 (1990), pp. 185–207
"""
pass
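The probabilities mentioned in the docstring come from Kirchhoff's Matrix Tree Theorem; the basic primitive, counting (or weight-summing) spanning trees as a cofactor of the Laplacian, can be sketched as:

```python
import networkx as nx
import numpy as np

def spanning_tree_count(G, weight=None):
    """By Kirchhoff's theorem, deleting any one row and the matching column
    from the Laplacian and taking the determinant counts the spanning trees
    (or sums their products of edge weights when `weight` is set)."""
    L = nx.laplacian_matrix(G, weight=weight).toarray().astype(float)
    return float(np.linalg.det(L[1:, 1:]))
```

Ratios of such determinants, on graphs with edges conditioned in or out, give the probabilities needed while sampling.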
At this point there is only one function left to discuss, laplacian_matrix. This function already exists within NetworkX at networkx.linalg.laplacianmatrix.laplacian_matrix, and even though it is relatively simple to implement, I'd rather use an existing version than create duplicate code within the project. A deeper look at the function signature reveals
A deeper look at the function signature reveals
@not_implemented_for("directed")
def laplacian_matrix(G, nodelist=None, weight="weight"):
"""Returns the Laplacian matrix of G.
The graph Laplacian is the matrix L = D - A, where
A is the adjacency matrix and D is the diagonal matrix of node degrees.
Parameters
----------
G : graph
A NetworkX graph
nodelist : list, optional
The rows and columns are ordered according to the nodes in nodelist.
If nodelist is None, then the ordering is produced by G.nodes().
weight : string or None, optional (default='weight')
The edge data key used to compute each value in the matrix.
If None, then each edge has weight 1.
Returns
-------
L : SciPy sparse matrix
The Laplacian matrix of G.
Notes
-----
For MultiGraph/MultiDiGraph, the edges weights are summed.
See Also
--------
to_numpy_array
normalized_laplacian_matrix
laplacian_spectrum
"""
This is exactly what I need, except that the decorator states the function does not support directed graphs, and this algorithm deals with directed graphs. Fortunately, our distribution of spanning trees is defined on the graph obtained by disregarding arc directions, which is undirected, so we can still use the existing function. The definition given in the Asadpour paper [1] is
\[ L_{i,j} = \left\{ \begin{array}{l l} -\lambda_e & e = (i, j) \in E \\\ \sum_{e \in \delta({i})} \lambda_e & i = j \\\ 0 & \text{otherwise} \end{array} \right. \]
Where \(E\) is defined as “Let \(E\) be the support of graph of \(z^*\) when the direction of the arcs are disregarded” on page 5 of the Asadpour paper. Thus, I can use the existing method without having to create a new one, which will save time and effort on this GSoC project.
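As a quick sanity check, the paper's definition of \(L\) can be reproduced in a few lines of NumPy and compared against NetworkX's existing laplacian_matrix on an undirected graph. The \(\lambda\) values here are made-up illustrative weights, not output of the actual algorithm.

```python
import networkx as nx
import numpy as np

# Undirected support graph with illustrative lambda weights on each edge.
G = nx.Graph()
G.add_weighted_edges_from([(0, 1, 2.0), (1, 2, 3.0), (0, 2, 5.0)], weight="lambda")

# Build L = D - A directly from the paper's definition: off-diagonal
# entries are -lambda_e, diagonal entries sum the incident lambdas.
nodes = list(G.nodes())
A = nx.to_numpy_array(G, nodelist=nodes, weight="lambda")
L_manual = np.diag(A.sum(axis=1)) - A

# NetworkX's existing implementation, pointed at the same weight key.
L_nx = nx.laplacian_matrix(G, nodelist=nodes, weight="lambda").toarray()

assert np.allclose(L_manual, L_nx)
```

The only thing to remember is to pass `weight="lambda"` (or whatever key the edge data uses) so the existing function reads the \(\lambda\) values rather than a default weight of 1.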
In addition to being discussed here, these function stubs have been added to my fork of NetworkX on the bothTSP branch. The commit, Added function stubs and draft docstrings for the Asadpour algorithm, is visible on my GitHub using that link.
[1] A. Asadpour, M. X. Goemans, A. Madry, S. Oveis Gharan, and A. Saberi, An o(log n / log log n)-approximation algorithm for the asymmetric traveling salesman problem, Operations Research, 65 (2017), pp. 1043-1061, https://homes.cs.washington.edu/~shayan/atsp.pdf.
[2] V. Kulkarni, Generating random combinatorial objects, Journal of algorithms, 11 (1990), pp. 185–207
Continuing the theme of my last post, we know that the Held-Karp relaxation in the Asadpour Asymmetric Traveling Salesman Problem cannot be practically written into the standard matrix form of a linear program. Thus, we need a different method to solve the relaxation, which is where the ellipsoid method comes into play. The ellipsoid method can be used to solve semi-infinite linear programs, which is what the Held-Karp relaxation is.
One of the keys to the ellipsoid method is the separation oracle. From the perspective of the algorithm itself, the oracle is a black-box program which takes a vector and determines whether that vector lies in the feasible region of the program, and, if it does not, returns a hyperplane separating the vector from the feasible region.
In the most basic form, the ellipsoid method is a decision algorithm rather than an optimization algorithm, so it terminates once a single, but almost certainly nonoptimal, vector within the feasible region is found. However, we can convert the ellipsoid method into an algorithm which is truly an optimization one. What this means for us is that we can assume that the separation oracle will return a hyperplane.
The hyperplane that the oracle returns is then used to construct the next ellipsoid in the algorithm, which is of smaller volume and contains a half-ellipsoid from the originating ellipsoid. This is, however, a topic for another post. Right now I want to focus on this ‘black-box’ separation oracle.
The reason that the Held-Karp relaxation is semi-infinite is because for a graph with \(n\) vertices, there are \(2^n + 2n\) constraints in the program. A naive approach to the separation oracle would be to check each constraint individually for the input vector, creating a program with \(O(2^n)\) running time. While it would terminate eventually, it certainly would take a long time to do so.
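To make the naive approach concrete, here is a brute-force check of the subtour-elimination constraints; the dict `x` mapping arcs to their fractional values is a hypothetical interface chosen for illustration.

```python
import itertools


def naive_subtour_check(V, x):
    """Check every nonempty proper subset U of V for a violated
    x(delta^+(U)) >= 1 constraint.  O(2^n) subsets -- illustration only.

    ``x`` maps each arc (u, v) to its fractional value.  Returns a
    violated subset, or None if all constraints hold.
    """
    V = list(V)
    for r in range(1, len(V)):
        for U in itertools.combinations(V, r):
            U_set = set(U)
            # Total fractional weight of arcs leaving U.
            out_flow = sum(val for (u, v), val in x.items()
                           if u in U_set and v not in U_set)
            if out_flow < 1:
                return U_set  # this subset's constraint is the cut hyperplane
    return None
```

For a complete digraph on three vertices with every arc at 0.5, every cut has weight exactly 1 and the check passes; lowering any single arc value immediately exposes a violated subset.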
So, we look for a more efficient way to do this. Recall from the Asadpour paper [1] that the Held-Karp relaxation is the following linear program.
\[ \begin{array}{c l l} \text{min} & \sum_{a} c(a)x_a \\\ \text{s.t.} & x(\delta^+(U)) \geqslant 1 & \forall\ U \subset V \text{ and } U \not= \emptyset \\\ & x(\delta^+(v)) = x(\delta^-(v)) = 1 & \forall\ v \in V \\\ & x_a \geqslant 0 & \forall\ a \end{array} \]
The first set of constraints ensures that the output of the relaxation is connected. This is called subtour elimination, and it prevents a solution with multiple disconnected clusters by ensuring that every set of vertices has at least one total outgoing arc (we are currently dealing with fractional arcs). From the perspective of the separation oracle, we do not care about all of the sets of vertices for which \(x(\delta^+(U)) \geqslant 1\), only trying to find one such subset of the vertices where \(x(\delta^+(U)) < 1\).
In order to find such a set of vertices \(U \subset V\) where \(x(\delta^+(U)) < 1\), we can find the subset \(U\) with the smallest value of \(x(\delta^+(U))\) over all \(U \subset V\). That is, we find the global minimum cut in the complete digraph, using the entries of the input vector to the separation oracle as edge capacities. Using lecture notes by Michel X. Goemans (who is also one of the authors of the Asadpour algorithm this project seeks to implement) [2], we can find such a minimum cut with \(2(n - 1)\) maximum flow calculations.
The algorithm described in section 6.4 of the lecture notes [2] is fairly simple. Let \(S\) and \(T\) be the two sides of a global minimum cut of the graph. First, we pick an arbitrary vertex \(s\); by definition, \(s\) is either in \(S\) or in \(T\). We then iterate through every other vertex \(t\) in the graph and compute both the \(s\)-\(t\) and \(t\)-\(s\) minimum cuts. If \(s \in S\), then one of the choices of \(t\) will produce the global minimum cut via an \(s\)-\(t\) cut, and the case where \(s \not\in S\), i.e. \(s \in T\), is covered by the \(t\)-\(s\) cuts.
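This procedure can be sketched with NetworkX's built-in minimum cut routine. The assumption here is that the arc values live in a `capacity` edge attribute, which is the default attribute name that NetworkX's flow functions read.

```python
import networkx as nx


def global_min_cut_value(D):
    """Global minimum cut value of a weighted digraph ``D`` via
    2(n - 1) s-t max-flow computations, following section 6.4 of
    Goemans' lecture notes.

    Arc values are read from the 'capacity' edge attribute.
    """
    nodes = list(D.nodes())
    s = nodes[0]  # arbitrary fixed vertex; it lies on one side of the cut
    best = float("inf")
    for t in nodes[1:]:
        # Try s on the source side and on the sink side of the cut.
        best = min(best,
                   nx.minimum_cut_value(D, s, t),
                   nx.minimum_cut_value(D, t, s))
    return best
```

In the separation oracle, `nx.minimum_cut` (which also returns the partition \((S, T)\)) would be the natural choice instead, since the violated subset itself is needed to build the hyperplane.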
According to Goemans [2], the complexity of finding the global minimum cut in a weighted digraph, using an efficient max-flow algorithm, is \(O(mn^2\log(n^2/m))\).
The second set of constraints can be checked in linear time with a simple loop over the arcs and vertices. It makes sense to check these first, since they are computationally cheaper; if one of them is violated, we can return the violated hyperplane sooner.
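That check is a straightforward accumulation pass; as before, the dict `x` mapping arcs to values is a hypothetical interface for illustration.

```python
def degree_constraint_check(V, x, tol=1e-9):
    """Verify x(delta^+(v)) = x(delta^-(v)) = 1 for every vertex in one
    pass over the arcs, i.e. O(n + m) time.

    Returns the first offending vertex, or None if all constraints hold.
    """
    out_sum = {v: 0.0 for v in V}
    in_sum = {v: 0.0 for v in V}
    for (u, v), val in x.items():
        out_sum[u] += val  # arc leaves u: contributes to delta^+(u)
        in_sum[v] += val   # arc enters v: contributes to delta^-(v)
    for v in V:
        if abs(out_sum[v] - 1) > tol or abs(in_sum[v] - 1) > tol:
            return v  # this vertex's equality constraint supplies the hyperplane
    return None
```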
Now we have reduced the complexity of the oracle from \(O(2^n)\) to that of finding the global minimum cut, \(O(mn^2\log(n^2/m))\), which is substantially better. For example, consider an initial graph with 100 vertices. Using the \(O(2^n)\) method, there are \(1.2677 \times 10^{30}\) subsets \(U\) to check, times whatever the cost of actually determining whether each one violates \(x(\delta^+(U)) \geqslant 1\). For that same complete digraph on 100 vertices, we have \(n = 100\) and \(m = \binom{100}{2} = 4950\). Using the global minimum cut approach, the complexity, which accounts for both finding each max flow and the number of times it must be found, is \(15117042\) or about \(1.5117 \times 10^7\), which is faster by a factor of roughly \(10^{23}\).
[1] A. Asadpour, M. X. Goemans, A. Madry, S. Oveis Gharan, and A. Saberi, An o(log n / log log n)-approximation algorithm for the asymmetric traveling salesman problem, Operations Research, 65 (2017), pp. 1043-1061, https://homes.cs.washington.edu/~shayan/atsp.pdf.
[2] M. X. Goemans, Lecture notes on flows and cuts, Handout 18, Massachusetts Institute of Technology, Cambridge, MA, 2009 http://www-math.mit.edu/~goemans/18433S09/flowscuts.pdf.
In linear programming, we sometimes need to take what would be an integer program and ‘relax’ it, or unbound the values of the variables so that they are continuous. One particular application of this process is the Held-Karp relaxation used in the first part of the Asadpour algorithm for the Asymmetric Traveling Salesman Problem, where we find the lower bound of the approximation. Normally the relaxation is written as follows.
\[ \begin{array}{c l l} \text{min} & \sum_{a} c(a)x_a \\\ \text{s.t.} & x(\delta^+(U)) \geqslant 1 & \forall\ U \subset V \text{ and } U \not= \emptyset \\\ & x(\delta^+(v)) = x(\delta^-(v)) = 1 & \forall\ v \in V \\\ & x_a \geqslant 0 & \forall\ a \end{array} \]
This is a convenient way to write the program, but if we want to solve it, and we definitely do, we need it written in standard form for a linear program. Standard form is represented using a matrix for the set of constraints and vectors for the objective function. It is shown below
\[ \begin{array}{c l} \text{min} & Z = c^TX \\\ \text{s.t.} & AX = b \\\ & X \geqslant 0 \end{array} \]
Where \(c\) is the coefficient vector for the objective function, \(X\) is the vector of the values of all of the variables, \(A\) is the coefficient matrix for the constraints, and \(b\) is the vector of what the constraints are equal to. Once a linear program is in this form, there are efficient algorithms which can solve it.
In the Held-Karp relaxation, the objective function is already a summation, so we can expand it term by term. If there are \(n\) edges then it becomes
\[ \sum_{a} c(a)x_a = c(1)x_1 + c(2)x_2 + c(3)x_3 + \dots + c(n)x_n \]
Where \(c(a)\) is the weight of that edge in the graph. From here it is easy to convert the objective function into two vectors which satisfies the standard form.
\[ \begin{array}{rCl} c &=& \begin{bmatrix} c_1 & c_2 & c_3 & \dots & c_n \end{bmatrix}^T \\\ X &=& \begin{bmatrix} x_1 & x_2 & x_3 & \dots & x_n \end{bmatrix}^T \end{array} \]
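In code, this conversion is nothing more than stacking the coefficients and variables into arrays so the objective becomes a dot product; the numbers below are arbitrary illustrative values for a graph with four arcs.

```python
import numpy as np

# Illustrative weights c(a) and fractional arc values x_a for four arcs.
c = np.array([2.0, 1.0, 3.0, 4.0])
X = np.array([0.5, 1.0, 0.25, 0.0])

# The vector form c^T X equals the expanded summation sum_a c(a) x_a.
objective = c @ X
assert np.isclose(objective, sum(ca * xa for ca, xa in zip(c, X)))
```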
Now we have to convert the constraints to standard form. First and foremost, notice that the Held-Karp relaxation contains \(x_a \geqslant 0\ \forall\ a\) and the standard form uses \(X \geqslant 0\), so these constraints already match and no work is needed. As for the others… well, they do need some work.
Starting with the first constraint in the Held-Karp relaxation, \(x(\delta^+(U)) \geqslant 1\ \forall\ U \subset V\) and \(U \not= \emptyset\). This constraint specifies that every subset of the vertex set \(V\) must have at least one arc with its tail in \(U\) and its head not in \(U\). For any given \(\delta^+(U)\), which the paper defines as \(\delta^+(U) = \{a = (u, v) \in A : u \in U, v \not\in U\}\), where \(A\) here is the set of all arcs in the graph, the coefficients on arcs not in \(\delta^+(U)\) are zero. Arcs in \(\delta^+(U)\) have a coefficient of \(1\), as their full weight is counted as part of \(x(\delta^+(U))\). There are about \(2^{|V|}\) subsets of the vertex set \(V\), so this constraint adds that many rows to the constraint matrix \(A\).
Moving to the next constraint, \(x(\delta^+(v)) = x(\delta^-(v)) = 1\), we first need to split it in two.
\[ \begin{array}{rCl} x(\delta^+(v)) &=& 1 \\\ x(\delta^-(v)) &=& 1 \end{array} \]
Similar to the last constraint, each of these says that the total arc weight entering and leaving each vertex of the graph must equal one. For each vertex \(v\), the arcs which start at \(v\) are the members of \(\delta^+(v)\), so they have a coefficient of 1 while all others have a coefficient of zero. The opposite is true for \(\delta^-(v)\): every arc whose head is at \(v\) has a coefficient of 1 while the rest are zero. This adds \(2 \times |V|\) rows to the coefficient matrix \(A\), bringing the total to \(2^{|V|} + 2|V|\) rows.
We already know that \(A\) will have \(2^{|V|} + 2|V|\) rows. But how many columns will \(A\) have? Each arc is a variable, so there are at least \(|E|\) columns, but in the traditional matrix form of a linear program we must introduce slack and surplus variables so that \(AX = b\) rather than \(AX \geqslant b\) or any other inequality. The \(2|V|\) degree rows already comply with this requirement, but the rows created from the subsets of \(V\) do not; those rows only require that \(x(\delta^+(U)) \geqslant 1\), so we introduce a surplus variable for each of them, bringing the column count to \(|E| + 2^{|V|}\).
Now, the Held-Karp relaxation performed in the Asadpour algorithm is done on the complete bi-directed graph. For a graph with \(n\) vertices, there will be \(2 \times \binom{n}{2}\) arcs in the graph. The size of \(A\) is then
\[ \left(2^n + 2n \right)\times \left(2\binom{n}{2} + 2^n\right) \]
matrix. This is very large. For \(n = 100\) there are about \(1.606 \times 10^{60}\) elements in the matrix. Allocating a measly 8 bits per entry still consumes over \(1.6 \times 10^{51}\) gigabytes of memory.
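These size claims are easy to verify directly from the row and column counts derived above:

```python
import math

n = 100
rows = 2**n + 2 * n                # one row per subset of V plus 2|V| degree rows
cols = 2 * math.comb(n, 2) + 2**n  # arc variables plus one surplus variable per subset row
entries = rows * cols              # about 1.6e60 matrix entries

gigabytes = entries / 10**9        # at one byte (8 bits) per entry
```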
This is an impossible amount of memory for any computer that we could run NetworkX on.
The Held-Karp relaxation must be solved in the Asadpour Asymmetric Traveling Salesman Problem algorithm, but clearly putting it into standard form is not possible. This means that we will not be able to use SciPy’s linprog method, which I was hoping to use. I will instead have to research and write an ellipsoid method solver, which hopefully will be able to solve the Held-Karp relaxation in both polynomial time and a practical amount of memory.