Our first Contributor Spotlight interview is with Mukulika Pahari, our “go-to” person for NumPy documentation. Mukulika is a Computer Science student at Mumbai University. Her passions outside of computing involve things with paper: reading books (fiction!), folding origami, and journaling. During our interview she discussed why she joined NumPy, what keeps her motivated, and how likely she is to recommend becoming a NumPy contributor.
Hi, I am Mukulika. I live in Mumbai, India, and I’m completing my Computer Science degree at Mumbai University. I joined NumPy last summer during Google Season of Docs. The idea behind this initiative is to raise awareness of open source, the role of documentation, and the importance of technical writing. It also gives technical writers an opportunity to gain experience working on open source projects.
Apart from that, I like to read fiction – literally everything that I can put my hands on – and I find it relaxing to learn origami from YouTube tutorials.
I write technical documentation for NumPy, and I help new contributors with their questions.
The best part for me, honestly, is the people. It is inspiring to meet people from diverse backgrounds all over the world and do something together. However, I do find it quite scary to put your code out there for “the whole world to see and evaluate.” It can challenge my confidence. But meeting all the contributors, seeing their work, and getting their valuable feedback is absolutely worth it.
Since I already used NumPy in my data analysis courses in school, and now I am using it at my internship, I thought that I could also contribute to it. It is always more fun to do side projects in a group. Once you get to know the people in the NumPy community, you want to stay. They are really open and supportive!
Well, I do not really give out books to people – being a broke college student is quite a barrier. But I think that everyone should read “The Hitchhiker’s Guide to the Galaxy” by Douglas Adams. It is absolutely hilarious! It is both entertaining and spiked with wisdom.
I recently bought a nice journal and started to write in it. I find it very cleansing to put thoughts on paper and give them structure. I appreciate pretty paper products–this one has pastel pages.
I can’t think of a specific situation, but, in general, all my experiences so far seem to follow a general theme: it is absolutely okay not to be great at everything. You fail, and then you learn for the future.
My definition of success is being happy without causing harm to anyone.
Since I am at the beginning of my career, I can’t say much. But I think it is nice to listen to everyone and get feedback, with the mindset that you do not necessarily have to act on their advice. Having multiple perspectives is good.
I’d say a solid nine! It is overall a great experience.
Yes. What I like the most about the NumPy community is that it does not require huge commitments time-wise. Every little thing is appreciated, so that is certainly motivating.
By the end of this article, my goal is to convince you that if you need random numbers, you should consider using scipy.stats.qmc instead of np.random.
In the following, we assume that SciPy, NumPy and Matplotlib are installed and imported:
import numpy as np
from scipy.stats import qmc
import matplotlib.pyplot as plt
Note that no seeding is used in these examples. This will be the topic of another article: seeding should only be used for testing purposes.
So what are Monte Carlo (MC) and Quasi-Monte Carlo (QMC)?
MC methods are a broad class of computational algorithms that rely on repeated random sampling to obtain numerical results. The underlying concept is to use randomness to solve problems that might be deterministic in principle. They are often used in physical and mathematical problems and are most useful when it is difficult or impossible to use other approaches. MC methods are mainly used in three classes of problem: optimization, numerical integration, and generating draws from a probability distribution.
Put simply, this is how you would usually generate a sample of points using MC:
rng = np.random.default_rng()
sample = rng.random(size=(256, 2))
In this case, sample is a 2-dimensional array with 256 points, which can be visualized using a 2D scatter plot.
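The plotting code itself is not shown in the article; a minimal sketch (variable names are my own) could be:

```python
import numpy as np
import matplotlib.pyplot as plt

rng = np.random.default_rng()
sample = rng.random(size=(256, 2))

# plain 2D scatter plot of the MC sample
fig, ax = plt.subplots(figsize=(4, 4))
ax.scatter(sample[:, 0], sample[:, 1], s=10)
ax.set_aspect("equal")
ax.set_xlabel("$x_1$")
ax.set_ylabel("$x_2$")
plt.show()
```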
In the plot above, points are generated randomly without any knowledge of previously drawn points. It is clear that some regions of the space are left unexplored while other regions have clusters. In an optimization problem, this could mean that you would need to generate more samples to find the optimum. Or in a regression problem, you could overfit a model due to clusters of points.
Generating random numbers is a more complex problem than it sounds. Simple MC methods are designed to sample points to be independent and identically distributed (IID).
One could think that the solution is just to use a grid! But look at what happens if we use a spacing of 0.1 between points in the unit hypercube (with all bounds ranging from 0 to 1).
disc = 10
x1 = np.linspace(0, 1, disc)
x2 = np.linspace(0, 1, disc)
x3 = np.linspace(0, 1, disc)
x1, x2, x3 = np.meshgrid(x1, x2, x3)
The number of points required to fill the unit interval at this spacing is 10. In a 2-dimensional hypercube the same spacing requires 100 points, and in 3 dimensions 1,000 points. The number of samples required to fill the space thus grows exponentially with the dimensionality. This exponential growth is called the curse of dimensionality.
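To see the growth concretely, here is a small sketch (my own, not from the article) counting grid points per dimension:

```python
import numpy as np

disc = 10  # 10 points per axis, i.e. a spacing of ~0.1 on [0, 1]
for d in (1, 2, 3):
    axes = [np.linspace(0, 1, disc)] * d
    grid = np.meshgrid(*axes)
    print(d, grid[0].size)  # 10, 100, 1000: the count is disc**d
```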
To mitigate the curse of dimensionality, you could decide to randomly remove points from the grid, or to sample randomly in n dimensions. In both cases, this leads to empty regions and clusters of points elsewhere.
Quasi-Monte Carlo (QMC) methods were created specifically to address this problem. As opposed to MC methods, QMC methods are deterministic: the points are not IID, but each new point takes the previous points into account. The result is that we can construct samples with good coverage of the space.
Deterministic does not mean that the samples are always the same: the sequences can be scrambled.
Starting with version 1.7, SciPy provides QMC methods in scipy.stats.qmc.
Let’s generate 2 samples with MC and a QMC method named Sobol’.
n, d = 256, 2
rng = np.random.default_rng()
sample_mc = rng.random(size=(n, d))
qrng = qmc.Sobol(d=d)
sample_qmc = qrng.random(n=n)
The interface is very similar but, as seen below, the results are radically different.
The 2D space clearly exhibits fewer empty areas and fewer clusters with the QMC sample.
Beyond the visual improvement, there are metrics to assess the quality of a sample. Geometrical criteria are commonly used: one can compute the distances (L1, L2, etc.) between all pairs of points. But there are also statistical criteria, such as the discrepancy.
qmc.discrepancy(sample_mc)
# 0.0009
qmc.discrepancy(sample_qmc)
# 1.1e-05
The lower the value, the better the quality.
If this still does not convince you, let’s look at a concrete example: integrating a function. Consider the mean of the squared sum in 5 dimensions:
$$f(\mathbf{x}) = \left( \sum_{j=1}^{5}x_j \right)^2,$$
with $x_j \sim \mathcal{U}(0,1)$. It has a known mean value, $\mu = 5/3+5(5-1)/4$. By sampling points, we can compute that mean numerically.
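As a sanity check (my own, not part of the article), the closed-form mean $\mu = 5/3 + 5(5-1)/4 = 20/3 \approx 6.667$ can be confirmed with a quick brute-force estimate:

```python
import numpy as np

ref = 5 / 3 + 5 * (5 - 1) / 4      # exact mean, 20/3 ~= 6.667
rng = np.random.default_rng()
x = rng.random((100_000, 5))       # 100k draws of (x_1, ..., x_5), x_j ~ U(0, 1)
estimate = np.mean(np.sum(x, axis=1) ** 2)
print(estimate)  # close to 6.667
```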
The sampling is repeated 99 times and the results are averaged. The variance is not reported for simplicity; just know that it’s guaranteed to be lower with QMC than with MC.
dim = 5
ref = 5 / 3 + 5 * (5 - 1) / 4
n_conv = 99
ns_gen = 2 ** np.arange(4, 13)
def func(sample):
# dim 5, true value 5/3 + 5*(5 - 1)/4
return np.sum(sample, axis=1) ** 2
def conv_method(sampler, func, n_samples, n_conv, ref):
samples = [sampler(n_samples) for _ in range(n_conv)]
samples = np.array(samples)
evals = [np.sum(func(sample)) / n_samples for sample in samples]
squared_errors = (ref - np.array(evals)) ** 2
rmse = (np.sum(squared_errors) / n_conv) ** 0.5
return rmse
# Analysis
sample_mc_rmse = []
sample_sobol_rmse = []
rng = np.random.default_rng()
for ns in ns_gen:
# Monte Carlo
sampler_mc = lambda x: rng.random((x, dim))
conv_res = conv_method(sampler_mc, func, ns, n_conv, ref)
sample_mc_rmse.append(conv_res)
# Sobol'
engine = qmc.Sobol(d=dim)
conv_res = conv_method(engine.random, func, ns, 1, ref)
sample_sobol_rmse.append(conv_res)
sample_mc_rmse = np.array(sample_mc_rmse)
sample_sobol_rmse = np.array(sample_sobol_rmse)
# Plot
fig, ax = plt.subplots(figsize=(4, 4))
ax.set_aspect("equal")
# MC
ratio = sample_mc_rmse[0] / ns_gen[0] ** (-1 / 2)
ax.plot(ns_gen, ns_gen ** (-1 / 2) * ratio, ls="-", c="k")
ax.scatter(ns_gen, sample_mc_rmse, label="MC: np.random")
# Sobol'
ratio = sample_sobol_rmse[0] / ns_gen[0] ** (-2 / 2)
ax.plot(ns_gen, ns_gen ** (-2 / 2) * ratio, ls="-", c="k")
ax.scatter(ns_gen, sample_sobol_rmse, label="QMC: qmc.Sobol")
ax.set_xlabel(r"$N_s$")
ax.set_xscale("log")
ax.set_xticks(ns_gen)
ax.set_xticklabels([rf"$2^{{{ns}}}$" for ns in np.arange(4, 13)])
ax.set_ylabel(r"$\log (\epsilon)$")
ax.set_yscale("log")
ax.legend(loc="upper right")
fig.tight_layout()
plt.show()
With MC the approximation error follows the theoretical rate of $O(n^{-1/2})$. But QMC methods have better rates of convergence and achieve $O(n^{-1})$ for this function, and even better rates on very smooth functions.
This means that using $2^8=256$ points from Sobol’ leads to a lower error than using $2^{12}=4096$ points from MC! When the function evaluation is costly, it can bring huge computational savings.
But there is more! Another great use of QMC is to sample arbitrary distributions. In SciPy 1.8, there are new classes of samplers that allow you to sample from any custom distribution. And some of these methods can use QMC with a qrvs method.
Here is an example with a distribution from SciPy: fisk. We generate a MC sample from the distribution (either directly from the distribution with fisk.rvs or using NumericalInverseHermite.rvs) and another sample with QMC using NumericalInverseHermite.qrvs.
import scipy.stats as stats
from scipy.stats import sampling
# Any distribution
c = 3.9
dist = stats.fisk(c)
# MC
rng = np.random.default_rng()
sample_mc = dist.rvs(128, random_state=rng)
# QMC
rng_dist = sampling.NumericalInverseHermite(dist)
# sample_mc = rng_dist.rvs(128, random_state=rng) # MC alternative same as above
qrng = qmc.Sobol(d=1)
sample_qmc = rng_dist.qrvs(128, qmc_engine=qrng)
Let’s visualize the difference between MC and QMC by calculating the empirical Probability Density Function (PDF). The QMC results are clearly superior to MC.
# Visualization
fig, axs = plt.subplots(1, 2, sharey=True, sharex=True, figsize=(8, 4))
x = np.linspace(dist.ppf(0.01), dist.ppf(0.99), 100)
pdf = dist.pdf(x)
delta = np.max(pdf) * 5e-2
samples = {"MC: np.random": sample_mc, "QMC: qmc.Sobol": sample_qmc}
for ax, sample in zip(axs, samples):
ax.set_title(sample)
ax.plot(x, pdf, "-", lw=3, label="fisk PDF")
ax.plot(samples[sample], -delta - delta * np.random.random(128), "+k")
kde = stats.gaussian_kde(samples[sample])
ax.plot(x, kde(x), "-.", lw=3, label="empirical PDF")
# or use a histogram
# ax.hist(sample, density=True, histtype='stepfilled', alpha=0.2)
ax.set_xlim([0, 3])
axs[0].legend(loc="best")
fig.supylabel("Density")
fig.supxlabel("Sample value")
fig.tight_layout()
plt.show()
Careful readers will note that there is no seeding. This is intentional as noted at the beginning of this article. You might run this code again and have better results with MC. But only sometimes. And that’s exactly my point. On average, you are guaranteed to have more consistent results with a better quality using QMC. I invite you to try it and see for yourself!
I hope that I convinced you to use QMC the next time you need random numbers. QMC is superior to MC, period.
There is an extensive body of literature and rigorous proofs. One reason MC is still more popular is that QMC is harder to implement and, depending on the method, there are rules to follow.
Take the Sobol’ method we used: you must use exactly $2^n$ samples. If you don’t, you break some properties of the sequence and end up with the same performance as MC. This is why some people argue that QMC is not better: they simply don’t use the methods properly, fail to see any benefit, and conclude that MC is “enough”.
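In practice, scipy.stats.qmc.Sobol exposes a random_base2(m) method that draws exactly $2^m$ points, which makes this rule hard to get wrong:

```python
from scipy.stats import qmc

sampler = qmc.Sobol(d=2)
sample = sampler.random_base2(m=8)  # exactly 2**8 = 256 points
print(sample.shape)  # (256, 2)
```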
In scipy.stats.qmc, we went to great lengths to explain how to use the methods, and we added explicit warnings to make the methods accessible and useful to everyone.
With an extensive and high-quality ecosystem of libraries, scientific Python has emerged as the leading platform for data analysis. This ecosystem is sustained largely by volunteers working on independent projects with separate mailing lists, websites, roadmaps, documentation, engineering and packaging solutions, and governance structures.
The Scientific Python project aims to better coordinate the ecosystem and prepare the software projects in this ecosystem for the next decade of data science.
There is no shortage of blog posts around the web about how to use and explore different packages in the scientific Python ecosystem. However, some of this material is outdated or incomplete, and much of it doesn’t follow the best practices advocated by the maintainers of these packages.
In addition, we would like to create a central, community-driven location where Scientific Python projects can make announcements and share information.
Our project aims to be the definitive community blog—for people looking to make use of these libraries in education, research and industry, contribute to them, or maintain them—written, reviewed, and approved by the community of developers and users.
While our core projects (NumPy, SciPy, Matplotlib, scikit-image, NetworkX, etc.) will be regularly contributing content, we also would like to increase the number of contributors by providing support to newer members to generate high-quality, peer-reviewed blog posts.
Our goal is to populate the https://blog.scientific-python.org/ website with high-quality content, reviewed and approved by the maintainers of the libraries in the ecosystem. The main goal of these documents is to centralize information relevant to all (or most) projects in the ecosystem, at the reduced cost of being maintained in one place.
This project aims to:
To ensure this project is successful, it is recommended that the technical writer has some familiarity with at least a few of Scientific Python’s core projects.
We would consider the project successful if:
We anticipate that the project will run over six months, including onboarding five technical writers, reviewing existing material, developing blog post ideas with the project mentors and blog editorial board, writing and revising the blog posts, and providing feedback on the submission and review process.
| Dates | Action Items |
| --- | --- |
| May | Onboarding |
| June | Review existing documentation |
| July | Update contributor guide |
| August–October | Create and edit content |
| November | Project completion |
| Budget item | Amount | Running Total | Notes/justifications |
| --- | --- | --- | --- |
| Technical writers (5) | $15,000.00 | $15,000.00 | $3,000 / writer |
| TOTAL | | $15,000.00 | |
The Scientific Python project is a new initiative, and this is our first time participating in Google Season of Docs. However, both Jarrod Millman and Ross Barnowski are established members of the Python community, with a vast collective experience in mentoring, managing and maintaining large open source projects.
Jarrod cofounded the Neuroimaging in Python project. He was the NumPy and SciPy release manager from 2007 to 2009. He cofounded NumFOCUS and served on its board from 2011 to 2015. Currently, he is the release manager of NetworkX and cofounder of the Scientific Python project.
Both mentors Jarrod and Ross have mentored many new contributors on multiple projects including NumPy, SciPy, and NetworkX. Ross has served as a co-mentor for three former GSoD students on the NumPy project, largely related to generating new content for tutorials, as well as refactoring existing user documentation.
Links:
This tutorial will teach you how to create custom tables in Matplotlib, which are extremely flexible in terms of design and layout. You’ll hopefully see that the code is very straightforward! In fact, the main methods we will be using are ax.text() and ax.plot().
I want to give a lot of credit to Todd Whitehead who has created these types of tables for various Basketball teams and players. His approach to tables is nothing short of fantastic due to the simplicity in design and how he manages to effectively communicate data to his audience. I was very much inspired by his approach and wanted to be able to achieve something similar in Matplotlib.
Before I begin with the tutorial, I wanted to go through the logic behind my approach as I think it’s valuable and transferable to other visualizations (and tools!).
With that, I would like you to think of tables as highly structured and organized scatterplots. Let me explain why: for me, scatterplots are the most fundamental chart type (regardless of tool).
For example, ax.plot() automatically “connects the dots” to form a line chart, and ax.bar() automatically “draws rectangles” across a set of coordinates. Very often (again, regardless of tool) we may not see this process happening. The point is, it is useful to think of any chart as a scatterplot, or simply as a collection of shapes based on x-y coordinates. This thought process can unlock a ton of custom charts, as the only thing you need are the coordinates (which can be mathematically computed).
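For instance, here is a hypothetical sketch (my own, not from the tutorial) of a “bar chart” built from nothing but rectangles placed at computed coordinates:

```python
import matplotlib.pyplot as plt
import matplotlib.patches as patches

values = [3, 1, 4, 1, 5]

fig, ax = plt.subplots()
for x, v in enumerate(values):
    # each bar is just a rectangle anchored at an (x, y) coordinate
    ax.add_patch(patches.Rectangle((x - 0.4, 0), width=0.8, height=v))
ax.set_xlim(-1, len(values))
ax.set_ylim(0, max(values) + 1)
plt.show()
```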
With that in mind, we can move on to tables! So rather than plotting rectangles or circles we want to plot text and gridlines in a highly organized manner.
We will aim to create a table like this, which I have posted on Twitter here. Note, the only elements added outside of Matplotlib are the fancy arrows and their descriptions.
Importing required libraries.
import matplotlib as mpl
import matplotlib.patches as patches
from matplotlib import pyplot as plt
First, we will need to set up a coordinate space - I like two approaches:
I want to create a coordinate space for a table containing 6 columns and 10 rows. This means (similar to pandas row/column indices) each row will have an index between 0–9 and each column will have an index between 0–6 (technically one more column than we defined, but the column with a lot of text will span two column “indices”).
# first, we'll create a new figure and axis object
fig, ax = plt.subplots(figsize=(8, 6))
# set the number of rows and cols for our table
rows = 10
cols = 6
# create a coordinate system based on the number of rows/columns
# adding a bit of padding on bottom (-1), top (1), right (0.5)
ax.set_ylim(-1, rows + 1)
ax.set_xlim(0, cols + 0.5)
Now, the data we want to plot is sports (football) data. We have information about 10 players and some values against a number of different metrics (which will form our columns) such as goals, shots, passes etc.
# sample data
data = [
{"id": "player10", "shots": 1, "passes": 79, "goals": 0, "assists": 1},
{"id": "player9", "shots": 2, "passes": 72, "goals": 0, "assists": 1},
{"id": "player8", "shots": 3, "passes": 47, "goals": 0, "assists": 0},
{"id": "player7", "shots": 4, "passes": 99, "goals": 0, "assists": 5},
{"id": "player6", "shots": 5, "passes": 84, "goals": 1, "assists": 4},
{"id": "player5", "shots": 6, "passes": 56, "goals": 2, "assists": 0},
{"id": "player4", "shots": 7, "passes": 67, "goals": 0, "assists": 3},
{"id": "player3", "shots": 8, "passes": 91, "goals": 1, "assists": 1},
{"id": "player2", "shots": 9, "passes": 75, "goals": 3, "assists": 2},
{"id": "player1", "shots": 10, "passes": 70, "goals": 4, "assists": 0},
]
Next, we will start plotting the table (as a structured scatterplot). I did promise that the code would be very simple, fewer than 10 lines really. Here it is:
# from the sample data, each dict in the list represents one row
# each key in the dict represents a column
for row in range(rows):
# extract the row data from the list
d = data[row]
# the y (row) coordinate is based on the row index (loop)
# the x (column) coordinate is defined based on the order I want to display the data in
# player name column
ax.text(x=0.5, y=row, s=d["id"], va="center", ha="left")
# shots column - this is my "main" column, hence bold text
ax.text(x=2, y=row, s=d["shots"], va="center", ha="right", weight="bold")
# passes column
ax.text(x=3, y=row, s=d["passes"], va="center", ha="right")
# goals column
ax.text(x=4, y=row, s=d["goals"], va="center", ha="right")
# assists column
ax.text(x=5, y=row, s=d["assists"], va="center", ha="right")
As you can see, we are starting to get a basic wireframe of our table. Let’s add column headers to further make this scatterplot look like a table.
# Add column headers
# plot them at height y=9.75 to decrease the space to the
# first data row (you'll see why later)
ax.text(0.5, 9.75, "Player", weight="bold", ha="left")
ax.text(2, 9.75, "Shots", weight="bold", ha="right")
ax.text(3, 9.75, "Passes", weight="bold", ha="right")
ax.text(4, 9.75, "Goals", weight="bold", ha="right")
ax.text(5, 9.75, "Assists", weight="bold", ha="right")
ax.text(6, 9.75, "Special\nColumn", weight="bold", ha="right", va="bottom")
The rows and columns of our table are now done. The only thing that is left to do is formatting - much of this is personal choice. The following elements I think are generally useful when it comes to good table design (more research here):
Gridlines: Some level of gridlines are useful (less is more). Generally some guidance to help the audience trace their eyes or fingers across the screen can be helpful (this way we can group items too by drawing gridlines around them).
for row in range(rows):
ax.plot([0, cols + 1], [row - 0.5, row - 0.5], ls=":", lw=".5", c="grey")
# add a main header divider
# remember that we plotted the header row slightly closer to the first data row
# this helps to visually separate the header row from the data rows
# each data row is 1 unit in height, thus bringing the header closer to our
# gridline gives it a distinctive difference.
ax.plot([0, cols + 1], [9.5, 9.5], lw=".5", c="black")
Another important element for tables, in my opinion, is highlighting the key data points. We already bolded the values in the “Shots” column, but we can also shade this column to give it more importance for our readers.
# highlight the column we are sorting by
# using a rectangle patch
rect = patches.Rectangle(
(1.5, -0.5), # bottom left starting position (x,y)
0.65, # width
10, # height
ec="none",
fc="grey",
alpha=0.2,
zorder=-1,
)
ax.add_patch(rect)
We’re almost there. The magic piece is ax.axis('off'). This hides the axis, axis ticks, labels, and everything “attached” to the axes, which means our table now looks like a clean table!
ax.axis("off")
Adding a title is also straightforward.
ax.set_title("A title for our table!", loc="left", fontsize=18, weight="bold")
Finally, if you wish to add images, sparklines, or other custom shapes and patterns then we can do this too.
To achieve this we will use fig.add_axes() to create a new set of floating axes based on figure coordinates (this is different from our axes coordinate system!).
Remember that figure coordinates by default are between 0 and 1. [0,0] is the bottom left corner of the entire figure. If you’re unfamiliar with the differences between a figure and axes then check out Matplotlib’s Anatomy of a Figure for further details.
newaxes = []
for row in range(rows):
# offset each new axes by a set amount depending on the row
# this is probably the most fiddly aspect (TODO: some neater way to automate this)
newaxes.append(fig.add_axes([0.75, 0.725 - (row * 0.063), 0.12, 0.06]))
You can see below what these floating axes will look like (I say floating because they’re on top of our main axis object). The only tricky thing is figuring out the xy (figure) coordinates for these.
These floating axes behave like any other Matplotlib axes. Therefore, we have access to the same methods such as ax.bar(), ax.plot(), patches, etc. Importantly, each axis has its own independent coordinate system. We can format them as we wish.
# plot dummy data as a sparkline for illustration purposes
# you can plot _anything_ here, images, patches, etc.
newaxes[0].plot([0, 1, 2, 3], [1, 2, 0, 2], c="black")
newaxes[0].set_ylim(-1, 3)
# once again, the key is to hide the axis!
newaxes[0].axis("off")
That’s it, custom tables in Matplotlib. I did promise very simple code and an ultra-flexible design in terms of what you want / need. You can adjust sizes, colors and pretty much anything with this approach and all you need is simply a loop that plots text in a structured and organized manner. I hope you found it useful. Link to a Google Colab notebook with the code is here
As part of the University of North Carolina BIOL222 class, Dr. Catherine Kehl asked her students to “use matplotlib.pyplot to make art.” BIOL222 is Introduction to Programming, aimed at students with no programming background. The emphasis is on practical, hands-on active learning.
The students completed the assignment with festive enthusiasm around Halloween. Here are some great examples:
Harris Davis showed an affinity for pumpkins, opting to go 3D!
# get libraries for 3d plotting
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
# make a pumpkin :)
rho = np.linspace(0, 3 * np.pi, 32)
theta, phi = np.meshgrid(rho, rho)
r, R = 0.5, 0.5
X = (R + r * np.cos(phi)) * np.cos(theta)
Y = (R + r * np.cos(phi)) * np.sin(theta)
Z = r * np.sin(phi)
# make the stem
theta1 = np.linspace(0, 2 * np.pi, 90)
r1 = np.linspace(0, 3, 50)
T1, R1 = np.meshgrid(theta1, r1)
X1 = R1 * 0.5 * np.sin(T1)
Y1 = R1 * 0.5 * np.cos(T1)
Z1 = -(np.sqrt(X1**2 + Y1**2) - 0.7)
Z1[Z1 < 0.3] = np.nan
Z1[Z1 > 0.7] = np.nan
# Display the pumpkin & stem
fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.set_xlim3d(-1, 1)
ax.set_ylim3d(-1, 1)
ax.set_zlim3d(-1, 1)
ax.plot_surface(X, Y, Z, color="tab:orange", rstride=1, cstride=1)
ax.plot_surface(X1, Y1, Z1, color="tab:green", rstride=1, cstride=1)
plt.show()
Bryce Desantis stuck to the biological theme and demonstrated fractal art.
import numpy as np
import matplotlib.pyplot as plt
# Barnsley's Fern - Fractal; en.wikipedia.org/wiki/Barnsley_…
# functions for each part of fern:
# stem
def stem(x, y):
return (0, 0.16 * y)
# smaller leaflets
def smallLeaf(x, y):
return (0.85 * x + 0.04 * y, -0.04 * x + 0.85 * y + 1.6)
# large left leaflets
def leftLarge(x, y):
return (0.2 * x - 0.26 * y, 0.23 * x + 0.22 * y + 1.6)
# large right leftlets
def rightLarge(x, y):
return (-0.15 * x + 0.28 * y, 0.26 * x + 0.24 * y + 0.44)
componentFunctions = [stem, smallLeaf, leftLarge, rightLarge]
# number of data points and frequencies for parts of fern generated:
# lists with all 75000 datapoints
datapoints = 75000
x, y = 0, 0
datapointsX = []
datapointsY = []
# For 75,000 datapoints
for n in range(datapoints):
FrequencyFunction = np.random.choice(componentFunctions, p=[0.01, 0.85, 0.07, 0.07])
x, y = FrequencyFunction(x, y)
datapointsX.append(x)
datapointsY.append(y)
# Scatter plot & scaled down to 0.1 to show more definition:
plt.scatter(datapointsX, datapointsY, s=0.1, color="g")
# Title of Figure
plt.title("Barnsley's Fern - Assignment 3")
# Changing background color
ax = plt.gca()
ax.set_facecolor("#d8d7bf")
Grace Bell got a little trippy with this rotationally symmetric art. It’s pretty cool how she captured mouse events. It reminds us of a flower. What do you see?
import matplotlib.pyplot as plt
from matplotlib.tri import Triangulation
from matplotlib.patches import Polygon
import numpy as np
# I found this sample code online and manipulated it to make the art piece!
# I was interested in it because it combined what we used for functions as well as what we used for plotting with (x, y)
def update_polygon(tri):
if tri == -1:
points = [0, 0, 0]
else:
points = triang.triangles[tri]
xs = triang.x[points]
ys = triang.y[points]
polygon.set_xy(np.column_stack([xs, ys]))
def on_mouse_move(event):
if event.inaxes is None:
tri = -1
else:
tri = trifinder(event.xdata, event.ydata)
update_polygon(tri)
ax.set_title(f"In triangle {tri}")
event.canvas.draw()
# this is the info that creates the angles
n_angles = 14
n_radii = 7
min_radius = 0.1 # the radius of the middle circle can move with this variable
radii = np.linspace(min_radius, 0.95, n_radii)
angles = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
angles = np.repeat(angles[..., np.newaxis], n_radii, axis=1)
angles[:, 1::2] += np.pi / n_angles
x = (radii * np.cos(angles)).flatten()
y = (radii * np.sin(angles)).flatten()
triang = Triangulation(x, y)
triang.set_mask(
np.hypot(x[triang.triangles].mean(axis=1), y[triang.triangles].mean(axis=1))
< min_radius
)
trifinder = triang.get_trifinder()
fig, ax = plt.subplots(subplot_kw={"aspect": "equal"})
ax.triplot(
triang, "y+-"
) # made the color of the plot yellow and there are "+" for the data points but you can't really see them because of the lines crossing
polygon = Polygon([[0, 0], [0, 0]], facecolor="y")
update_polygon(-1)
ax.add_patch(polygon)
fig.canvas.mpl_connect("motion_notify_event", on_mouse_move)
plt.show()
As a bonus, did you like that fox in the banner? That was created (and well documented) by Emily Foster!
import numpy as np
import matplotlib.pyplot as plt
plt.axis("off")
# head
xhead = np.arange(-50, 50, 0.1)
yhead = -0.007 * (xhead * xhead) + 100
plt.plot(xhead, yhead, "darkorange")
# outer ears
xearL = np.arange(-45.8, -9, 0.1)
yearL = -0.08 * (xearL * xearL) - 4 * xearL + 70
xearR = np.arange(9, 45.8, 0.1)
yearR = -0.08 * (xearR * xearR) + 4 * xearR + 70
plt.plot(xearL, yearL, "black")
plt.plot(xearR, yearR, "black")
# inner ears
xinL = np.arange(-41.1, -13.7, 0.1)
yinL = -0.08 * (xinL * xinL) - 4 * xinL + 59
xinR = np.arange(13.7, 41.1, 0.1)
yinR = -0.08 * (xinR * xinR) + 4 * xinR + 59
plt.plot(xinL, yinL, "salmon")
plt.plot(xinR, yinR, "salmon")
# bottom of face
xfaceL = np.arange(-49.6, -14, 0.1)
xfaceR = np.arange(14, 49.3, 0.1)
xfaceM = np.arange(-14, 14, 0.1)
plt.plot(xfaceL, abs(xfaceL), "darkorange")
plt.plot(xfaceR, abs(xfaceR), "darkorange")
plt.plot(xfaceM, abs(xfaceM), "black")
# nose
xnose = np.arange(-14, 14, 0.1)
ynose = -0.03 * (xnose * xnose) + 20
plt.plot(xnose, ynose, "black")
# whiskers
xwhiskR = [50, 70, 55, 70, 55, 70, 49.3]
xwhiskL = [-50, -70, -55, -70, -55, -70, -49.3]
ywhisk = [82.6, 85, 70, 65, 60, 45, 49.3]
plt.plot(xwhiskR, ywhisk, "darkorange")
plt.plot(xwhiskL, ywhisk, "darkorange")
# eyes
plt.plot(20, 60, color="black", marker="o", markersize=15)
plt.plot(-20, 60, color="black", marker="o", markersize=15)
plt.plot(22, 62, color="white", marker="o", markersize=6)
plt.plot(-18, 62, color="white", marker="o", markersize=6)
We look forward to seeing these students continue in their plotting and scientific adventures!
It’s my great pleasure to announce that I’ve finished my book on Matplotlib, and it is now freely available at www.labri.fr/perso/nrougier/scientific-visualization.html, while sources for the book are hosted at github.com/rougier/scientific-visualization-book.
The Python scientific visualisation landscape is huge. It is composed of a myriad of tools, ranging from the most versatile and widely used down to the more specialised and confidential. Some of these tools are community based while others are developed by companies. Some are made specifically for the web, others are for the desktop only, some deal with 3D and large data, while others target flawless 2D rendering.

In this landscape, Matplotlib has a very special place. It is a versatile and powerful library that allows you to design very high quality figures, suitable for scientific publishing. It also offers a simple and intuitive interface as well as an object oriented architecture that allows you to tweak anything within a figure. Finally, it can be used as a regular graphic library in order to design non‐scientific figures.

This book is organized into four parts. The first part considers the fundamental principles of the Matplotlib library. This includes reviewing the different parts that constitute a figure, the different coordinate systems, the available scales and projections, and we’ll also introduce a few concepts related to typography and colors. The second part is dedicated to the actual design of a figure. After introducing some simple rules for generating better figures, we’ll then go on to explain the Matplotlib defaults and styling system before diving into figure layout organization. We’ll then explore the different types of plot available and see how a figure can be ornamented with different elements. The third part is dedicated to more advanced concepts, namely 3D figures, optimization & animation. The fourth and final part is a collection of showcases.
I have been creating common visualisations like scatter plots, bar charts, beeswarms etc. for a while and thought about doing something different. Since I’m an avid football fan, I thought of ideas to represent players’ usage or involvement over a period (a season, a couple of seasons). I have seen some cool visualisations like donuts which depict usage and I wanted to make something different and simple to understand. I thought about representing batteries as a form of player usage and it made a lot of sense.
For players who have barely been used (played fewer minutes), we show a mostly full battery, since they have plenty of energy left in the tank. For heavily used players, we do the opposite and show a drained battery.
So, what is the purpose of a battery chart? You can use it to show usage, consumption, involvement, fatigue etc. (anything usage related).
The image below is a sample view of how a battery would look in our figure, although a single battery isn’t exactly what we are going to recreate in this tutorial.
Before jumping into the tutorial, note that the function can be tweaked to fit your needs, depending on the number of subplots or any other size parameter. To build the figure, we will work through a series of steps, one by one:
What is our use case?
The first and foremost step is to import the libraries whose functions we will use. (NumPy and PIL are imported here as well, since they are needed later to place the club logo on the figure.)
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from matplotlib.path import Path
from matplotlib.patches import FancyBboxPatch, PathPatch, Wedge
from PIL import Image
The functions imported from matplotlib.path
and matplotlib.patches
will be used to draw lines, rectangles, boxes and so on to display the battery as it is.
The next part is to define a function named draw_battery()
, which will be used to draw the battery. Later on, we will call this function with specific parameters to build the figure as we require. Here is the code that draws the battery:
def draw_battery(
    fig,
    ax,
    percentage=0,
    bat_ec="grey",
    tip_fc="none",
    tip_ec="grey",
    bol_fc="#fdfdfd",
    bol_ec="grey",
    invert_perc=False,
):
    """
    Parameters
    ----------
    fig : figure
        The figure object for the plot.
    ax : axes
        The axes/axis variable of the figure.
    percentage : int, optional
        This is the battery percentage - size of the fill. The default is 0.
    bat_ec : str, optional
        The edge color of the battery/cell. The default is "grey".
    tip_fc : str, optional
        The fill/face color of the tip of the battery. The default is "none".
    tip_ec : str, optional
        The edge color of the tip of the battery. The default is "grey".
    bol_fc : str, optional
        The fill/face color of the lightning bolt. The default is "#fdfdfd".
    bol_ec : str, optional
        The edge color of the lightning bolt. The default is "grey".
    invert_perc : bool, optional
        A flag to invert the percentage shown inside the battery. The default is False.

    Returns
    -------
    None.
    """
    try:
        fig.set_size_inches((15, 15))
        ax.set(xlim=(0, 20), ylim=(0, 5))
        ax.axis("off")
        if invert_perc:
            percentage = 100 - percentage
        # color options - #fc3d2e red & #53d069 green & #f5c54e yellow
        bat_fc = (
            "#fc3d2e"
            if percentage <= 20
            else "#53d069"
            if percentage >= 80
            else "#f5c54e"
        )

        # Static battery and tip of battery
        battery = FancyBboxPatch(
            (5, 2.1),
            10,
            0.8,
            "round, pad=0.2, rounding_size=0.5",
            fc="none",
            ec=bat_ec,
            fill=True,
            ls="-",
            lw=1.5,
        )
        tip = Wedge(
            (15.35, 2.5), 0.2, 270, 90, fc=tip_fc, ec=tip_ec, fill=True, ls="-", lw=3
        )
        ax.add_artist(battery)
        ax.add_artist(tip)

        # Filling the battery cell with the data
        filler = FancyBboxPatch(
            (5.1, 2.13),
            (percentage / 10) - 0.2,
            0.74,
            "round, pad=0.2, rounding_size=0.5",
            fc=bat_fc,
            ec=bat_fc,
            fill=True,
            ls="-",
            lw=0,
        )
        ax.add_artist(filler)

        # Adding a lightning bolt in the centre of the cell
        verts = [
            (10.5, 3.1),  # top
            (8.5, 2.4),  # left
            (9.5, 2.4),  # left mid
            (9, 1.9),  # bottom
            (11, 2.6),  # right
            (10, 2.6),  # right mid
            (10.5, 3.1),  # top
        ]
        codes = [
            Path.MOVETO,
            Path.LINETO,
            Path.LINETO,
            Path.LINETO,
            Path.LINETO,
            Path.LINETO,
            Path.CLOSEPOLY,
        ]
        path = Path(verts, codes)
        bolt = PathPatch(path, fc=bol_fc, ec=bol_ec, lw=1.5)
        ax.add_artist(bolt)
    except Exception:
        import traceback
        print("EXCEPTION FOUND!!! SAFELY EXITING!!! Find the details below:")
        traceback.print_exc()
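Before wiring the function into a full figure, it can help to see the two patches it relies on in isolation. The sketch below (a standalone illustration, not part of the tutorial's final figure) draws just the battery outline and its tip using the same coordinates as draw_battery():

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen
import matplotlib.pyplot as plt
from matplotlib.patches import FancyBboxPatch, Wedge

fig, ax = plt.subplots()
ax.set(xlim=(0, 20), ylim=(0, 5))
ax.axis("off")

# rounded box for the battery body, same coordinates as in draw_battery()
body = FancyBboxPatch(
    (5, 2.1), 10, 0.8, "round, pad=0.2, rounding_size=0.5",
    fc="none", ec="grey", lw=1.5,
)
# half-disc for the tip on the right edge
tip = Wedge((15.35, 2.5), 0.2, 270, 90, fc="none", ec="grey", lw=3)
ax.add_artist(body)
ax.add_artist(tip)
fig.savefig("battery_outline.png")
```

The rounded corners come entirely from the "round" box style; pad and rounding_size control how soft they look.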
Once we have created the function, we can put it to use. For that, we need to feed in the required data. In our example, we have a dataset listing Liverpool players and the minutes they have played over the past two seasons. The data was collected from Football Reference, aka FBRef.
We use the read_excel function from the pandas library to read our dataset, which is stored as an Excel file.
data = pd.read_excel("Liverpool Minutes Played.xlsx")
Now, let us have a look at how the data looks by listing out the first five rows of our dataset -
data.head()
Now that everything is ready, we go ahead and plot the data. We have 25 players in our dataset, so a 5 × 5 grid of subplots is the way to go. We’ll also add some headers and set the colors accordingly.
fig, ax = plt.subplots(5, 5, figsize=(5, 5))
facecolor = "#00001a"
fig.set_facecolor(facecolor)
fig.text(
    0.35,
    0.95,
    "Liverpool: Player Usage/Involvement",
    color="white",
    size=18,
    fontname="Libre Baskerville",
    fontweight="bold",
)
fig.text(
    0.25,
    0.92,
    "Data from 19/20 and 20/21 | Battery percentage indicate usage | less battery = played more/ more involved",
    color="white",
    size=12,
    fontname="Libre Baskerville",
)
We have now filled in the appropriate headers, figure size, etc. The next step is to plot all the axes, i.e. the batteries for each player. p
is the variable used to iterate through the dataframe and fetch each player’s data. The draw_battery()
function call plots the battery. We also add the required labels along with that - the player name and the usage rate/percentage in this case.
p = 0  # The variable that'll iterate through each row of the dataframe (for every player)
for i in range(0, 5):
    for j in range(0, 5):
        ax[i, j].text(
            10,
            4,
            str(data.iloc[p, 0]),
            color="white",
            size=14,
            fontname="Lora",
            va="center",
            ha="center",
        )
        ax[i, j].set_facecolor(facecolor)
        draw_battery(fig, ax[i, j], round(data.iloc[p, 8]), invert_perc=True)
        # Add the battery percentage as text if a label is required
        ax[i, j].text(
            5,
            0.9,
            "Usage - " + str(int(100 - round(data.iloc[p, 8]))) + "%",
            fontsize=12,
            color="white",
        )
        p += 1
Now that everything is almost done, we add some final touches; this part is completely optional. Since the visualisation focuses on Liverpool players, I add Liverpool’s logo along with my watermark. Crediting the data source/provider is good practice, so we do that as well before displaying the plot.
liv = Image.open("Liverpool.png", "r")
liv = liv.resize((80, 80))
liv = np.array(liv).astype(float) / 255  # np.float is deprecated; use the builtin float
fig.figimage(liv, 30, 890)
fig.text(
    0.11,
    0.08,
    "viz: Rithwik Rajendran/@rithwikrajendra",
    color="lightgrey",
    size=14,
    fontname="Lora",
)
fig.text(
    0.8, 0.08, "data: FBRef/Statsbomb", color="lightgrey", size=14, fontname="Lora"
)
plt.show()
So, we have the plot below. You can customise the design as you want in the draw_battery()
function - change sizes, colours, shapes, etc.
Matplotlib: Revisiting Text/Font Handling
To kick things off for the final report, here’s a meme to nudge about the previous blogs.
Matplotlib is a comprehensive library for creating static, animated, and interactive visualizations, which has become a de-facto Python plotting library.
Much of the implementation behind its font manager is inspired by W3C compliant algorithms, allowing users to interact with font properties like font-size
, font-weight
, font-family
, etc.
By “not ideal”, I do not mean that the library has design flaws, but that the design was engineered in the early 2000s, and is now outdated.
(PS: here’s the link to my GSoC proposal, if you’re interested)
Overall, the project was divided into two major subgoals:
But before we take each of them on, we should get an idea of some basic font terminology (there is a lot of it, and it is rightly confusing).
The PR: Clarify/Improve docs on family-names vs generic-families brings about a bit of clarity about some of these terms. The next section has a linked PR which also explains the types of fonts and how that is relevant to Matplotlib.
An easy-to-read guide on Fonts and Matplotlib was created with PR: [Doc] Font Types and Font Subsetting, which is currently live at Matplotlib’s DevDocs.
Taking an excerpt from one of my previous blogs (and the doc):
Fonts can be considered as a collection of these glyphs, so ultimately the goal of subsetting is to find out which glyphs are required for a certain array of characters, and embed only those within the output.
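The bookkeeping behind that idea can be sketched in a few lines (a toy illustration, not Matplotlib's actual subsetting code): given the characters a document uses, the subsetter must keep at least the glyphs covering them.

```python
def required_codepoints(text):
    # the minimal set of codepoints a subsetted font must cover
    return sorted({ord(ch) for ch in text})

# "hello" needs only four distinct glyphs: e, h, l, o
print(required_codepoints("hello"))  # [101, 104, 108, 111]
```

A real subsetter additionally has to chase every subprogram a kept glyph calls, which is where the difficulty described below comes from.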
PDF, PS/EPS and SVG output document formats are special in that the text within them can remain editable, i.e., one can copy or search text in the document (for example, in a PDF file).
The PDF, PS/EPS and SVG backends used to support font subsetting for only a few font types. That is, before Summer ‘21, Matplotlib could generate Type 3 subsets for the PDF and PS/EPS backends, but it could not generate Type 42 / TrueType subsets.
With PR: Type42 subsetting in PS/PDF merged in, users can expect their PDF/PS/EPS documents to contain glyphs subsetted from the original fonts.
This is especially beneficial for people who wish to use commercial (or CJK) fonts. Licenses for many fonts require subsetting so that the fonts can’t be trivially copied out of the output files generated from Matplotlib.
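To see subsetting in action, here is a small sketch using fontTools (a Matplotlib dependency) on the DejaVu Sans font that Matplotlib itself ships; the exact glyph counts printed will vary by font version:

```python
from pathlib import Path

import matplotlib
from fontTools import ttLib
from fontTools.subset import Subsetter

# DejaVu Sans ships with Matplotlib itself
font_path = Path(matplotlib.get_data_path()) / "fonts" / "ttf" / "DejaVuSans.ttf"
font = ttLib.TTFont(str(font_path))
before = len(font.getGlyphOrder())

# keep only the glyphs needed for this text
subsetter = Subsetter()
subsetter.populate(text="Hello, world")
subsetter.subset(font)
after = len(font.getGlyphOrder())

print(f"{before} glyphs before, {after} after")
```

The subsetted font could then be saved with font.save(...) and embedded in an output document in place of the full font.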
Matplotlib was designed to work with a single font at runtime. A user could specify a font.family
, which was supposed to correspond to CSS properties, but that was only used to find a single font present on the user’s system.
Once that font was found (which is almost always found, since Matplotlib ships with a set of default fonts), all the user text was rendered only through that font. (which used to give out “tofu” if a character wasn’t found)
It might seem like an outdated approach for text rendering, now that we have these concepts like font-fallback, but these concepts weren’t very well discussed in early 2000s. Even getting a single font to work was considered a hard engineering problem.
This was primarily because of the lack of any standardization for representation of fonts (Adobe had their own font representation, and so did Apple, Microsoft, etc.)
Previous (notice Tofus) VS After (CJK font as fallback)
To migrate from a font-first approach to a text-first approach, there are multiple steps involved:
The very first (and crucial!) step is to get to a point where we have multiple font paths (ideally individual font files for the whole family). That is achieved with either:
Quoting one of my previous blogs:
Don’t break, a lot at stake!
My first approach was to change the existing public findfont
API to incorporate multiple filepaths. Since Matplotlib has a huge userbase, there’s a high chance that would break a chunk of people’s workflows:
First PR (left), Second PR (right)
Once we get a list of font paths, we need to change the internal representation of a “font”. Matplotlib has a utility called FT2Font, which is written in C++, and used with wrappers as a Python extension, which in turn is used throughout the backends. For all intents and purposes, it used to mean: FT2Font === SingleFont
(if you’re interested, here’s a meme about how FT2Font was named!)
But that is not the case anymore, here’s a flowchart to explain what happens now:
Font-Fallback Algorithm
With PR: Implement Font-Fallback in Matplotlib, every FT2Font object has a std::vector<FT2Font *> fallback_list
, which is used for filling the parent cache, as can be seen in the self-explanatory flowchart.
For simplicity, only one type of cache (character -> FT2Font) is shown, whereas the actual implementation has two caches: the one shown above, and another for glyphs (glyph_id -> FT2Font).
Note: Only the parent’s APIs are used in some backends, so for each of the individual public functions like
load_glyph
,load_char
,get_kerning
, etc., we find the FT2Font object which has that glyph from the parent FT2Font cache!
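The cache-filling logic from the flowchart can be modeled in a few lines of Python (a toy model with invented names, not the actual C++ implementation):

```python
class ToyFont:
    def __init__(self, name, glyphs):
        self.name = name
        self.glyphs = set(glyphs)
        self.fallback_list = []   # analogous to std::vector<FT2Font *>
        self.cache = {}           # char -> ToyFont that can render it

    def font_for(self, char):
        if char in self.cache:
            return self.cache[char]
        # ask the parent first, then each fallback in order
        for font in [self] + self.fallback_list:
            if char in font.glyphs:
                self.cache[char] = font
                return font
        return None  # no font found: this character renders as "tofu"

latin = ToyFont("latin", "abcdefghijklmnopqrstuvwxyz ")
cjk = ToyFont("cjk", "多个汉字")
latin.fallback_list.append(cjk)

print(latin.font_for("a").name)  # latin
print(latin.font_for("多").name)  # cjk
```

Subsequent lookups for the same character hit the parent's cache directly, which is exactly why the backends only ever need to talk to the parent object.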
Now that we have multiple fonts to render a string, we also need to embed them for those special backends (i.e., PDF/PS, etc.). This was done with some patches to specific backends:
With this, one could create a PDF or a PS/EPS document with multiple fonts which are embedded (and subsetted!).
From small contributions to eventually working on a core module of such a huge library, the road was not what I had imagined, and I learnt a lot while designing solutions to these problems.
…since all plots will work their way through the new codepath!
I think that single statement is worth the whole GSoC project.
For the sake of statistics (and to make GSoC sound a bit less intimidating), here’s a list of contributions I made to Matplotlib before Summer ‘21, most of which are only a few lines of diff:
Created At | PR Title | Diff | Status |
---|---|---|---|
Nov 2, 2020 | Expand ScalarMappable.set_array to accept array-like inputs | (+28 −4) | MERGED |
Nov 8, 2020 | Add overset and underset support for mathtext | (+71 −0) | MERGED |
Nov 14, 2020 | Strictly increasing check with test coverage for streamplot grid | (+54 −2) | MERGED |
Jan 11, 2021 | WIP: Add support to edit subplot configurations via textbox | (+51 −11) | DRAFT |
Jan 18, 2021 | Fix over/under mathtext symbols | (+7,459 −4,169) | MERGED |
Feb 11, 2021 | Add overset/underset whatsnew entry | (+28 −17) | MERGED |
May 15, 2021 | Warn user when mathtext font is used for ticks | (+28 −0) | MERGED |
Here’s a list of PRs I opened during Summer'21:
From learning about software engineering fundamentals from Tom to learning about nitty-gritty details about font representations from Jouni;
From learning through Antony’s patches and pointers to receiving amazing feedback on these blogs from Hannah, it has been an adventure! 💯
Special Mentions: Frank, Srijan and Atharva for their helping hands!
And lastly, you, the reader; if you’ve been following my previous blogs, or if you’ve landed at this one directly, I thank you nevertheless. (one last meme, I promise!)
I know I speak for every developer out there, when I say it means a lot when you choose to look at their journey or their work product; it could as well be a tiny website, or it could be as big as designing a complete library!
I’m grateful to Matplotlib (under the parent organisation: NumFOCUS), and of course, Google Summer of Code for this incredible learning opportunity.
Farewell, reader! :’)
Consider contributing to Matplotlib (Open Source in general) ❤️
“Matplotlib, I want 多个汉字 in between my text.”
Let’s say you asked Matplotlib to render a plot with some label containing 多个汉字 (multiple Chinese characters) in between your English text.
Or conversely, let’s say you use a Chinese font with Matplotlib, but you had English text in between (which is quite common).
Assumption: the Chinese font doesn’t have those English glyphs, and vice versa
With this short writeup, I’ll talk about what a migration from a font-first to a text-first approach in Matplotlib looks like, which ideally solves the above problem.
Logically, the very first step to solving this would be to ask whether you have multiple fonts, right?
Matplotlib doesn’t ship CJK (Chinese, Japanese, Korean) fonts, which would ideally contain these Chinese glyphs. It does try to cover most ground with the default font it ships with, however.
So if you don’t have a font to render your Chinese characters, go ahead and install one! Matplotlib will find your installed fonts (after rebuilding the cache, that is).
This is where things get interesting, and what my previous writeup was all about..
Parsing the whole family to get multiple fonts for given font properties
To give you an idea about how things used to work for Matplotlib:
FT2Font is Matplotlib’s font module, which provides a high-level Python API to interact with a single font’s operations like read/draw/extract/etc.
Being written in C++, the module needs wrappers around it to be converted into a Python extension using Python’s C-API.
It allows us to use C++ functions directly from Python!
So wherever you see a use of font within the library (by library I mean the readable Python codebase XD), you could have derived that:
FT2Font === SingleFont
Things are a bit different now, however..
FT2Font is basically itself a wrapper around a library called FreeType, which is a freely available software library to render fonts.
In my initial proposal.. while looking around how FT2Font is structured, I figured:
Oh, looks like all we need are Faces!
If you don’t know what faces/glyphs/ligatures are, head over to why Text Hates You. I can guarantee you’ll definitely enjoy some real life examples of why text rendering is hard. 🥲
Anyway, if you already know what Faces are, it might strike you:
If we already have all the faces we need from multiple fonts (let’s say we created a child of FT2Font.. which only tracks the faces for its families), we should be able to render everything from that parent FT2Font right?
As I later figured out while finding segfaults in implementing this design:
Each FT2Font is linked to a single FT_Library object!
If you tried to load the face/glyph/character (basically anything) from a different FT2Font object, you’ll run into serious segfaults (because one object linked to an FT_Library
can’t really access another object which has its own FT_Library
).
// face is linked to FT2Font; which is
// linked to a single FT_Library object
FT_Face face = this->get_face();
FT_Get_Glyph(face->glyph, &placeholder); // works like a charm
// somehow get another FT2Font's face
FT_Face family_face = this->get_family_member()->get_face();
FT_Get_Glyph(family_face->glyph, &placeholder); // segfaults!
Realizing this took a good amount of time! After this I quickly came up with a recursive approach, wherein we keep a std::vector<FT2Font *> fallback_list
inside the parent FT2Font, and iterate over that fallback_list
, asking each font in turn to load the glyph through its own API.
A quick overhaul of the above piece of code^
bool ft_get_glyph(FT_Glyph &placeholder) {
    // FT_Get_Glyph reads from *this* font's own glyph slot,
    // so we never touch another FT_Library's objects
    FT_Error not_found = FT_Get_Glyph(this->get_face()->glyph, &placeholder);
    return not_found == 0;
}

// within driver code
for (size_t i = 0; i < fallback_list.size(); i++) {
    // iterate through all FT2Font objects until one has the glyph
    bool was_found = fallback_list[i]->ft_get_glyph(placeholder);
    if (was_found) break;
}
With the idea surrounding this implementation, the Agg backend is able to render a document (either through GUI, or a PNG) with multiple fonts!
I’ve spent days at Python C-API’s argument doc, and it’s hard to get what you need at first, ngl.
But, with the help of some amazing people in the GSoC community (@srijan-paul, @atharvaraykar) and amazing mentors, blockers begone!
Oh no. XD
Things work just fine for the Agg backend, but to generate a PDF/PS/SVG with multiple fonts is another story altogether! I think I’ll save that for later.
Data visualization is a key step in a data science pipeline. Python offers great possibilities when it comes to representing some data graphically, but it can be hard and time-consuming to create the appropriate chart.
The Python Graph Gallery is here to help. It displays many examples, always with reproducible code, letting you build the chart you want in minutes.
The gallery currently provides more than 400 chart examples. Those examples are organized in 40 sections, one for each chart type: scatterplot, boxplot, barplot, treemap and so on. Those chart types are grouped into 7 big families, as suggested by data-to-viz.com: one for each visualization purpose.
It is important to note that not only the most common chart types are covered. Lesser known charts like chord diagrams, streamgraphs or bubble maps are also available.
Each section always starts with some very basic examples, so you can understand how to build a chart type in a few seconds. Applying the same technique to another dataset should then be very quick.
For instance, the scatterplot section starts with this matplotlib example. It shows how to create a dataset with pandas and plot it with the plot()
function. The main graph argument like linestyle
and marker
are described to make sure the code is understandable.
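In the same spirit as that basic example, here is a minimal sketch (with an invented dataset; column names are arbitrary) of building data with pandas and plotting it via plot() with the linestyle and marker arguments:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen
import matplotlib.pyplot as plt
import pandas as pd

# a tiny made-up dataset
df = pd.DataFrame({"x": range(1, 11), "y": [v ** 2 for v in range(1, 11)]})

# linestyle="none" drops the connecting line; marker="o" draws the points
ax = df.plot(x="x", y="y", linestyle="none", marker="o")
ax.figure.savefig("scatter.png")
```

Swapping the marker or restoring a linestyle turns the same three lines into a line chart, which is why the gallery leads with these arguments.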
The gallery uses several libraries like seaborn or plotly to produce its charts, but it mainly focuses on matplotlib. Matplotlib comes with great flexibility and lets you build any kind of chart without limits.
A whole page is dedicated to matplotlib. It describes how to solve recurring issues like customizing axes or titles, adding annotations (see below) or even using custom fonts.
The gallery is also full of non-straightforward examples. For instance, it has a tutorial explaining how to build a streamchart with matplotlib. It is based on the stackplot()
function and adds some smoothing to it:
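A minimal streamchart-flavoured sketch built on stackplot might look like this (the data here is synthetic; the gallery's tutorial adds proper smoothing on top):

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen
import matplotlib.pyplot as plt
import numpy as np

x = np.linspace(0, 10, 200)
# three synthetic, always-positive series
ys = [np.abs(np.sin(x + i)) + 0.2 for i in range(3)]

fig, ax = plt.subplots()
# baseline="wiggle" centers the stack around the x axis, giving the stream look
ax.stackplot(x, ys, baseline="wiggle")
fig.savefig("stream.png")
```

With the default baseline="zero" the same call produces an ordinary stacked area chart.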
Last but not least, the gallery also displays some publication ready charts. They usually involve a lot of matplotlib code, but showcase the fine grain control one has over a plot.
Here is an example with a post inspired by Tuo Wang’s work for the tidyTuesday project. (Code translated from R available here)
The python graph gallery is an ever growing project. It is open-source, with all its related code hosted on github.
Contributions to the gallery are very welcome. Each blogpost is just a jupyter notebook, so suggestions are easy to make through issues or pull requests!
The python graph gallery is a project developed by Yan Holtz in his free time. It can help you improve your technical skills when it comes to visualizing data with python.
The gallery belongs to an ecosystem of educative websites. Data to viz describes best practices in data visualization, the R, python and d3.js graph galleries provide technical help to build charts with the 3 most common tools.
For any question regarding the project, please say hi on twitter at @R_Graph_Gallery!
“Well? Did you get it working?!”
Before I answer that question, if you’re missing the context, check out my previous blog’s last few lines.. promise it won’t take you more than 30 seconds to get the whole problem!
With this short writeup, I intend to talk about what we did and why we did, what we did. XD
Ring any bells? Remember OS (Operating Systems)? It’s one of the core CS subjects which I bunked then and regret now. (╥﹏╥)
The wikipedia page has a 2-line explanation if you have no idea what an Ostrich Algorithm is.. but I know most of y’all won’t bother clicking it XD, so here goes:
Ostrich algorithm is a strategy of ignoring potential problems by “sticking one’s head in the sand and pretending there is no problem”
An important thing to note: it is used when it is more cost-effective to allow the problem to occur than to attempt its prevention.
As you might’ve guessed by now, we ultimately ended up with the not-so-clean API (more on this later).
The highest level overview of the problem was:
❌ fontTools -> buffer -> ttconv_with_buffer
✅ fontTools -> buffer -> tempfile -> ttconv_with_file
The first approach created corrupted outputs, however the second approach worked fine. A point to note here would be that Method 1 is better in terms of separation of reading the file from parsing the data.
ttconv_with_buffer
is a modification of the original ttconv_with_file
that allows it to take a file buffer instead of a file path. You might be tempted to say:
“Well,
ttconv_with_buffer
must be wrongly modified, duh.”
Logically, yes. ttconv
was designed to work with a file-path and not a file-object (buffer), and modifying a codebase written in 1998 turned out to be a larger pain than we anticipated.
He even did, but the effort to get it to production, or to fix ttconv
embedding, was far greater than just going with the second method. That damn ostrich really helped us get out of that debugging hell. 🙃
Finally, we’re onto the second subgoal for the summer: Font Fallback!
To give an idea about how things work right now:
matplotlib.rcParams["font.family"] = ["list", "of", "font", "families"]
As soon as a font is found by iterating the font-family, all text is rendered by that and only that font.
You can immediately see the problems with this approach; using the same font for every character will not render any glyph which isn’t present in that font, and will instead spit out a square rectangle called “tofu” (read the first line here).
And that is exactly the first milestone! That is, parsing the entire list of font families to get an intermediate representation of a multi-font interface.
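That milestone amounts to something like the following sketch (simplified, with a made-up available mapping standing in for Matplotlib's font cache): instead of stopping at the first match, we resolve every family in the list.

```python
def find_fonts(families, available):
    # `available` maps family name -> font file path (stand-in for the font cache)
    return [available[f] for f in families if f in available]

available = {
    "DejaVu Sans": "DejaVuSans.ttf",
    "Noto Sans CJK JP": "NotoSansCJK-Regular.ttc",
}
# unknown families are skipped instead of ending the search at the first hit
print(find_fonts(["Missing Font", "DejaVu Sans", "Noto Sans CJK JP"], available))
# ['DejaVuSans.ttf', 'NotoSansCJK-Regular.ttc']
```

The returned list of paths is the "intermediate representation of a multi-font interface" that the rendering layer can then fall back through.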
Imagine if you had the superpower to change Python standard library’s internal functions, without consulting anybody. Let’s say you wanted to write a solution by hooking in and changing, let’s say str("dumb")
implementation by returning:
>>> str("dumb")
["d", "u", "m", "b"]
Pretty “dumb”, right? xD
For your usecase it might work fine, but it would also mean breaking the entire Python userbase’s workflow, not to mention the 1000000+ libraries that depend on the original functionality.
On a similar note, Matplotlib has a public API known as findfont(prop: str)
, which when given a string (or FontProperties) finds you a font that best matches the given properties in your system.
It is used throughout the library, as well as at multiple other places, including downstream libraries. Being naive as I was, I changed this function signature and submitted the PR. 🥲
Had an insightful discussion about this with my mentors, and soon enough raised the other PR, which didn’t touch the findfont
API at all.
One last thing to note: Even if we do complete the first milestone, we wouldn’t be done yet, since this is just parsing the entire list to get multiple fonts..
We still need to migrate the library’s internal implementation from font-first to text-first!
But that’s for later, for now:
"Aitik, how is your GSoC going?"
Well, it’s been a while since I last wrote. But I wasn’t spending time watching Loki either! (that’s a lie.)
During this period the project took on some interesting (and stressful) curves, which I intend to talk about in this small writeup.
The first week of coding period, and I met one of my new mentors, Jouni. Without him, along with Tom and Antony, the project wouldn’t have moved an inch.
It was initially Jouni’s PR which was my starting point of the first milestone in my proposal, Font Subsetting.
As was proposed by Tom, a good way to understand something is to document your journey along the way! (well, that’s what GSoC wants us to follow anyway right?)
Taking an excerpt from one of the paragraphs I wrote here:
Font Subsetting can be used before generating documents, to embed only the required glyphs within the documents. Fonts can be considered as a collection of these glyphs, so ultimately the goal of subsetting is to find out which glyphs are required for a certain array of characters, and embed only those within the output.
Now this may seem straightforward, right?
The glyph programs can call their own subprograms, for example, characters like ä
could be composed by calling subprograms for a
and ¨
; or →
could be composed by a program that changes the display matrix and calls the subprogram for ←
.
Since the subsetter has to find out all such subprograms being called by every glyph included in the subset, this is a generally difficult problem!
Something which one of my mentors said which really stuck with me:
Matplotlib isn’t a font library, and shouldn’t try to be one.
It’s really easy to fall into the trap of trying to do everything within your own project, which ends up rather hurting itself.
Since this holds true even for Matplotlib, it uses external dependencies like FreeType, ttconv, and newly proposed fontTools to handle font subsetting, embedding, rendering, and related stuff.
PS: If that font stuff didn’t make sense, I would recommend going through a friendly tutorial I wrote, which is all about Matplotlib and Fonts!
Matplotlib uses an external dependency ttconv
which was initially forked into Matplotlib’s repository in 2003!
ttconv was a standalone commandline utility for converting TrueType fonts to subsetted Type 3 fonts (among other features) written in 1995, which Matplotlib forked in order to make it work as a library.
Over time, it accumulated a lot of issues which were either hard to fix or didn’t attract much attention. (See the above paragraph for a valid reason.)
One major utility which is still used is convert_ttf_to_ps
, which takes a font path as input and converts it into a Type 3 or Type 42 PostScript font, which can be embedded within PS/EPS output documents. The guide I wrote (link) contains decent descriptions, the differences between these type of fonts, etc.
Why do we need to? Type 42 subsetting isn’t really supported by ttconv, so we use a new dependency called fontTools, whose ‘full-time job’ is to subset Type 42 fonts for us (among other things).
It provides us with a font buffer; however, ttconv expects a font path to embed that font.
Easily enough, this can be done with Python’s tempfile.NamedTemporaryFile
:
import os
import tempfile

with tempfile.NamedTemporaryFile(suffix=".ttf") as tmp:
    # fontdata is the subsetted buffer
    # returned from fontTools
    tmp.write(fontdata.getvalue())
    # TODO: allow convert_ttf_to_ps
    # to input file objects (BytesIO)
    convert_ttf_to_ps(
        os.fsencode(tmp.name),
        fh,
        fonttype,
        glyph_ids,
    )
But this is far from a clean API in terms of separating *reading* the file from *parsing* the data.
What we ideally want is to pass the buffer down to convert_ttf_to_ps
, and modify the embedding code of ttconv
(written in C++). And here we come across a lot of unexplored codebase, which wasn’t touched a lot ever since it was forked.
Funnily enough, just yesterday, after spending a lot of quality time, my mentors and I figured out that the whole logging system of ttconv was broken, all because of a single debugging function. 🥲
This is still an ongoing problem that we need to tackle over the coming weeks, hopefully by the next time I write one of these blogs, it gets resolved!
Again, thanks a ton for spending time reading these blogs. :D
The day of result, was a very, very long day.
With this small writeup, I intend to talk about everything before that day, my experiences, my journey, and the role of Matplotlib throughout!
I am a third-year undergraduate student currently pursuing a Dual Degree (B.Tech + M.Tech) in Information Technology at Indian Institute of Information Technology, Gwalior.
During my sophomore year, my interests started expanding in the domain of Machine Learning, where I learnt about various amazing open-source libraries like NumPy, SciPy, pandas, and Matplotlib! Gradually, in my third year, I explored the field of Computer Vision during my internship at a startup, where a big chunk of my work was to integrate their native C++ codebase to Android via JNI calls.
To put what I learned during the internship into practice, I worked on my own research along with a friend from my university. The paper was accepted at CoDS-COMAD’21 and is published in the ACM Digital Library. (Link, if anyone’s interested)
During this period, I also developed a knack for open source and started staring at various issues (and pull requests) in libraries, including OpenCV [contributions] and NumPy [contributions].
I quickly got involved in Matplotlib’s community; it was very welcoming and beginner-friendly.
Fun fact: Its dev call was the very first I attended with people from all around the world!
We all mess up; my very first PR to an organisation like OpenCV went horribly. To this date, it looks like this:
In all honesty, I added a single commit with only a few lines of diff.
However, I pulled all the changes from the upstream master branch to my working branch, whereas the PR was to be made against the 3.4 branch.
I’m sure I could’ve done tons of things to solve it, but at that time I couldn’t do anything - imagine the anxiety!
At this point when I look back at those fumbled PRs, I feel like they were important for my learning process.
Fun Fact: Because of one of these initial contributions, I got a shiny little badge [Mars 2020 Helicopter Contributor] on GitHub!
It was around the initial weeks of November last year; while scanning through the Good First Issue and New Feature labels, I realised a pattern: most Mathtext-related issues were unattended.
To make it simple, Mathtext is the part of Matplotlib that parses mathematical expressions and provides TeX-like output, for example:
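As a minimal illustration of what Mathtext does (a hedged sketch assuming a local Matplotlib install; the output filename is arbitrary), any text API accepts a TeX-like string delimited by dollar signs:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen, no display needed

import matplotlib.pyplot as plt

fig, ax = plt.subplots()
# Mathtext parses the TeX-like subset between the dollar signs
ax.set_title(r"$\sqrt{\frac{x^2 + 1}{2\pi}}$")
fig.savefig("mathtext_demo.png")
```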
I scanned the related source code to try to figure out how to solve those Mathtext issues. Eventually, with the help of maintainers reviewing the PRs and a lot of verbose discussions on GitHub issues/pull requests and on the Gitter channel, I was able to get my initial PRs merged!
Most of us use libraries without understanding their underlying structure, which can sometimes cause downstream bugs!
While I was studying Matplotlib’s architecture, I figured that I could use the same ideology for one of my own projects!
Matplotlib uses a global dictionary-like object named rcParams. I used a smaller, similar interface in swi-ml, a small Python library I wrote implementing a subset of ML algorithms with a switchable backend.
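The gist of such an rcParams-style interface can be sketched as a validating, dict-like configuration object (a simplified illustration, not the actual Matplotlib or swi-ml code; the option names here are made up):

```python
class Config(dict):
    """A tiny rcParams-like store: known keys only, with a switchable backend."""

    _valid = {"backend": {"numpy", "cupy"}, "log_level": {"info", "debug"}}

    def __setitem__(self, key, value):
        if key not in self._valid:
            raise KeyError(f"unknown option: {key!r}")
        if value not in self._valid[key]:
            raise ValueError(f"invalid value for {key!r}: {value!r}")
        super().__setitem__(key, value)

params = Config()
params["backend"] = "numpy"   # library code reads params["backend"] at call time,
params["backend"] = "cupy"    # so flipping it switches the computation backend
```

Because every module consults the single global object at call time, changing one entry reconfigures the whole library, which is exactly what makes this pattern convenient for a switchable backend.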
It was around January when I had a conversation with one of the maintainers (hey Antony!) about the long list of issues with the current way of handling text/fonts in the library.
After compiling them into order, and after a few tweaks from the maintainers, the GSoC Idea List for Matplotlib was born, and so did my journey of building a strong proposal!
The aim of the project is divided into 3 subgoals:
Font-Fallback: A redesigned text-first font interface, essentially parsing through the whole font family before rendering a “tofu” (similar to specifying font-family in CSS!).
Font Subsetting: Every exported PS/PDF would contain embedded glyphs subsetted from the whole font.
(Imagine a plot with just a single letter “a”; would you like the PDF you export from Matplotlib to embed the whole font file within it?)
Most mpl backends would use the unified TeX exporting mechanism.
Mentors: Thomas A Caswell, Antony Lee, Hannah.
Thanks a lot for spending time reading the blog! I’ll be back with my progress in subsequent posts.
In May 2020, Alexandre Morin-Chassé published a blog post about the stellar chart. This type of chart is an (approximately) direct alternative to the radar chart (also known as web, spider, star, or cobweb chart) — you can read more about this chart here.
In this tutorial, we will see how we can create a quick-and-dirty stellar chart. First of all, let’s get the necessary modules/libraries, as well as prepare a dummy dataset (with just a single record).
from itertools import chain, zip_longest
from math import ceil, pi
import matplotlib.pyplot as plt
data = [
    ("V1", 8),
    ("V2", 10),
    ("V3", 9),
    ("V4", 12),
    ("V5", 6),
    ("V6", 14),
    ("V7", 15),
    ("V8", 25),
]
We will also need some helper functions, namely a function to round up to the nearest 10 (round_up()) and a function to join two sequences (even_odd_merge()). In the latter, the values of the first sequence (a list or a tuple, basically) will fill the even positions and the values of the second the odd ones.
def round_up(value):
    """
    >>> round_up(25)
    30
    """
    return int(ceil(value / 10.0)) * 10
def even_odd_merge(even, odd, filter_none=True):
    """
    >>> list(even_odd_merge([1,3], [2,4]))
    [1, 2, 3, 4]
    """
    if filter_none:
        return filter(None.__ne__, chain.from_iterable(zip_longest(even, odd)))
    return chain.from_iterable(zip_longest(even, odd))
That said, to plot data on a stellar chart, we need to apply some transformations, as well as calculate some auxiliary values. So, let’s start by creating a function (prepare_angles()) to calculate the angle of each axis on the chart (N corresponds to the number of variables to be plotted).
def prepare_angles(N):
    angles = [n / N * 2 * pi for n in range(N)]
    # Repeat the first angle to close the circle
    angles += angles[:1]
    return angles
Next, we need a function (prepare_data()) responsible for adjusting the original data (data) and separating it into several easy-to-use objects.
def prepare_data(data):
    labels = [d[0] for d in data]  # Variable names
    values = [d[1] for d in data]
    # Repeat the first value to close the circle
    values += values[:1]
    N = len(labels)
    angles = prepare_angles(N)
    return labels, values, angles, N
Lastly, for this specific type of chart, we require a function (prepare_stellar_aux_data()) that, from the previously calculated angles, prepares two lists of auxiliary values: a list of intermediate angles for each pair of angles (stellar_angles) and a list of small constant values (stellar_values), which will act as the values of the variables to be plotted in order to achieve the star-like shape intended for the stellar chart.
def prepare_stellar_aux_data(angles, ymax, N):
    angle_midpoint = pi / N
    stellar_angles = [angle + angle_midpoint for angle in angles[:-1]]
    stellar_values = [0.05 * ymax] * N
    return stellar_angles, stellar_values
At this point, we already have all the necessary ingredients for the stellar chart, so let’s move on to the Matplotlib side of this tutorial. In terms of aesthetics, we can rely on a function (draw_peripherals()) designed for this specific purpose (feel free to customize it!).
def draw_peripherals(ax, labels, angles, ymax, outer_color, inner_color):
    # X-axis
    ax.set_xticks(angles[:-1])
    ax.set_xticklabels(labels, color=outer_color, size=8)
    # Y-axis
    ax.set_yticks(range(10, ymax, 10))
    ax.set_yticklabels(range(10, ymax, 10), color=inner_color, size=7)
    ax.set_ylim(0, ymax)
    ax.set_rlabel_position(0)
    # Both axes
    ax.set_axisbelow(True)
    # Boundary line
    ax.spines["polar"].set_color(outer_color)
    # Grid lines
    ax.xaxis.grid(True, color=inner_color, linestyle="-")
    ax.yaxis.grid(True, color=inner_color, linestyle="-")
To plot the data and orchestrate (almost) all the steps necessary to have a stellar chart, we just need one last function: draw_stellar().
def draw_stellar(
    ax,
    labels,
    values,
    angles,
    N,
    shape_color="tab:blue",
    outer_color="slategrey",
    inner_color="lightgrey",
):
    # Limit the Y-axis according to the data to be plotted
    ymax = round_up(max(values))
    # Get the lists of angles and variable values
    # with the necessary auxiliary values injected
    stellar_angles, stellar_values = prepare_stellar_aux_data(angles, ymax, N)
    all_angles = list(even_odd_merge(angles, stellar_angles))
    all_values = list(even_odd_merge(values, stellar_values))
    # Apply the desired style to the figure elements
    draw_peripherals(ax, labels, angles, ymax, outer_color, inner_color)
    # Draw (and fill) the star-shaped outer line/area
    ax.plot(
        all_angles,
        all_values,
        linewidth=1,
        linestyle="solid",
        solid_joinstyle="round",
        color=shape_color,
    )
    ax.fill(all_angles, all_values, shape_color)
    # Add a small hole in the center of the chart
    ax.plot(0, 0, marker="o", color="white", markersize=3)
Finally, let’s get our chart on a blank canvas (figure).
fig = plt.figure(dpi=100)
ax = fig.add_subplot(111, polar=True) # Don't forget the projection!
draw_stellar(ax, *prepare_data(data))
plt.show()
It’s done! Right now, you have an example of a stellar chart and the boilerplate code to add this type of chart to your repertoire. If you end up creating your own stellar charts, feel free to share them with the world (and me!). I hope this tutorial was useful and interesting for you!
The IPCC’s Special Report on Global Warming of 1.5°C (SR15), published in October 2018, presented the latest research on anthropogenic climate change. It was written in response to the 2015 UNFCCC’s “Paris Agreement” of
“holding the increase in the global average temperature to well below 2 °C above pre-industrial levels and to pursue efforts to limit the temperature increase to 1.5 °C […]”
cf. Article 2.1.a of the Paris Agreement
As part of the SR15 assessment, an ensemble of quantitative, model-based scenarios was compiled to underpin the scientific analysis. Many of the headline statements widely reported by media are based on this scenario ensemble, including the finding that
global net anthropogenic CO2 emissions decline by ~45% from 2010 levels by 2030
in all pathways limiting global warming to 1.5°C (cf. statement C.1 in the Summary For Policymakers).
When preparing the SR15, the authors wanted to go beyond previous reports not just regarding the scientific rigor and scope of the analysis, but also to establish new standards in terms of openness, transparency, and reproducibility.
The scenario ensemble was made accessible via an interactive IAMC 1.5°C Scenario Explorer (link) in line with the FAIR principles for scientific data management and stewardship. The process for compiling, validating and analyzing the scenario ensemble was described in an open-access manuscript published in Nature Climate Change (doi: 10.1038/s41558-018-0317-4).
In addition, the Jupyter notebooks generating many of the headline statements, tables and figures (using Matplotlib) were released under an open-source license to facilitate a better understanding of the analysis and enable reuse for subsequent research. The notebooks are available in rendered format and on GitHub.
To facilitate reusability of the scripts and plotting utilities developed for the SR15 analysis, we started the open-source Python package pyam as a toolbox for working with scenarios from integrated-assessment and energy system models.
The package is a wrapper for pandas and Matplotlib geared for several data formats commonly used in energy modelling. Read the docs!
This year’s Google Season of Docs (GSoD) provided me the opportunity to work with the open source organization, Matplotlib. In early summer, I submitted my proposal of Developing Matplotlib Entry Paths with the goal of improving the documentation with an alternative approach to writing.
I had set out to identify with users more by providing real world contexts to examples and programming. My purpose was to lower the barrier of entry for others to begin using the Python library with an expository approach. I focused on aligning with users based on consistent derived purposes and a foundation of task-based empathy.
The project began during the community bonding phase with learning the fundamentals of building documentation and working with open source code. I later distributed usability testing surveys to the community and consolidated the findings. From these results, I developed two new documents for merging into the Matplotlib repository: a Getting Started introductory tutorial and a lean Style Guide for the documentation.
Throughout this year’s Season of Docs with Matplotlib, I learned a great deal about working on open source projects, contributed by surveying communities and interviewing subject matter experts for documentation usability testing, and produced a comprehensive introductory guide for improving entry-level content, along with an initial style guide section.
As a new user to Git and GitHub, I had a learning curve in getting started with building documentation locally on my machine. Working with cloning repositories and familiarizing myself with commits and pull requests took the bulk of the first few weeks on this project. However, with experiencing errors and troubleshooting broken branches, it was excellent to be able to lean on my mentors for resolving these issues. Platforms like Gitter, Zoom, and HackMD were key in keeping communication timely and concise. I was fortunate to be able to get in touch with the team to help me as soon as I had problems.
With programming, I was not a completely fresh face to Python and Matplotlib. However, installing the library from the source and breaking down functionality to core essentials helped me grow in my understanding of not only the fundamentals, but also the terminology. Tackling everything through my own experience of using Python and then also having suggestions and advice from the development team accelerated the ideas and implementations I aimed to work towards.
New formats and standards with reStructuredText files and Sphinx compatibility were unfamiliar avenues to me at first. In building documentation and reading through already written content, I adapted to making the most of the features available with the ideas I had for writing material suited for users new to Matplotlib. Making use of tables and code examples embedded allowed me to be more flexible in visual layout and navigation.
During the beginning stages of the project, I was able to incorporate usability testing for the current documentation. By reaching out to communities on Twitter, Reddit, and various Slack channels, I compiled and consolidated findings that helped shape the language and focus of new content to create. I summarized and shared the community’s responses in addition to separate informational interviews conducted with subject matter experts in my location. These data points helped in justifying and supporting decisions for the scope and direction of the language and content.
At the end of the project, I completed our agreed upon expectations for the documentation. The focused goal consisted of a Getting Started tutorial to introduce and give context to Matplotlib for new users. In addition, through the documentation as well as the meetings with the community, we acknowledged a missing element of a Style Guide. Though a comprehensive document for the entire library was out of the scope of the project, I put together, in conjunction with the featured task, a lean version that serves as a foundational resource for writing Matplotlib documentation.
The two sections are part of a current pull request to merge into Matplotlib’s repository. I have already worked through smaller changes to the content and am working with the community in moving forward with the process.
This Season of Docs proposal began as a vision of ideals I hoped to share and work towards with an organization and has become a technical writing experience full of growth and camaraderie. I am pleased with the progress I had made and cannot thank the team enough for the leadership and mentorship they provided. It is fulfilling and rewarding to both appreciate and be appreciated within a team.
In addition, the opportunity put together by the team at Google to foster collaboration among skilled contributors cannot be overstated. Highlighting the accomplishments of these new teams raises the bar for the open source community.
Special thanks to Emily Hsu, Joe McEwen, and Smriti Singh for their time and responses, fellow Matplotlib Season of Docs writer Bruno Beltran for his insight and guidance, and the Matplotlib development team mentors Tim, Tom, and Hannah for their patience, support, and approachability for helping a new technical writer like me with my own Getting Started.
My name is Jerome Villegas and I’m a technical writer based in Seattle. I’ve been in education and education-adjacent fields for several years before transitioning to the industry of technical communication. My career has taken me to Taiwan to teach English and work in publishing, then to New York City to work in higher education, and back to Seattle where I worked at a private school.
Since leaving my job, I’ve taken to supporting my family while studying technical writing at the University of Washington and supplementing the knowledge with learning programming on the side. Along with a former classmate, the two of us have worked with the UX writing community in the Pacific Northwest. We host interview sessions, moderate sessions at conferences, and generate content analyzing trends and patterns in UX/tech writing.
As for what I’ve got going on in my life: you can find work I’ve done at my personal site and see what we’re up to at shift J. Thanks for reading!
Code-switching is the practice of alternating between two or more languages in the context of a single conversation, either consciously or unconsciously. As someone who grew up bilingual and is currently learning other languages, I find code-switching a fascinating facet of communication from not only a purely linguistic perspective, but also a social one. In particular, I’ve personally found that code-switching often helps build a sense of community and familiarity in a group and that the unique ways in which speakers code-switch with each other greatly contribute to shaping group dynamics.
This is something that’s evident in seven-member pop boy group WayV. Aside from their discography, artistry, and group chemistry, WayV is well-known among fans and many non-fans alike for their multilingualism and code-switching, which many fans have affectionately coined as “WayV language.” Every member in the group is fluent in both Mandarin and Korean, and at least one member in the group is fluent in one or more of the following: English, Cantonese, Thai, Wenzhounese, and German. It’s an impressive trait that’s become a trademark of WayV as they’ve quickly drawn a global audience since their debut in January 2019. Their multilingualism is reflected in their music as well. On top of their regular album releases in Mandarin, WayV has also released singles in Korean and English, with their latest single “Bad Alive (English Ver.)” being a mix of English, Korean, and Mandarin.
As an independent translator who translates WayV content into English, I’ve become keenly aware of the true extent and rate of WayV’s code-switching when communicating with each other. In a lot of their content, WayV frequently switches between three or more languages every couple of seconds, a phenomenon that can make translating quite challenging at times, but also extremely rewarding and fun. I wanted to be able to present this aspect of WayV in a way that would both highlight their linguistic skills and present this dimension of their group dynamic in a more concrete, quantitative, and visually intuitive manner, beyond just stating that “they code-switch a lot.” This prompted me to make step charts - perfect for displaying data that changes at irregular intervals but remains constant between the changes - in hopes of enriching the viewer’s experience and helping make a potentially abstract concept more understandable and readily consumable. With a step chart, the extent of a group’s code-switching becomes more readily apparent to the viewer, and cross-sections of the graph allow a rudimentary look into how multilinguals influence each other in code-switching.
This tutorial on creating step charts uses one of WayV’s livestreams as an example. There were four members in this livestream and a total of eight languages/dialects spoken. I will go through the basic steps of creating a step chart that depicts the frequency of code-switching for just one member. A full code chunk that shows how to layer two or more step chart lines in one graph to depict code-switching for multiple members can be found near the end.
First, we import the required libraries and load the data into a Pandas dataframe.
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
This dataset includes the timestamp of every switch (in seconds) and the language of switch for one speaker.
df_h = pd.read_csv("WayVHendery.csv")
HENDERY = df_h.reset_index()
HENDERY.head()
index | time | lang
---|---|---
0 | 2 | ENG
1 | 3 | KOR
2 | 10 | ENG
3 | 13 | MAND
4 | 15 | ENG
With the dataset loaded, we can now set up our graph in terms of determining the size of the figure, dpi, font size, and axes limits. We can also play around with the aesthetics, such as modifying the colors of our plot. These few simple steps easily transform the default all-white graph into a more visually appealing one.
# Apply the style before creating the figure so the facecolors take effect
sns.set(rc={'axes.facecolor':'aliceblue', 'figure.facecolor':'c'})
fig, ax = plt.subplots(figsize = (20,12), dpi = 300)
plt.xlabel("Duration of Instagram Live (seconds)", fontsize = 18)
plt.ylabel("Cumulative Number of Times of Code-Switching", fontsize = 18)
plt.xlim(0, 570)
plt.ylim(0, 85)
Following this, we can make our step chart line easily with matplotlib.pyplot.step, in which we plot the x and y values and determine the text of the legend, color of the step chart line, and width of the step chart line.
ax.step(HENDERY.time, HENDERY.index, label = "HENDERY", color = "palevioletred", linewidth = 4)
Of course, we want to know not only how many switches there were and when they occurred, but also to what language the member switched. For this, we can write a for loop that labels each switch with its respective language as recorded in our dataset.
for x,y,z in zip(HENDERY["time"], HENDERY["index"], HENDERY["lang"]):
    label = z
    ax.annotate(label, #text
                (x,y), #label coordinate
                textcoords = "offset points", #how to position text
                xytext = (15,-5), #distance from text to coordinate (x,y)
                ha = "center", #alignment
                fontsize = 8.5) #font size of text
Now add a title, save the graph, and there you have it!
plt.title("WayV Livestream Code-Switching", fontsize = 35)
fig.savefig("wayv_codeswitching.png", bbox_inches = "tight", facecolor = fig.get_facecolor())
Below is the complete code for layering step chart lines for multiple speakers in one graph. You can see how easy it is to take the code for visualizing the code-switching of one speaker and adapt it to visualizing that of multiple speakers. In addition, you can see that I’ve intentionally left the title blank so I can incorporate external graphic adjustments after I created the chart in Matplotlib, such as the addition of my social media handle and the use of a specific font I wanted, which you can see in the final graph. With visualizations being all about communicating information, I believe using Matplotlib in conjunction with simple elements of graphic design can be another way to make whatever you’re presenting that little bit more effective and personal, especially when you’re doing so on social media platforms.
# Initialize graph color and size
sns.set(rc={'axes.facecolor':'aliceblue', 'figure.facecolor':'c'})
fig, ax = plt.subplots(figsize = (20,12), dpi = 120)
# Set up axes and labels
plt.xlabel("Duration of Instagram Live (seconds)", fontsize = 18)
plt.ylabel("Cumulative Number of Times of Code-Switching", fontsize = 18)
plt.xlim(0, 570)
plt.ylim(0, 85)
# Layer step charts for each speaker
ax.step(YANGYANG.time, YANGYANG.index, label = "YANGYANG", color = "firebrick", linewidth = 4)
ax.step(HENDERY.time, HENDERY.index, label = "HENDERY", color = "palevioletred", linewidth = 4)
ax.step(TEN.time, TEN.index, label = "TEN", color = "mediumpurple", linewidth = 4)
ax.step(KUN.time, KUN.index, label = "KUN", color = "mediumblue", linewidth = 4)
# Add legend
ax.legend(fontsize = 17)
# Label each data point with the language switch
for i in (KUN, TEN, HENDERY, YANGYANG): #for each dataset
    for x,y,z in zip(i["time"], i["index"], i["lang"]): #looping within the dataset
        label = z
        ax.annotate(label, #text
                    (x,y), #label coordinate
                    textcoords = "offset points", #how to position text
                    xytext = (15,-5), #distance from text to coordinate (x,y)
                    ha = "center", #alignment
                    fontsize = 8.5) #font size of text
# Add title (blank to leave room for external graphics)
plt.title("\n\n", fontsize = 35)
# Save figure
fig.savefig("wayv_codeswitching.png", bbox_inches = "tight", facecolor = fig.get_facecolor())
Languages/dialects: Korean (KOR), English (ENG), Mandarin (MAND), German (GER), Cantonese (CANT), Hokkien (HOKK), Teochew (TEO), Thai (THAI)
186 total switches! That’s approximately one code-switch in the group every 2.95 seconds.
And voilà! There you have it: a brief guide on how to make step charts. While I utilized step charts here to visualize code-switching, you can use them to visualize whatever data you would like. Please feel free to contact me here if you have any questions or comments. I hope you enjoyed this tutorial, and thank you so much for reading!
Google Summer of Code 2020 is completed. Hurray!! This post discusses the progress made over the three months of the coding period, from 1 June to 24 August 2020, on the project Baseline Images Problem under the matplotlib organisation, under the umbrella of the NumFOCUS organization.
This project helps with the difficulty in adding/modifying tests which require a baseline image. Baseline images are problematic because
So, the idea is to not store the baseline images in the repository, instead to create them from the existing tests.
We had created the matplotlib_baseline_images package. This package lives in the sub-wheels directory so that more packages can be added in the same directory, if needed in the future. The matplotlib_baseline_images package contains baseline images for both matplotlib and mpl_toolkits.
The package can be installed with python3 -mpip install matplotlib_baseline_images.
We successfully created the generate_missing command line flag for baseline image generation for matplotlib and mpl_toolkits in the previous months. Initially, it generated the matplotlib and mpl_toolkits baseline images. Now, we have also modified the existing flow to generate any missing baseline images, which would be fetched from the master branch on doing git pull or git checkout -b feature_branch.
Now, image generation at the time of a fresh install of matplotlib, as well as the generation of missing baseline images, works with python3 -m pytest lib/matplotlib matplotlib_baseline_image_generation for the lib/matplotlib folder and python3 -m pytest lib/mpl_toolkits matplotlib_baseline_image_generation for the lib/mpl_toolkits folder.
We have written documentation explaining the following scenarios:
How is the matplotlib_baseline_images package to be used for testing by the developer?
I am grateful to be part of such a great community. The project is really interesting and challenging :)
Thanks Thomas, Antony and Hannah for helping me to complete this project.
Welcome! This post is not going to discuss technical implementation details or theoretical work for my Google Summer of Code project, but rather serve as a summary and recap of the work that I did this summer.
I am very happy with the work I was able to accomplish and believe that I successfully completed my project.
My project was titled NetworkX: Implementing the Asadpour Asymmetric Traveling Salesman Problem Algorithm. The updated abstract given on the Summer of Code project page is below.
This project seeks to implement the asymmetric traveling salesman problem algorithm developed by Asadpour et al., originally published in 2010 and revised in 2017. The project is broken into multiple methods, each of which has a set timetable within the project. We start by solving the Held-Karp relaxation using the ascent method from the original paper by Held and Karp. Assuming the result is fractional, we continue into the Asadpour algorithm (integral solutions are optimal by definition and immediately returned). We approximate the distribution of spanning trees on the undirected support of the Held-Karp solution using a maximum-entropy rounding method. Roughly speaking, the probability of sampling any given tree is proportional to the product of all its edge lambda values. We sample 2 log n trees from the distribution using an iterative approach developed by V. G. Kulkarni and choose the tree with the smallest cost after restoring direction to the arcs. Finally, the minimum tree is augmented using a minimum network flow algorithm and shortcut down to an O(log n / log log n) approximation of the minimum Hamiltonian cycle.
My proposal PDF for the 2021 Summer of Code can be found here.
All of my changes and additions to NetworkX are part of this pull request and can also be found on this branch in my fork of the GitHub repository, but I will be discussing the changes and commits in more detail later.
Also note that the commits I listed in each section form an incomplete list, covering only focused commits to that function or its tests. For the complete list, please reference the pull request or the bothTSP GitHub branch on my fork of NetworkX.
My contributions to NetworkX this summer consist predominantly of the following functions and classes, each of which I will discuss in their own sections of this blog post. Functions and classes which are front-facing are also linked to the developer documentation for NetworkX in the list below and for their section headers.
SpanningTreeIterator
ArborescenceIterator
held_karp_ascent
spanning_tree_distribution
sample_spanning_tree
asadpour_atsp
These functions have also been unit tested, and those tests will be integrated into NetworkX once the pull request is merged.
The following papers are where all of these algorithms originate from, and they were of course instrumental in the completion of this project.
[1] A. Asadpour, M. X. Goemans, A. Madry, S. O. Gharan, and A. Saberi, An O (log n / log log n)-approximation algorithm for the asymmetric traveling salesman problem, SODA ’10, Society for Industrial and Applied Mathematics, 2010, p. 379 - 389 https://dl.acm.org/doi/abs/10.5555/1873601.1873633.
[2] J. Edmonds, Optimum Branchings, Journal of Research of the National Bureau of Standards, 1967, Vol. 71B, p.233-240, https://archive.org/details/jresv71Bn4p233
[3] M. Held, R.M. Karp, The traveling-salesman problem and minimum spanning trees. Operations research, 1970-11-01, Vol.18 (6), p.1138-1162. https://www.jstor.org/stable/169411
[4] G.K. Janssens, K. Sörensen, An algorithm to generate all spanning trees in order of increasing cost, Pesquisa Operacional, 2005-08, Vol. 25 (2), p. 219-229, https://www.scielo.br/j/pope/a/XHswBwRwJyrfL88dmMwYNWp/?lang=en
[5] V. G. Kulkarni, Generating random combinatorial objects, Journal of algorithms, 11 (1990), p. 185–207.
SpanningTreeIterator
The SpanningTreeIterator was the first contribution I completed as part of my GSoC project.
This class takes a graph and returns every spanning tree in it in order of increasing cost, which makes it a direct implementation of [4].
The interesting thing about this iterator is that it is not used as part of the Asadpour algorithm, but served as an intermediate step so that I could develop the ArborescenceIterator, which is required for the Held-Karp relaxation.
It works by partitioning the edges of the graph as either included, excluded or open and then finding the minimum spanning tree which respects the partition data on the graph edges.
In order to get this to work, I created a new minimum spanning tree function called kruskal_mst_edges_partition which does exactly that. To prevent redundancy, all Kruskal minimum spanning trees now use this function (the original kruskal_mst_edges function is now just a wrapper for the partitioned version).
Once a spanning tree is returned from the iterator, the partition data for that tree is split so that the union of the newly generated partitions is the set of all spanning trees in the partition except the returned minimum spanning tree.
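The partition mechanics can be sketched with a toy Kruskal that respects included and excluded edge sets (an illustrative simplification of the idea, not NetworkX’s actual kruskal_mst_edges_partition; the function and edge representation here are made up):

```python
def partition_mst(nodes, edges, included=frozenset(), excluded=frozenset()):
    """Minimum spanning tree that must use `included` edges and avoid `excluded`.

    edges: iterable of (u, v, weight); included/excluded hold (u, v) pairs.
    Returns the list of tree edges, or None if no spanning tree respects
    the partition. (Toy version: assumes the forced edges are acyclic.)
    """
    parent = {n: n for n in nodes}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra == rb:
            return False
        parent[ra] = rb
        return True

    tree = []
    # Included edges are forced in first; open edges follow in weight order,
    # and excluded edges are never considered at all.
    forced = [e for e in edges if (e[0], e[1]) in included]
    open_ = sorted((e for e in edges
                    if (e[0], e[1]) not in included
                    and (e[0], e[1]) not in excluded),
                   key=lambda e: e[2])
    for u, v, w in forced + open_:
        if union(u, v):
            tree.append((u, v, w))
    return tree if len(tree) == len(nodes) - 1 else None

edges = [("a", "b", 1), ("b", "c", 2), ("a", "c", 3)]
print(partition_mst("abc", edges))                         # cheapest tree overall
print(partition_mst("abc", edges, included={("a", "c")}))  # tree forced through a-c
```

Splitting a partition after returning its minimum tree then amounts to creating child partitions that each exclude one tree edge while including the edges before it, so the children jointly cover every remaining spanning tree exactly once.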
As I mentioned earlier, the SpanningTreeIterator is not directly used in my GSoC project, but I still decided to implement it to understand the partition process and to be able to directly use the examples from [4] before moving on to the ArborescenceIterator. I’m sure this class will be useful to other users of NetworkX, and it provided a strong foundation to build the ArborescenceIterator on.
Blog Posts about SpanningTreeIterator
5 Jun 2021 - Finding All Minimum Arborescences
10 Jun 2021 - Implementing The Iterators
Commits about SpanningTreeIterator
Now, at the beginning of this project, my commit messages were not very good… I had some problems with merge conflicts after I accidentally committed to the wrong branch, and this was the first time I’d used a pre-commit hook.
I have not changed the commit messages here, so that you may be amused by my thoroughly unhelpful messages, but I did annotate them to provide a more accurate description of each commit.
Testing - Rewrote Kruskal’s algorithm to respect partitions and tested that while stubbing the iterators in a separate file
I’m not entirly sure how the commit hook works… - Added test cases and finalized implementation of Spanning Tree Iterator in the incorrect file
Moved iterators into the correct files to maintain proper codebase visibility - Realized that the iterators need to be in mst.py
and branchings.py
respectively to keep private functions hidden
Documentation update for the iterators - No explanation needed
Update mst.py to accept suggestion - Accepted doc string edit from code review
Review suggestions from dshult - Implemented code review suggestions from one of my mentors
Cleaned code, merged functions if possible and opened partition functionality to all
Implement suggestions from boothby
ArborescenceIterator
The ArborescenceIterator is a modified version of the algorithm discussed in [4] so that it iterates over spanning arborescences. This iterator was a bit more difficult to implement, but that is due to how the minimum spanning arborescence algorithm is structured rather than the partition scheme not being applicable to directed graphs. In fact, the partition scheme is identical to the undirected SpanningTreeIterator, but Edmonds’ algorithm is more complex and there are several edge cases about how nodes can be contracted and what it means to respect the partition data.
In order to fully understand the NetworkX implementation, I had to read the original Edmonds paper, [2].
The most notable change was that when the iterator writes the next partition onto the edges of the graph just before Edmonds’ algorithm is executed, if any incoming edge of a vertex is marked as included, all of that vertex’s other incoming edges are marked as excluded. This is implicit in the SpanningTreeIterator, but needed to be done explicitly here so that if the vertex in question was merged during Edmonds’ algorithm we could not choose two incoming edges to the same vertex once the merging was reversed.
As a final note, the ArborescenceIterator has one more initial parameter than the SpanningTreeIterator: the ability to give it an initial partition and iterate over all spanning arborescences with cost greater than that partition’s. This was used as part of the branch and bound method, but is no longer a part of my Asadpour algorithm implementation.
Blog Posts about ArborescenceIterator
5 Jun 2021 - Finding All Minimum Arborescences
10 Jun 2021 - Implementing The Iterators
Commits about ArborescenceIterator
My commits listed here are still annotated and much of the work was done at the same time.
Testing - Rewrote Kruskal’s algorithm to respect partitions and tested that while stubbing the iterators in a separate file
Moved iterators into the correct files to maintain proper codebase visibility - Realized that the iterators need to be in mst.py
and branchings.py
respectively to keep private functions hidden
Including Black reformat - Modified Edmonds’ algorithm to respect partitions
Modified the ArborescenceIterator to accept init partition - No explanation needed
Documentation update for the iterators - No explanation needed
Update branchings.py accept doc string edit - No explanation needed
Review suggestions from dshult - Implemented code review suggestions from one of my mentors
Cleaned code, merged functions if possible and opened partition functionality to all
Implemented review suggestions from rossbar
Implement suggestions from boothby
held_karp_ascent
The Held Karp relaxation was the most difficult part of my GSoC project and the part that I was the most worried about going into this May.
My plans on how to solve the relaxation evolved over the course of the summer as well, finally culminating in held_karp_ascent.
In my GSoC proposal, I discuss using scipy to solve the relaxation, but the Held Karp relaxation is a linear program with an exponential number of constraints, so I would quickly surpass the capabilities of virtually any computer that the code would be run on. Fortunately, I realized this while I was still writing my proposal and was able to change it.
Next, I wanted to use the ellipsoid algorithm because that is the suggested method in the Asadpour paper [1].
As it happens, the ellipsoid algorithm is not implemented in numpy or scipy, and after discussing the practicality of implementing the algorithm as part of this project, we decided that a robust ellipsoid solver was a GSoC project unto itself and beyond the scope of the Asadpour algorithm. Another method was needed, and one was found.
In the original paper by Held and Karp [3], they present three different algorithms for solving the relaxation: the column-generation technique, the ascent method, and the branch and bound method.
After reading the paper and comparing all of the methods, I decided that the branch and bound method was the best in terms of performance and wanted to implement that one.
The branch and bound method is a modified version of the ascent method, so I started by implementing the ascent method, then the branch and bound around it. This had the extra benefit of allowing me to compare the two and determine which is actually better.
Implementing the ascent method proved difficult. There were a number of subtle bugs in finding the minimum 1-arborescences and in finding the value of epsilon, caused by not recognizing all of the valid edge substitutions in the graph. More information about these problems can be found in my post titled Understanding the Ascent Method. Even after this the ascent method was not working properly, but I decided to move on to the branch and bound method in hopes of learning more about the process so that I could fix the ascent method.
That is exactly what happened! While debugging the branch and bound method, I realized that my function for finding the set of minimum 1-arborescences would stop searching too soon and possibly miss the minimum 1-arborescences. Once I fixed that bug, both the ascent as well as the branch and bound method started to produce the correct results.
But which one would be used in the final project?
Well, that came down to which output was more compatible with the rest of the Asadpour algorithm. The ascent method could find a fractional solution, where the edges are not totally in or out of the solution, while the branch and bound method would take the time to ensure that the solution was integral. As it happens, the Asadpour algorithm expects a fractional solution to the Held Karp relaxation, so in the end the ascent method won out and the branch and bound method was removed from the project.
All of this is detailed in the (many) blog posts I wrote on this topic, which are listed below.
Blog posts about the Held Karp relaxation
My first two posts were about the scipy solution and the ellipsoid algorithm.
11 Apr 2021 - Held Karp Relaxation
8 May 2021 - Held Karp Separation Oracle
This next post discusses the merits of each algorithm presented in the original Held and Karp paper [3].
3 Jun 2021 - A Closer Look At Held Karp
And finally, the last three Held Karp related posts are about the debugging of the algorithms I did implement.
22 Jun 2021 - Understanding The Ascent Method
28 Jun 2021 - Implementing The Held Karp Relaxation
7 Jul 2021 - Finalizing Held Karp
Commits about the Held Karp relaxation
Annotations are only provided where needed.
Grabbing black reformats - Initial Ascent method implementation
Working on debugging ascent method plus black reformats
Ascent method terminating, but at non-optimal solution
minor edits - Removed some debug statements
Fixed termination condition, still given non-optimal result
Minor bugfix, still non-optimal result - Ensured reported answer is the cycle if multiple options
Fixed subtle bug in find_epsilon() - Fixed the improper substitute detection bug
Cleaned code and tried something which didn’t work
Black formats - Initial branch and bound implementation
Branch and bound returning optimal solution
black formatting changes - Split ascent and branch and bound methods into different functions
Performance tweaks and testing fractional answers
Asadpour output for ascent method
Removed branch and bound method. One unit test misbehaving
Added asymmetric fractional test for the ascent method
Removed printn statements and tweaked final test to be more asymmetric
Changed HK to only report on the support of the answer
spanning_tree_distribution
Once we have the support of the Held Karp relaxation, we calculate edge weights \(\gamma\) for the support so that the probability of any tree being sampled is proportional to the product of \(e^{\gamma}\) across its edges. The Asadpour paper [1] calls this a maximum entropy distribution and gives the procedure on page 386.
- Set \(\gamma = \vec{0}\).
- While there exists an edge \(e\) with \(q_e(\gamma) > (1 + \epsilon)z_e\):
- Compute \(\delta\) such that if we define \(\gamma’\) as \(\gamma_e’ = \gamma_e - \delta\) and \(\gamma_f’ = \gamma_f\) for all \(f \in E \backslash \{e\}\), then \(q_e(\gamma’) = (1 + \epsilon / 2)z_e\)
- Set \(\gamma \leftarrow \gamma’\)
- Output \(\tilde{\gamma} := \gamma\).
Where \(q_e(\gamma)\) is the probability that any given edge \(e\) will be in a sampled spanning tree chosen with probability proportional to \(\exp(\gamma(T))\). \(\delta\) is also given as
\[ \delta = \frac{q_e(\gamma)(1-(1+\epsilon/2)z_e)}{(1-q_e(\gamma))(1+\epsilon/2)z_e} \]
so the Asadpour paper did almost all of the heavy lifting for this function. However, they were not very clear on how to calculate \(q_e(\gamma)\), other than that Kirchhoff’s Tree Matrix Theorem can be used.
My original method for calculating \(q_e(\gamma)\) was to apply Kirchhoff’s Theorem to the original Laplacian matrix and to the Laplacian produced once the edge \(e\) is contracted in the graph. Testing quickly showed that once the edge is contracted, it cannot affect the value of the Laplacian, and thus after subtracting \(\delta\) the probability of that edge would increase rather than decrease. Multiplying my original value of \(q_e(\gamma)\) by \(\exp(\gamma_e)\) proved to be the solution here, for reasons extensively discussed in my blog post The Entropy Distribution, in particular the “Update! (28 July 2021)” section.
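To make \(q_e(\gamma)\) concrete, here is a brute-force version on a triangle graph, where every pair of edges is a spanning tree. This only illustrates the quantity involved; the real code computes it via Kirchhoff's theorem rather than enumeration, and the edge labels here are made up:

```python
import math
from itertools import combinations

# Brute-force q_e(gamma) on a triangle graph: every pair of the three
# edges is a spanning tree, so we can enumerate them directly.

def q_e(gamma, e):
    """Probability that edge e is in a tree sampled with probability
    proportional to exp(sum of gamma over the tree's edges)."""
    trees = list(combinations(sorted(gamma), 2))
    weight = lambda T: math.exp(sum(gamma[f] for f in T))
    total = sum(weight(T) for T in trees)
    return sum(weight(T) for T in trees if e in T) / total

gamma = {"a": 0.0, "b": 0.0, "c": 0.0}
print(q_e(gamma, "a"))  # 0.666... -- each edge is in 2 of the 3 trees

# Lowering gamma_a lowers edge a's sampling probability, which is
# exactly what subtracting delta in the update loop exploits.
gamma["a"] -= 1.0
print(q_e(gamma, "a") < 2 / 3)  # True
```

With all \(\gamma\) equal, each edge has probability 2/3; decreasing one edge's \(\gamma\) strictly decreases its probability, which is the behavior the \(\delta\) update relies on.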
Blog posts about spanning_tree_distribution
13 Jul 2021 - Entropy Distribution Setup
20 Jul 2021 - The Entropy Distribution
Commits about spanning_tree_distribution
Draft of spanning_tree_distribution
Changed HK to only report on the support of the answer - Needing to limit \(\gamma\) to only the support of the Held Karp relaxation is what caused this change
Fixed contraction bug by changing to MultiGraph. Problem with prob > 1 - Because the probability is only proportional to the product of the edge weights, this was not actually a problem
Black reformats - Rewrote the test and cleaned the code
Fixed pypi test error - The pypi tests do not have numpy or scipy, and I forgot to flag the test to be skipped if they are not available
Further testing of dist fix - Fixed function to multiply \(q_e(\gamma)\) by \(\exp(\gamma_e)\) and implemented exception if \(\delta\) ever misbehaves
Can sample spanning trees - Streamlined finding \(q_e(\gamma)\) using new helper function
Review suggestions from dshult - Implemented code review suggestions from one of my mentors
Implement suggestions from boothby
sample_spanning_tree
What good is a spanning tree distribution if we can’t sample from it?
While the Asadpour paper [1] provides a rough outline of the sampling process, the bulk of their methodology comes from the Kulkarni paper, Generating random combinatorial objects [5]. That paper had a much more detailed explanation and even this pseudo code from page 202.
\(U = \emptyset,\) \(V = E\)
Do \(i = 1\) to \(N\);
\(\qquad\)Let \(a = n(G(U, V))\)
\(\qquad\qquad a’\) \(= n(G(U \cup {i}, V))\)
\(\qquad\)Generate \(Z \sim U[0, 1]\)
\(\qquad\)If \(Z \leq \alpha_i \times \left(a’ / a\right)\)
\(\qquad\qquad\)then \(U = U \cup {i}\),
\(\qquad\qquad\)else \(V = V - {i}\)
\(\qquad\)end.
Stop. \(U\) is the required spanning tree.
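The loop above can be exercised on a toy triangle graph, counting spanning trees by brute force for \(n(G(U, V))\) and taking \(\alpha_i = 1\) (uniform sampling). This is a sketch of the idea, not the project's implementation:

```python
import random
from itertools import combinations

NODES = (0, 1, 2)
EDGES = [(0, 1), (1, 2), (0, 2)]  # any two edges of the triangle form a tree

def is_spanning_tree(tree_edges):
    # Tiny union-find connectivity check.
    parent = {n: n for n in NODES}
    def find(x):
        while parent[x] != x:
            x = parent[x]
        return x
    merges = 0
    for u, v in tree_edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
            merges += 1
    return merges == len(NODES) - 1

def n_trees(included, allowed):
    """n(G(U, V)): spanning trees drawn from `allowed` containing `included`."""
    return sum(
        1
        for T in combinations(allowed, len(NODES) - 1)
        if included <= set(T) and is_spanning_tree(T)
    )

def sample_tree(rng):
    U, V = set(), list(EDGES)
    for e in EDGES:
        a = n_trees(U, V)
        a_in = n_trees(U | {e}, V)
        if rng.random() <= a_in / a:  # alpha_i = 1: uniform distribution
            U.add(e)      # commit edge e to the tree
        else:
            V.remove(e)   # discard edge e entirely
    return frozenset(U)

rng = random.Random(42)
counts = {}
for _ in range(3000):
    t = sample_tree(rng)
    counts[t] = counts.get(t, 0) + 1
# Each of the triangle's three spanning trees appears roughly 1000 times.
```

Working through the probabilities by hand, each of the three trees is sampled with probability exactly 1/3, which is what the Kulkarni procedure guarantees for the uniform case.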
The only real difficulty here was tracking how the nodes were being contracted. My first attempt was a mess of if statements and the like, but switching to a merge-find data structure (also called a disjoint set data structure) proved to be a wise decision.
Of course, it is one thing to be able to sample a spanning tree and another entirely to know if the sampling technique matches the expected distribution.
My first iteration of the test for sample_spanning_tree just sampled a large number of trees (50000) and then printed the percent error from the normalized distribution of spanning trees. With a sample size of 50000 all of the errors were under 10%, but I still wanted to find a better test.
From my AP Statistics class in high school I remembered the \(\chi^2\) (chi-squared) test and realized that it would be perfect here; scipy even has the ability to conduct one.
By converting to a chi-squared test I was able to reduce the sample size down to 1200 (near the minimum required for a valid chi-squared test) and use a proper hypothesis test at the \(\alpha = 0.01\) significance level.
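For illustration, here is a hand-rolled version of such a goodness-of-fit test. The project uses scipy's implementation; the counts below are made up, and the critical value is taken from a standard \(\chi^2\) table:

```python
# Chi-squared goodness-of-fit sketch: do observed tree counts match the
# expected distribution? The statistic is computed by hand and compared
# to a tabulated critical value.

def chi_squared_stat(observed, expected):
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Hypothetical counts for 8 equally likely spanning trees, 1200 samples.
observed = [160, 143, 155, 149, 138, 152, 157, 146]
expected = [1200 / 8] * 8

stat = chi_squared_stat(observed, expected)
# Critical value for df = 7 at alpha = 0.01, from a chi-squared table.
CRITICAL = 18.475
print(stat < CRITICAL)  # fail to reject: sampling matches the distribution
```

If the statistic exceeded the critical value we would reject the hypothesis that the sampler follows the target distribution; here it falls well below.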
Unfortunately, the test would still fail 1% of the time until I added the @py_random_state decorator to sample_spanning_tree; the test can then pass in a Random object to produce repeatable results.
Blog posts about sample_spanning_tree
21 Jul 2021 - Preliminaries For Sampling A Spanning Tree
28 Jul 2021 - Sampling A Spanning Tree
Commits about sample_spanning_tree
Developing test for sampling spanning tree
Changed sample_spanning_tree test to Chi squared test
Adding test cases - Implemented @py_random_state decorator
Review suggestions from dshult - Implemented code review suggestions from one of my mentors
asadpour_atsp
This function was the last piece of the puzzle, connecting all of the others together and producing the final result!
Implementation of this function was actually rather smooth.
The only technical difficulty I had was reading the support of the flow_dict, and the theoretical difficulty was adapting the min_cost_flow function to solve the minimum circulation problem. Oh, and that if the flow along an edge is greater than 1 I need to add parallel edges to the graph so that it stays Eulerian.
A brief overview of the whole algorithm is given below:
Blog posts about asadpour_atsp
29 Jul 2021 - Looking At The Big Picture
10 Aug 2021 - Completing The Asadpour Algorithm
Commits about asadpour_atsp
untested implementation of asadpour_tsp
Fixed runtime errors in asadpour_tsp - The general traveling salesman problem function assumed graphs were undirected, which does not work with an ATSP algorithm
black reformats - Fixed parallel edges from flow support bug
Fixed rounding error with tests
Review suggestions from dshult - Implemented code review suggestions from one of my mentors
Implemented review suggestions from rossbar
Overall, I really enjoyed this Summer of Code. I was able to branch out, continue to learn Python, and learn more about graphs and graph algorithms, which is an area of interest for me.
Assuming that I have any amount of free time this coming fall semester, I’d love to stay involved with NetworkX. In fact, there are already some things that I have in mind even though my current code works as is.
Move sample_spanning_tree to mst.py and rename it to random_spanning_tree. The ability to sample random spanning trees is not currently part of the greater NetworkX library and could be useful to others. One of my mentors mentioned it being relevant to Steiner trees, and if I can help other developers and users out, I will.
Adapt sample_spanning_tree so that it can use both additive and multiplicative weight functions. The Asadpour algorithm only needs the multiplicative weight, but the Kulkarni paper [5] does talk about using an additive weight function, which may be more useful to other NetworkX users.
Move my Kirchhoff’s Tree Matrix Theorem helper function to laplacian_matrix.py so that other NetworkX users can access it.
Investigate the following article about the Held Karp relaxation. While I have no definite evidence for this one, I do believe that the Held Karp relaxation is the slowest part of my implementation of the Asadpour algorithm and thus is the best place for improving it. The ascent method I am using comes from the original Held and Karp paper [3], but they did release a part II which may have better algorithms in it. The citation is given below.
M. Held, R.M. Karp, The traveling-salesman problem and minimum spanning trees: Part II. Mathematical Programming, 1971, 1(1), p. 6–25. https://doi.org/10.1007/BF01584070
Refactor the Edmonds class in branchings.py. That class is the implementation of Edmonds’ branching algorithm but uses an iterative approach rather than the recursive one discussed in Edmonds’ paper [2]. I also agreed to work with another person, lkora, to help rework this class and possibly add a minimum_maximal_branching function to find the minimum branching which still connects as many nodes as possible.
This would be analogous to a spanning forest in an undirected graph.
At the moment, neither of us have had time to start such work.
For more information please reference issue #4836.
While there are areas of this problem which I can improve upon, it is important for me to remember that this project was still a complete success. NetworkX now has an algorithm to approximate the traveling salesman problem in asymmetric or directed graphs.
My implementation of asadpour_atsp is now working!
Recall that my pseudo code for this function from my last post was
    def asadpour_tsp
        Input: A complete graph G with weight being the attribute key for the edge weights.
        Output: A list of edges which form the approximate ATSP solution.

        z_star = held_karp(G)
        # test to see if z_star is a graph or dict
        if type(z_star) is nx.DiGraph
            return z_star.edges

        z_support = nx.MultiGraph()
        for u, v in z_star
            if (u, v) not in z_support.edges
                edge_weight = min(G[u][v][weight], G[v][u][weight])
                z_support.add_edge(u, v, weight=edge_weight)

        gamma = spanning_tree_distribution(z_support, z_star)
        for u, v in z_support.edges
            z_support[u][v][lambda] = exp(gamma[(u, v)])

        for _ in range 1 to 2 ceil(log(n))
            sampled_tree = sample_spanning_tree(G)
            sampled_tree_weight = sampled_tree.size()
            if sampled_tree_weight < minimum_sampled_tree_weight
                minimum_sampled_tree = sampled_tree.copy()
                minimum_sampled_tree_weight = sampled_tree_weight

        t_star = nx.DiGraph
        for u, v, d in minimum_sampled_tree.edges(data=weight)
            if d == G[u][v][weight]
                t_star.add_edge(u, v, weight=d)
            else
                t_star.add_edge(v, u, weight=d)

        for n in t_star
            node_demands[n] = t_star.out_degree(n) - t_star.in_degree(n)
        nx.set_node_attributes(G, node_demands)
        flow_dict = nx.min_cost_flow(G)

        for u, v in flow_dict
            if (u, v) not in t_star.edges and flow_dict[u, v] > 0
                t_star.add_edge(u, v)

        eulerian_circuit = nx.eulerian_circuit(t_star)
        return _shortcutting(eulerian_circuit)
And this was more or less correct. A few issues were present, as they always were going to be.
First, my largest issue came from part of a word being in parentheses in the Asadpour paper, on page 385.
This integral circulation \(f^*\) corresponds to a directed (multi)graph \(H\) which contains \(\vec{T}^*\).
Basically, if the minimum flow is ever larger than 1 along an edge, I need to add that many parallel edges in order to ensure that everything is still Eulerian. This became a problem quickly while developing my test cases, as shown in the below example.
As you can see, for the incorrect circulation, vertices 2 and 3 are not Eulerian, as their in and out degrees do not match.
All of the others were just minor points where the pseudo code didn’t directly translate into python (because, after all, it isn’t python).
The first thing I did once asadpour_atsp was working was to take the fractional, symmetric Held Karp relaxation test graph and run it through the general traveling_salesman_problem function.
Since there are random numbers involved here, the results were always within the \(O(\log n / \log \log n)\) approximation factor but varied from run to run.
Three examples are shown below.
The first thing we want to check is the approximation ratio.
We know that the minimum cost output of the traveling_salesman_problem function is 304 (this is actually lower than the optimal tour in the undirected version, more on this later).
Next we need to know what our maximum approximation factor is.
Now, the Asadpour algorithm is \(O(\log n / \log \log n)\) which for our six vertex graph would be \(\ln(6) / \ln(\ln(6)) \approx 3.0723\).
However, on page 386 they give the coefficients of the approximation as \((2 + 8 \log n / \log \log n)\) which would be \(2 + 8 \times \ln(6) / \ln(\ln(6)) \approx 26.5784\).
(Remember that all \(\log\)’s in the Asadpour paper refer to the natural logarithm.)
All of our examples are well below even the lower limit.
For example 1:
\[ \begin{array}{r l} \text{actual}: & 504 \\\ \text{expected}: & 304 \\\ \text{approx. factor}: & \frac{504}{304} \approx 1.6578 < 3.0723 \end{array} \]
Example 2:
\[ \begin{array}{r l} \text{actual}: & 404 \\\ \text{expected}: & 304 \\\ \text{approx. factor}: & \frac{404}{304} \approx 1.3289 < 3.0723 \end{array} \]
Example 3:
\[ \begin{array}{r l} \text{actual}: & 304 \\\ \text{expected}: & 304 \\\ \text{approx. factor}: & \frac{304}{304} = 1.0000 < 3.0723 \end{array} \]
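The arithmetic above can be double-checked in a few lines (all logs natural, as the paper uses):

```python
import math

# Double-check the approximation-factor arithmetic for the n = 6 test graph.
n = 6
bound = math.log(n) / math.log(math.log(n))    # O(log n / log log n) factor
paper_bound = 2 + 8 * bound                    # coefficient from page 386
print(round(bound, 4), round(paper_bound, 4))  # 3.0723 26.5784

# Every sampled tour is well under even the smaller bound.
for actual in (504, 404, 304):
    assert actual / 304 < bound
```

Even the worst of the three sampled tours (factor ≈ 1.6578) sits comfortably below the theoretical guarantee.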
At this point, you’ve probably noticed that the examples given are, strictly speaking, not Hamiltonian cycles: they visit some vertices multiple times.
This is because the graph we have is not complete.
The Asadpour algorithm only works on complete graphs, so the traveling_salesman_problem function finds the shortest cost path between every pair of vertices and inserts the missing edges. In fact, if the asadpour_atsp function is given an incomplete graph, it will raise an exception.
Take example three, which has only one repeated vertex, 5.
Behind the scenes, the graph is complete and the solution may contain the dashed edge in the below image.
But that edge is not in the original graph, so during the post-processing done by the traveling_salesman_problem function, the red edges are inserted instead of the dashed edge.
Before I could write any tests, I needed to ensure that the tests were consistent from execution to execution.
At the time, this was not the case since there were random numbers being generated in order to sample the spanning trees.
So I had to learn how to use the @py_random_state decorator.
When this decorator is added to the top of a function, we pass it either the position of the argument in the function signature or the name of the keyword for that argument. It then takes that argument and configures a python Random object based on the input parameter:
- None: use a new Random object.
- an int: use a new Random object seeded with that value.
- a Random object: use that object as is.
So I changed the function signature of sample_spanning_tree to have random=None at the end.
For most use cases, the default value will not be changed and the results will be different every time the method is called, but if we give it an int, the same tree will be sampled every time. For my tests, I can give it a seed to create repeatable behaviour.
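The decorator's normalization logic amounts to something like the following sketch. This is a simplified stand-in for NetworkX's @py_random_state (the name py_random_state_sketch and the keyword-only handling are my own), not its actual source:

```python
import random
from functools import wraps

def py_random_state_sketch(arg_name):
    """Simplified stand-in: normalize the named keyword argument into a
    random.Random instance before calling the wrapped function."""
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            seed = kwargs.get(arg_name)
            if seed is None:
                kwargs[arg_name] = random.Random()      # fresh generator
            elif isinstance(seed, int):
                kwargs[arg_name] = random.Random(seed)  # seeded generator
            elif isinstance(seed, random.Random):
                pass                                    # use object as is
            else:
                raise ValueError(f"cannot make a Random from {seed!r}")
            return func(*args, **kwargs)
        return wrapper
    return decorator

@py_random_state_sketch("random")
def sample_value(random=None):
    return random.random()

# An int seed gives repeatable results; a shared Random object lets
# consecutive calls draw from one stream, as in my tests.
print(sample_value(random=42) == sample_value(random=42))  # True
```

Passing the same int twice repeats the result, while passing one shared Random object advances a single stream across calls, which is exactly the distinction my tests needed.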
Since the sample_spanning_tree function is not visible outside of the traveling_salesman file, I also had to create a pass-through parameter for asadpour_atsp so that my seed could have any effect.
Once this was done, I modified the test for sample_spanning_tree so that it would not have a 1 in 100 chance of spontaneously failing.
At first I just passed it an int, but that forced every tree sampled to be the same (since the edges were shuffled the same and sampled from the same sequence of numbers) and the test failed. So I tweaked it to use a Random object from the random package, and this worked well.
From here, I wrap the asadpour_atsp call with the parameters I want in another function, fixed_asadpour, like this:

    def fixed_asadpour(G, weight):
        return nx_app.asadpour_atsp(G, weight, 56)

    path = nx_app.traveling_salesman_problem(
        G, weight="weight", cycle=False, method=fixed_asadpour
    )
I tested using both traveling_salesman_problem and asadpour_atsp.
The tests included:
There is even a bonus feature!
The asadpour_atsp function accepts a fourth argument, source! Since both of the return paths use the eulerian_circuit and _shortcutting functions, I can pass a source vertex to the circuit function and ensure that the returned path starts at and returns to the desired vertex.
Access it by wrapping the method; just be sure that the source vertex is in the graph to avoid an exception.

    def fixed_asadpour(G, weight):
        return nx_app.asadpour_atsp(G, weight, source=0)

    path = nx_app.traveling_salesman_problem(
        G, weight="weight", cycle=False, method=fixed_asadpour
    )
A. Asadpour, M. X. Goemans, A. Madry, S. O. Gharan, and A. Saberi, An O (log n / log log n)-approximation algorithm for the asymmetric traveling salesman problem, SODA ’10, Society for Industrial and Applied Mathematics, 2010, https://dl.acm.org/doi/abs/10.5555/1873601.1873633.
Google Summer of Code 2020’s second evaluation is completed. I passed!!! Hurray! Now we are midway through the last evaluation. This post discusses the progress in the first two weeks of the third coding period, from 26 July to 9 August 2020.
We successfully created the matplotlib_baseline_image_generation command line flag for baseline image generation for matplotlib and mpl_toolkits in the previous months. It was generating the matplotlib and the matplotlib toolkit baseline images successfully. Now, we modified the existing flow to generate any missing baseline images, which would be fetched from the master branch on doing git pull or git checkout -b feature_branch.
We initially thought of creating a command line flag generate_baseline_images_for_test "test_a,test_b", but on analysis of the approach we came to the conclusion that the developer would not know which test names to give along with the flag. So we tried to generate the missing images with generate_missing, without the test names. This worked successfully.
Later, we refactored the matplotlib_baseline_image_generation and generate_missing command line flags into the single command line flag matplotlib_baseline_image_generation, as the logic was similar for both of them. Now, image generation on a fresh install of matplotlib and the generation of missing baseline images both work with python3 -m pytest lib/matplotlib matplotlib_baseline_image_generation for the lib/matplotlib folder and python3 -m pytest lib/mpl_toolkits matplotlib_baseline_image_generation for the lib/mpl_toolkits folder.
We have written documentation explaining the following scenarios:
- How the matplotlib_baseline_images_package is to be used for testing by the developer

Right now, we are trying to refactor the code and maintain a clean git history. The current PR is under review. I am working on the suggested changes. We are trying to merge this :)
Meetings from Monday to Thursday are initiated at 11:00pm IST via Zoom. Meeting notes are available at HackMD.
I am grateful to be part of such a great community. The project is really interesting and challenging :) Thanks Thomas, Antony, and Hannah for helping me so far.
Well, we’re finally at the point in this GSoC project where the end is glimmering on the horizon. I have completed the Held Karp relaxation, generating a spanning tree distribution, and now sampling from that distribution. That means that it is time to start thinking about how to link these separate components into one algorithm.
Recall that from the Asadpour paper the overview of the algorithm is
Algorithm 1 An \(O(\log n / \log \log n)\)-approximation algorithm for the ATSP
Input: A set \(V\) consisting of \(n\) points and a cost function \(c\ :\ V \times V \rightarrow \mathbb{R}^+\) satisfying the triangle inequality.
Output: \(O(\log n / \log \log n)\)-approximation of the asymmetric traveling salesman problem instance described by \(V\) and \(c\).
1. Solve the Held-Karp LP relaxation of the ATSP instance to get an optimum extreme point solution \(x^*\). Define \(z^*\) as in (5), making it a symmetrized and scaled down version of \(x^*\). Vector \(z^*\) can be viewed as a point in the spanning tree polytope of the undirected graph on the support of \(x^*\) that one obtains after disregarding the directions of arcs. (See Section 3.)
2. Let \(E\) be the support graph of \(z^*\) when the direction of the arcs are disregarded. Find weights \({\tilde{\gamma}}_{e \in E}\) such that the exponential distribution on the spanning trees, \(\tilde{p}(T) \propto \exp(\sum_{e \in T} \tilde{\gamma}_e)\) (approximately) preserves the marginals imposed by \(z^*\), i.e. for any edge \(e \in E\), \(\sum_{T \in \mathcal{T} : T \ni e} \tilde{p}(T) \leq (1 + \epsilon) z^*_e\), for a small enough value of \(\epsilon\). (In this paper we show that \(\epsilon = 0.2\) suffices for our purpose. See Sections 7 and 8 for a description of how to compute such a distribution.)
3. Sample \(2\lceil \log n \rceil\) spanning trees \(T_1, \dots, T_{2\lceil \log n \rceil}\) from \(\tilde{p}(.)\). For each of these trees, orient all its edges so as to minimize its cost with respect to our (asymmetric) cost function \(c\). Let \(T^*\) be the tree whose resulting cost is minimal among all of the sampled trees.
4. Find a minimum cost integral circulation that contains the oriented tree \(\vec{T}^*\). Shortcut this circulation to a tour and output it. (See Section 4.)
We are now firmly in the steps 3 and 4 area.
Going all the way back to my post on 24 May 2021 titled Networkx Function Stubs, the only function left is asadpour_tsp, the main function which needs to accomplish this entire algorithm.
But before we get to creating pseudo code for it there is still step 4 which needs a thorough examination.
Once we have sampled enough spanning trees from the graph and converted the minimum one into \(\vec{T}^*\) we need to find the minimum cost integral circulation in the graph which contains \(\vec{T}^*\).
While NetworkX has a minimum cost flow function, namely min_cost_flow, it is not suitable for the Asadpour algorithm out of the box.
The problem here is that we do not have node demands, we have edge demands.
However, after some reading and discussion with one of my mentors, Dan, we can convert the current problem into one which can be solved using the min_cost_flow function. The problem that we are trying to solve is called the minimum cost circulation problem, and the one which min_cost_flow is able to solve is the, well, minimum cost flow problem. As it happens, these are equivalent problems, so I can convert the minimum cost circulation problem into a minimum cost flow problem by transforming the minimum edge demands into node demands.
Recall that at this point we have a directed minimum sampled spanning tree \(\vec{T}^*\) and that the flow through each of the edges in \(\vec{T}^*\) needs to be at least one. From the perspective of a flow problem, \(\vec{T}^*\) is moving some flow around the graph. However, in order to augment \(\vec{T}^*\) into an Eulerian graph so that we can walk it, we need to counteract this flow so that the net flow for each node is 0 (\(f(\delta^+(v)) = f(\delta^-(v))\) in the Asadpour paper).
So, we find the net flow of each node and then assign its demand to be the negative of that number so that the flow will balance at the node in question. If the total flow at any node \(i\) is \(\delta^+(i) - \delta^-(i)\) then the demand we assign to that node is \(\delta^-(i) - \delta^+(i)\). Once we assign the demands to the nodes we can temporarily ignore the edge lower capacities to find the minimum flow.
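As a sketch of this conversion on made-up data (the arcs below are hypothetical, not from the project's graphs):

```python
from collections import Counter

# Toy conversion: a hypothetical directed tree T*, each arc carrying
# one unit of flow.
arcs = [(0, 1), (1, 2), (1, 3)]
nodes = {0, 1, 2, 3}

out_deg = Counter(u for u, _ in arcs)
in_deg = Counter(v for _, v in arcs)

# demand(v) = in_degree(v) - out_degree(v): the negative of v's net
# outflow in the tree, so a min cost flow meeting these demands
# balances every node.
demands = {v: in_deg[v] - out_deg[v] for v in nodes}

# A feasible flow problem requires the demands to sum to zero.
assert sum(demands.values()) == 0
```

These demands would then be attached as node attributes before calling the flow solver, temporarily ignoring the edge lower capacities as described above.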
For more information on the conversion process, please see [2].
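As a concrete illustration of this conversion, here is a minimal, hypothetical sketch on a made-up three node example (not the actual implementation). Under NetworkX's sign convention a positive demand means the node wants to receive flow, so each node's counteracting demand works out to its out-degree minus its in-degree in \(\vec{T}^*\).

```python
import networkx as nx

# Hypothetical example: T* is the directed tree 0 -> 1 -> 2, so the
# augmenting flow has to return one unit from node 2 back to node 0.
t_star = nx.DiGraph([(0, 1), (1, 2)])

G = nx.DiGraph()
G.add_weighted_edges_from([(0, 1, 2), (1, 2, 2), (2, 0, 1), (1, 0, 3), (2, 1, 3)])

# NetworkX convention: positive demand = the node wants to receive flow.
# A node whose T* out-degree exceeds its in-degree must receive the
# difference back, so its demand is out-degree minus in-degree.
for n in G:
    G.nodes[n]["demand"] = t_star.out_degree(n) - t_star.in_degree(n)

flow_dict = nx.min_cost_flow(G)
# The cheapest way to return the unit is the single arc (2, 0).
assert flow_dict[2][0] == 1
```

Adding the support of this flow to \(\vec{T}^*\) closes the walk \(0 \to 1 \to 2 \to 0\), which is exactly the Eulerian augmentation described above.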
After the minimum flow is found, we take the support of the flow and add it to the \(\vec{T}^*\) to create a multigraph \(H\).
Now we know that \(H\) is weakly connected (it contains \(\vec{T^*}\)) and that it is Eulerian because for every node the in-degree is equal to the out-degree.
A closed Eulerian walk, or Eulerian circuit, can be found in this graph with eulerian_circuit.
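As a quick sanity check of this step, a directed cycle is already Eulerian, and eulerian_circuit walks it. This is a minimal sketch on a made-up triangle, not the algorithm's actual graph:

```python
import networkx as nx

# Minimal Eulerian multigraph: the directed triangle 0 -> 1 -> 2 -> 0.
# Every node has in-degree equal to out-degree and the graph is connected.
H = nx.MultiDiGraph([(0, 1), (1, 2), (2, 0)])
assert nx.is_eulerian(H)

# eulerian_circuit yields the edges of a closed walk using every edge once,
# starting and ending at the source node.
circuit = list(nx.eulerian_circuit(H, source=0))
assert len(circuit) == 3
assert circuit[0][0] == 0 and circuit[-1][1] == 0
```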
Here is an example of this process on a simple graph. I suspect that the flow will not always be the back edges from the spanning tree and that the only reason that is the case here is the small number of vertices.
Finally, we take the eulerian circuit and shortcut it.
On the plus side, the shortcutting process is the same as in the Christofides algorithm, so the _shortcutting helper function already exists in the traveling salesman file.
This is really where it is critical that the triangle inequality holds so that the shortcutting cannot increase the cost of the circulation.
Let’s start with the function signature.
def asadpour_tsp
Input: A complete graph G with weight being the attribute key for the edge weights.
Output: A list of edges which form the approximate ATSP solution.
This is exactly what we’d expect, take a complete graph \(G\) satisfying the triangle inequality and return the edges in the approximate solution to the asymmetric traveling salesman problem.
Recall from my post Networkx Function Stubs that the primary traveling salesman function, traveling_salesman_problem, will ensure that we are given a complete graph that satisfies the triangle inequality by using all-pairs shortest path calculations, and will handle whether we are expected to return a true cycle or only a path.
The first step in the Asadpour algorithm is the Held Karp relaxation. I am planning on editing the flow of the algorithm here a bit. If the Held Karp relaxation finds an integer solution, then we know that is one of the optimal TSP routes so there is no point in continuing the algorithm: we can just return that as an optimal solution. However, if the Held Karp relaxation finds a fractional solution we will press on with the algorithm.
z_star = held_karp(G)
# test to see if z_star is a graph or dict
if type(z_star) is nx.DiGraph
    return z_star.edges
Once we have the Held Karp solution, we create the undirected support of z_star
for the next step of creating the exponential distribution of spanning trees.
z_support = nx.MultiGraph()
for u, v in z_star
    if (u, v) not in z_support.edges
        edge_weight = min(G[u][v][weight], G[v][u][weight])
        z_support.add_edge(u, v, weight=edge_weight)
gamma = spanning_tree_distribution(z_support, z_star)
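The support construction above can be made concrete. In this sketch the weights live on z_star itself, and the graph is a made-up three arc example rather than a real Held Karp solution:

```python
import networkx as nx

# Hypothetical Held Karp support: a few arcs with made-up weights
# (in the real algorithm only the arcs with z_e > 0 appear here).
z_star = nx.DiGraph()
z_star.add_edge(0, 1, weight=3)
z_star.add_edge(1, 0, weight=5)
z_star.add_edge(1, 2, weight=2)

z_support = nx.MultiGraph()
for u, v in z_star.edges():
    if (u, v) not in z_support.edges():
        # Keep the cheaper direction when both arcs are present.
        w_uv = z_star[u][v]["weight"]
        w_vu = z_star[v][u]["weight"] if z_star.has_edge(v, u) else w_uv
        z_support.add_edge(u, v, weight=min(w_uv, w_vu))

assert z_support.number_of_edges() == 2
assert z_support[0][1][0]["weight"] == 3  # min(3, 5) survives
```

Because the support is undirected, the containment check `(u, v) not in z_support.edges()` also matches the reversed arc, so each pair of opposite arcs collapses to one undirected edge.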
This completes steps 1 and 2 in the Asadpour overview at the top of this post. Next we sample \(2 \lceil \log n \rceil\) spanning trees.
for u, v in z_support.edges
    z_support[u][v][lambda] = exp(gamma[(u, v)])
for _ in range 1 to 2 ceil(log(n))
    sampled_tree = sample_spanning_tree(G)
    sampled_tree_weight = sampled_tree.size()
    if sampled_tree_weight < minimum_sampled_tree_weight
        minimum_sampled_tree = sampled_tree.copy()
        minimum_sampled_tree_weight = sampled_tree_weight
Now that we have the minimum sampled tree, we need to orient the edge directions to keep the cost equal to that minimum tree.
We can do this by iterating over the edges in minimum_sampled_tree
and checking the edge weights in the original graph \(G\).
Using \(G\) is required here because we may not have recorded the minimum direction when we created z_support.
t_star = nx.DiGraph
for u, v, d in minimum_sampled_tree.edges(data=weight)
    if d == G[u][v][weight]
        t_star.add_edge(u, v, weight=d)
    else
        t_star.add_edge(v, u, weight=d)
Next we create a mapping of nodes to node demands for the minimum cost flow problem which was discussed earlier in this post.
I think that using a dict is the best option as it can be passed into set_node_attributes
all at once before finding the minimum cost flow.
node_demands = {}
for n in t_star
    node_demands[n] = t_star.out_degree(n) - t_star.in_degree(n)
nx.set_node_attributes(G, node_demands, "demand")
flow_dict = nx.min_cost_flow(G)
Take the Eulerian circuit and shortcut it on the way out.
Here we can add the support of the flow directly to t_star
to simulate adding the two graphs together.
for u in flow_dict
    for v in flow_dict[u]
        if (u, v) not in t_star.edges and flow_dict[u][v] > 0
            t_star.add_edge(u, v)
eulerian_circuit = nx.eulerian_circuit(t_star)
return _shortcutting(eulerian_circuit)
That should be it.
Once the code for asadpour_tsp
is written it will need to be tested.
I’m not sure how I’m going to create the test cases yet, but I do plan on testing it using real world airline ticket prices, as that is my go-to example for the asymmetric traveling salesman problem.
A. Asadpour, M. X. Goemans, A. Madry, S. Oveis Gharan, and A. Saberi, An O(log n / log log n)-approximation algorithm for the asymmetric traveling salesman problem, Operations Research, 65 (2017), pp. 1043–1061.
D. Williamson, ORIE 633 Network Flows Lecture 11, 11 Oct 2007, https://people.orie.cornell.edu/dpw/orie633/LectureNotes/lecture11.pdf.
]]>The heavy lifting I did in the preliminary post certainly paid off here!
In just one day I was able to implement sample_spanning_tree
and its two helper functions.
This was a very easy function to implement.
It followed exactly from the pseudo code and was working with spanning_tree_distribution
before I started on sample_spanning_tree
.
This function was more difficult than I originally anticipated.
The code for the main body of the function only needed minor tweaks to work with the specifics of Python, such as shuffle being in place and returning None, and some details about how sets work. For example, I add edge \(e\) to \(U\) before calling prepare_graph on it, and then invert the if statement so that the else branch removes \(e\) from \(U\).
Those portions are functionally the same.
The issues I had with this function all stem back to contracting multiple nodes in a row and how that affects the graph.
As a side note, the contracted_edge
function in NetworkX is a wrapper for contracted_node
and the latter has a copy
keyword argument that is assumed to be True
by the former function.
It was a trivial change to extend this functionality to contracted_edge
but in the end I used contracted_node
so the whole thing is moot.
First, recall how edge contraction, or in this case node contraction, works. Two nodes are merged into one, which is connected by all the edges that connected the original two nodes. Edges between the two contracted nodes become self loops, but in this case I prevented the creation of self loops as directed by Kulkarni. If a node which is not contracted has edges to both of the contracted nodes, we insert a parallel edge between them. I struggled with this aspect of NetworkX’s graph classes in a past post titled The Entropy Distribution.
For NetworkX’s implementation, we would call nx.contracted_nodes(G, u, v), and u and v would always be merged into u, so v is the node which is no longer in the graph.
Now imagine that we have three edges to contract because they are all in \(U\) which look like the following.
If we process this from left to right, we first contract nodes 0 and 1. At this point, the edge \(\{1, 2\}\) no longer exists in \(G\), as node 1 itself has been removed. However, we would still need to contract the new \(\{0, 2\}\) edge which is equivalent to the old \(\{1, 2\}\) edge.
My first attempt to solve this was messy and didn’t work. I developed an if-elif chain over which endpoints of the contracting edge no longer existed in the graph and tried to use a dict comprehension to force a dict to always be up to date with which vertices were equivalent to each other.
Fortunately there was a better solution. This next bit of code I actually first used in my Graph Algorithms class from last semester. In particular it is the merge-find or disjoint set data structure from the components algorithm (code can be found here and more information about the data structure here).
Basically we create a mapping from a node to that node’s representative.
In this case, a node’s representative is the node still in \(G\) into which the input node has been merged through a series of contractions.
In the above example, once node 1 is merged into node 0, 0 would become node 1’s representative.
We search recursively through the merged_nodes
dict until we find a node which is not in the dict, meaning that it is still its own representative and therefore in the graph.
This will let us handle a representative node later being merged into another node.
Finally, we take advantage of path compression so that lookup times remain good as the number of entries in merged_nodes
grows.
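A minimal sketch of that representative lookup with path compression (the function name and dict layout here are mine, not the actual NetworkX code):

```python
def find_representative(merged_nodes, node):
    # Walk up the chain of merges until we reach a node with no entry,
    # i.e. a node that is still its own representative (still in G).
    root = node
    while root in merged_nodes:
        root = merged_nodes[root]
    # Path compression: point every node on the walk straight at the root
    # so that future lookups stay cheap as merged_nodes grows.
    while node in merged_nodes:
        parent = merged_nodes[node]
        merged_nodes[node] = root
        node = parent
    return root

# 3 was merged into 2, 2 into 1, and 1 into 0, so 0 represents them all.
merged_nodes = {1: 0, 2: 1, 3: 2}
assert find_representative(merged_nodes, 3) == 0
assert merged_nodes == {1: 0, 2: 0, 3: 0}  # chains are now compressed
```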
This worked well once I caught a bug where the prepare_graph
function tried to contract a node with itself.
However, the function was running and returning a result but it could have one or two more edges than needed which of course means it is not a tree.
I was testing on the symmetric fractional Held Karp graph by the way, so with six nodes it should have five edges per tree.
I seeded the random number generator for one of the seven edge results and started to debug! Recall that once we generate a uniform decimal between 0 and 1 we compare it to
\[ \lambda_e \times \frac{K_{G \backslash \{e\}}}{K_G} \]
where \(K\) is the result of Kirchhoff’s Theorem on the subscripted graph. One probability that caught my eye had the fractional component equal to 1. This means that adding \(e\) to the set of contracted edges had no effect on whether that edge should be included in the final spanning tree. Closer inspection revealed that the edge \(e\) in question already could not be picked for the spanning tree: since it did not exist in \(G\), it could not exist in \(G \backslash \{e\}\) either.
Imagine the following situation. We have three edges to contract but they form a cycle of length three.
If we contract \(\{0, 1\}\) and then \(\{0, 2\}\), what does that mean for \(\{1, 2\}\)? Well, \(\{1, 2\}\) would become a self loop on vertex 0, but we are deleting self loops, so it cannot exist. It has to have a probability of 0. Yet in the current implementation of the function, it would have a probability of \(\lambda_{\{1, 2\}}\). So, I have to check whether a representative edge exists for the edge we are considering in the current iteration of the main for loop.
The solution to this is to return the merge-find data structure along with the prepared graph for \(G\), and then check that an edge between the two representatives of the original edge’s endpoints is present. If so, use the Kirchhoff value as normal, but if not, make G_e_total_tree_weight equal to zero so that this edge cannot be picked.
Finally I was able to sample trees from G
consistently, but did they match the expected probabilities?
The first test I was working with sampled one tree and checked whether it was actually a tree. I first expanded it to sample 1000 trees and made sure that they were all trees. At this point I was confident that the function would always return a tree, but I still needed to check the tree distribution.
So, after a lot of difficulty writing the test itself to determine which of the 75 possible spanning trees I had sampled, I was ready to check the actual distribution. First, the test iterates over all the spanning trees, records the products of edge weights, and normalizes the data. (Remember that the actual probability is only proportional to the product of edge weights.) Then I sample 50,000 trees and record the actual frequency. Next, it calculates the percent error from the expected probability to the actual frequency. The sample size is so large because at 1000 trees the percent error was all over the place, but, as the Law of Large Numbers dictates, the larger sample shows the actual results converging to the expected results, so I do believe that the function is working.
That being said, seeing the percent error converge to be less than 15% for all 75 spanning trees is not a very rigorous test. I can either implement a formal test using the percent error or try to create a Chi squared test using scipy.
This morning I was able to get a Chi squared test working, and it was definitely the correct decision. I was able to reduce the sample size from 50,000 to 1200, which is nearly the minimum. In order to run a Chi squared test you need an expected frequency of at least 5 for all of the categories, so I had to find the number of samples that guarantees this for a tree with a probability of about 0.4%; that number was 1163, which I rounded to 1200.
I am testing at the 0.01 significance level, so this test may fail without reason 1% of the time, but it is still an overall good test of the distribution.
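A hypothetical version of that check with scipy.stats.chisquare might look like the following; the probabilities and counts here are stand-ins, not the real 75 tree distribution:

```python
import numpy as np
from scipy.stats import chisquare

# Stand-in spanning tree probabilities and made-up sampled counts.
expected_probs = np.array([0.5, 0.3, 0.2])
sample_size = 1200
observed = np.array([620, 350, 230])  # counts from a hypothetical sampler

# chisquare wants expected *frequencies* on the same scale as observed,
# so scale the probabilities up by the sample size.
_, p_value = chisquare(f_obs=observed, f_exp=expected_probs * sample_size)

# Reject the sampler only at the 0.01 significance level.
assert p_value > 0.01
```

The minimum expected count rule mentioned above applies to `f_exp`: every entry of `expected_probs * sample_size` should be at least 5.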
A. Asadpour, M. X. Goemans, A. Madry, S. Oveis Gharan, and A. Saberi, An O(log n / log log n)-approximation algorithm for the asymmetric traveling salesman problem, SODA ’10, Society for Industrial and Applied Mathematics, 2010, pp. 379–389, https://dl.acm.org/doi/abs/10.5555/1873601.1873633.
V. G. Kulkarni, Generating random combinatorial objects, Journal of algorithms, 11 (1990), pp. 185–207.
]]>Google Summer of Code 2020’s second evaluation is about to complete, and we are about to start the final coding phase. This post discusses the progress made in the last two weeks of the second coding period, from 13 July to 26 July 2020.
We have divided the work into two parts, as discussed in the previous blog. The first part is the generation of the baseline images, discussed below. The second part is the modification of the baseline images, which will be implemented in the last phase of Google Summer of Code 2020.
Now, we have started removing the use of the matplotlib_baseline_images package. After the changes proposed in the previous PR, the developer will have no baseline images on a fresh install of matplotlib, so they will need to generate the baseline images locally to get started with testing.
The images can be generated by the image comparison tests using the matplotlib_baseline_image_generation flag from the command line. Once these images are generated, they can be used as the baseline images for later comparisons. This is the main principle adopted.
We successfully created the matplotlib_baseline_image_generation flag at the beginning of the second evaluation, but the images were not created in the baseline_images directory inside the matplotlib and mpl_toolkits directories; instead, they were created in the result_images directory. So, we implemented this functionality. The baseline image generation step now creates the images directly in the lib/matplotlib/tests/baseline_images directory, using the python3 -mpytest lib/matplotlib --matplotlib_baseline_image_generation command. Later on, running the pytests with python3 -mpytest lib/matplotlib will start the image comparison.
Right now, the matplotlib_baseline_image_generation flag works for the matplotlib directory. We are trying to achieve the same functionality for the mpl_toolkits directory.
Once the generation of the baseline images for the mpl_toolkits directory is completed in the current PR, we will move on to the modification of the baseline images in the third coding phase. Modification of the baseline images will be divided into two sub-tasks, the addition of new baseline images and the deletion of old ones, both of which will be implemented in the last phase of GSoC.
Meetings take place Monday to Thursday at 11:00 pm IST via Zoom. Meeting notes are available on HackMD.
I am grateful to be part of such a great community. The project is really interesting and challenging :) Thanks to Thomas, Antony, and Hannah for helping me so far.
]]>In order to test the exponential distribution that I generate using spanning_tree_distribution
, I need to be able to sample a tree from the distribution.
The primary citation used in the Asadpour paper is Generating Random Combinatorial Objects by V. G. Kulkarni (1989).
While I was not able to find an online copy of this article, the Michigan Tech library did have a copy that I was able to read.
Kulkarni gave a general overview of the algorithm in Section 2, but Section 5 is titled “Random Spanning Trees” and starts on page 200. First, let’s check that the preliminaries in the Kulkarni paper on page 200 match the Asadpour algorithm.
Let \(G = (V, E)\) be an undirected network of \(M\) nodes and \(N\) arcs… Let \(\mathfrak{B}\) be the set of all spanning trees in \(G\). Let \(\alpha_i\) be the positive weight of arc \(i \in E\). Define the weight \(w(B)\) of a spanning tree \(B \in \mathfrak{B}\) as
\[w(B) = \prod_{i \in B} \alpha_i\]
Also define
\[n(G) = \sum_{B \in \mathfrak{B}} w(B)\]
In this section we describe an algorithm to generate \(B \in \mathfrak{B}\) so that
\[P\{B \text{ is generated}\} = \frac{w(B)}{n(G)}\]
Immediately we can see that \(\mathfrak{B}\) is the same as \(\mathcal{T}\) from the Asadpour paper, the set of all spanning trees. The weight of each edge is \(\alpha_i\) for Kulkarni and \(\lambda_e\) to Asadpour. As for the product of the weights of the graph being the probability, the Asadpour paper states on page 382
Given \(\lambda_e \geq 0\) for \(e \in E\), a \(\lambda\)-random tree \(T\) of \(G\) is a tree \(T\) chosen from the set of all spanning trees of \(G\) with probability proportional to \(\prod_{e \in T} \lambda_e\).
So this is not a concern. Finally, \(n(G)\) can be written as
\[\sum_{T \in \mathcal{T}} \prod_{e \in T} \lambda_e\]
which does appear several times throughout the Asadpour paper. Thus the preliminaries between the Kulkarni and Asadpour papers align.
The specialized version of the general algorithm which Kulkarni gives is Algorithm A8 on page 202.
\(U = \emptyset,\) \(V = E\)
Do \(i = 1\) to \(N\);
\(\qquad\)Let \(a = n(G(U, V))\)
\(\qquad\qquad a’ = n(G(U \cup \{i\}, V))\)
\(\qquad\)Generate \(Z \sim U[0, 1]\)
\(\qquad\)If \(Z \leq \alpha_i \times \left(a’ / a\right)\)
\(\qquad\qquad\)then \(U = U \cup \{i\}\),
\(\qquad\qquad\)else \(V = V - \{i\}\)
\(\qquad\)end.
Stop. \(U\) is the required spanning tree.
Now we have to understand this algorithm so we can create pseudo code for it.
First as a notational explanation, the statement “Generate \(Z \sim U[0, 1]\)” means picking a uniformly random variable over the interval \([0, 1]\) which is independent of all the random variables generated before it (See page 188 of Kulkarni for more information).
The built-in python module random
can be used here.
Looking at real-valued distributions, I believe that using random.uniform(0, 1)
is preferable to random.random()
since the latter does not have the probability of generating a ‘1’ and that is explicitly part of the interval discussed in the Kulkarni paper.
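A tiny sketch of generating \(Z\) this way (the seed is only there to make the draw reproducible):

```python
import random

rng = random.Random(42)  # seeded so the draw is reproducible
z = rng.uniform(0, 1)    # Z ~ U[0, 1], as in Kulkarni's algorithm
assert 0 <= z <= 1
```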
The other notational oddity would be statements like \(G(U, V)\), which in this case does not refer to a graph with vertex set \(U\) and edge set \(V\), as \(U\) and \(V\) are both subsets of the full edge set \(E\).
\(G(U, V)\) is defined in the Kulkarni paper on page 201 as
Let \(G(U, V)\) be a subgraph of \(G\) obtained by deleting arcs that are not in \(V\), and collapsing arcs that are in \(U\) (i.e., identifying the end nodes of arcs in \(U\)) and deleting all self-loops resulting from these deletions and collapsing.
This language seems a bit… clunky, especially for the edges in \(U\).
In this case, “collapsing arcs that are in \(U\)” would be contracting those edges without self loops.
Fortunately, this functionality is a part of NetworkX using networkx.algorithms.minors.contracted_edge
with the self_loops
keyword argument set to False
.
As for the edges in \(E - V\), this can be easily accomplished by using networkx.MultiGraph.remove_edges_from
.
Once we have generated \(G(U, V)\), we need to find \(n(G(U, V))\).
This can be done with something we are already familiar with: Kirchhoff’s Tree Matrix Theorem.
All we need to do is create the Laplacian matrix and then find the determinant of the first cofactor.
This code will probably be taken directly from the spanning_tree_distribution
function.
Actually, this is a place to create a broader helper function called krichhoffs, which will take a graph and return the total weight of its spanning trees; it would then be used as part of q in spanning_tree_distribution and in sample_spanning_tree.
From here we compare \(Z\) to \(\alpha_i \left(a’ / a\right)\) to see if that edge is added to the graph or discarded. Understanding the process of the algorithm gives context to the meaning of \(U\) and \(V\). \(U\) is the set of edges which we have decided to include in the spanning tree, while \(V\) is the set of edges yet to be considered for \(U\) (roughly speaking).
Now there is still a bit of ambiguity in the algorithm that Kulkarni gives, mainly about \(i\). In the loop condition, \(i\) is an integer from 1 to \(N\), the number of arcs in the graph, but it is later added to \(U\), so it has to be an edge. Referencing the Asadpour paper, it starts its description of sampling the \(\lambda\)-random tree on page 383 by saying “The idea is to order the edges \(e_1, \dots, e_m\) of \(G\) arbitrarily and process them one by one”. So I believe that the edge interpretation is correct and that the integer notation used in Kulkarni assumes a mapping of the edges onto \(\{1, 2, \dots, N\}\).
Time to write some pseudo code! Starting with the function signature
def sample_spanning_tree
Input: A multigraph G whose edges contain a lambda value stored at lambda_key
Output: A new graph which is a spanning tree of G
Next up is a bit of initialization
U = set()
V = set(G.edges)
shuffled_edges = shuffle(G.edges)
Now the definitions of U
and V
come directly from Algorithm A8, but shuffled_edges
is new.
My thoughts are that this will be what we use for \(i\).
We shuffle the edges of the graph and then in the loop we iterate over the edges within shuffled_edges
.
Next we have the loop.
for edge e in shuffled_edges
    G_total_tree_weight = kirchhoffs(prepare_graph(G, U, V))
    G_i_total_tree_weight = kirchhoffs(prepare_graph(G, U.add(e), V))
    z = uniform(0, 1)
    if z <= e[lambda_key] * G_i_total_tree_weight / G_total_tree_weight
        U = U.add(e)
        if len(U) == G.number_of_nodes - 1
            # Spanning tree complete, no need to continue to consider edges.
            spanning_tree = nx.Graph
            spanning_tree.add_edges_from(U)
            return spanning_tree
    else
        V = V.remove(e)
The main loop body does use two other functions which are not part of the standard NetworkX libraries, krichhoffs
and prepare_graph
.
As I mentioned before, krichhoffs will apply Kirchhoff’s Theorem to the graph.
Pseudo code for this is below and strongly based off of the existing code in q
of spanning_tree_distribution
which will be updated to use this new helper.
def krichhoffs
Input: A multigraph G and weight key, weight
Output: The total weight of the graph's spanning trees
G_laplacian = laplacian_matrix(G, weight=weight)
G_laplacian = G_laplacian.delete(0, 0)
G_laplacian = G_laplacian.delete(0, 1)
return det(G_laplacian)
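A runnable version of this helper might look like the following sketch, using numpy for the determinant; the rounding is my own guard against floating point error:

```python
import networkx as nx
import numpy as np

def krichhoffs(G, weight=None):
    # Kirchhoff's theorem: the total (weighted) spanning tree count is the
    # determinant of any cofactor of the Laplacian; drop row 0 and column 0.
    laplacian = nx.laplacian_matrix(G, weight=weight).toarray()
    return round(np.linalg.det(laplacian[1:, 1:]))

# Sanity check: K4 has 4^{4-2} = 16 spanning trees by Cayley's formula.
assert krichhoffs(nx.complete_graph(4)) == 16
```

With `weight` set to an edge attribute key, the same function returns the total \(\lambda\)-weight over all spanning trees instead of a plain count.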
The process for the other helper, prepare_graph
is also given.
def prepare_graph
Input: A graph G, set of contracted edges U and edges which are not removed V
Output: A subgraph of G in which all vertices in U are contracted and edges not in V are
removed
result = G.copy
edges_to_remove = set(result.edges).difference(V)
result.remove_edges_from(edges_to_remove)
for edge e in U
    result = nx.contracted_edge(result, e, self_loops=False)
return result
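Here is a self-contained sketch of what prepare_graph could look like, under the assumption that \(U\) and \(V\) hold undirected edge tuples; the representative tracking is my own addition so that successive contractions do not reference nodes that have already been removed:

```python
import networkx as nx

def prepare_graph(G, U, V):
    # Kulkarni's G(U, V): delete edges not in V, contract edges in U, and
    # drop self loops.  A MultiGraph keeps the parallel edges that
    # contraction can create.  U and V are assumed to hold edge tuples.
    result = nx.MultiGraph(
        (u, v) for u, v in G.edges() if (u, v) in V or (v, u) in V
    )
    merged = {}  # node -> node it was merged into

    def find(n):
        while n in merged:
            n = merged[n]
        return n

    for u, v in U:
        ru, rv = find(u), find(v)
        if ru != rv:  # skip edges whose endpoints are already merged
            result = nx.contracted_nodes(result, ru, rv, self_loops=False)
            merged[rv] = ru
    return result

# Contracting one edge of a 4-cycle leaves a triangle (as a multigraph).
H = prepare_graph(nx.cycle_graph(4), U={(0, 1)}, V=set(nx.cycle_graph(4).edges()))
assert sorted(H.nodes()) == [0, 2, 3]
assert H.number_of_edges() == 3
```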
There is one other change to the NetworkX API that I would like to make.
At the moment, networkx.algorithms.minors.contracted_edge
is programmed to always return a copy of a graph.
Since I need to be contracting multiple edges at once, it would make a lot more sense to do the contraction in place.
I would like to add an optional keyword argument to contracted_edge
called copy
which will default to True
so that the overall functionality will not change but I will be able to perform in place contractions.
The most obvious next step is to implement the functions that I have laid out in the pseudo code, but testing is still a concern. My best bet is to sample, say, 1000 trees and check that the probability of each tree is equal to the product of all of the lambdas on its edges.
That actually just caused me to think of a new test of spanning_tree_distribution
.
If I generate the distribution and then iterate over all of the spanning trees with a SpanningTreeIterator
I can sum the total probability of each tree being sampled, and if that is not 1 (or very close to it), then I do not have a valid distribution over the spanning trees.
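That check might look something like the following sketch on a toy triangle with made-up \(\lambda\) values, using Kirchhoff’s theorem for the normalizing constant:

```python
import math
import networkx as nx
import numpy as np

# Toy triangle with made-up lambda values on the edges.
G = nx.Graph()
G.add_edge(0, 1, lam=1.5)
G.add_edge(1, 2, lam=0.5)
G.add_edge(0, 2, lam=2.0)

# Normalizing constant: total lambda-weight of all spanning trees, from
# the determinant of a cofactor of the weighted Laplacian.
laplacian = nx.laplacian_matrix(G, weight="lam").toarray()
n_G = np.linalg.det(laplacian[1:, 1:])

# Sum each tree's probability; a valid distribution must total 1.
total = sum(
    math.prod(G[u][v]["lam"] for u, v in tree.edges()) / n_G
    for tree in nx.SpanningTreeIterator(G, weight="lam")
)
assert abs(total - 1) < 1e-9
```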
A. Asadpour, M. X. Goemans, A. Madry, S. Oveis Gharan, and A. Saberi, An O(log n / log log n)-approximation algorithm for the asymmetric traveling salesman problem, SODA ’10, Society for Industrial and Applied Mathematics, 2010, pp. 379–389, https://dl.acm.org/doi/abs/10.5555/1873601.1873633.
V. G. Kulkarni, Generating random combinatorial objects, Journal of algorithms, 11 (1990), pp. 185–207.
]]>Implementing spanning_tree_distribution
proved to have some NetworkX difficulties and one algorithmic difficulty.
Recall that the algorithm for creating the distribution is given in the Asadpour paper as
- Set \(\gamma = \vec{0}\).
- While there exists an edge \(e\) with \(q_e(\gamma) > (1 + \epsilon) z_e\):
- Compute \(\delta\) such that if we define \(\gamma’\) as \(\gamma_e’ = \gamma_e - \delta\), and \(\gamma_f’ = \gamma_f\) for all \(f \in E \backslash \{e\}\), then \(q_e(\gamma’) = (1 + \epsilon/2)z_e\).
- Set \(\gamma \leftarrow \gamma’\).
- Output \(\tilde{\gamma} := \gamma\).
Now, the procedure that I laid out in my last blog titled Entropy Distribution Setup worked well for the while loop portion.
All of my difficulties with the NetworkX API happened in the q
inner function.
After I programmed the function, I of course needed to run it and at first I was just printing the gamma
dict out so that I could see what the values for each edge were.
My first test uses the symmetric fractional Held Karp solution and to my surprise, every value of \(\gamma\) returned as 0.
I didn’t think that this was intended behavior because if it was, there would be no reason to include this step in the overall Asadpour algorithm, so I started to dig around the code with PyCharm’s debugger.
The results were, as I suspected, not correct.
I was running Kirchhoff’s tree matrix theorem on the original graph, so the returned probabilities were an order of magnitude smaller than the values of \(z_e\) that I was comparing them to.
Additionally, all of the values were the same so I knew that this was a problem and not that the first edge I checked had unusually small probabilities.
So, I returned to the Asadpour paper and started to ask myself questions like
It was pretty easy to dismiss the first question: if normalization were required, it would be mentioned in the Asadpour paper, and without a description of how to normalize, the chances of me finding the “correct” way to do so would be next to impossible. The second question did take some digging. The sections of the Asadpour paper which talk about using Kirchhoff’s theorem all discuss it using the graph \(G\), which is why I was originally using all edges in \(G\) rather than the edges in \(E\). A few hints pointed to the fact that I needed to only consider the edges in \(E\), the first being the algorithm overview, which states
Find weights \({\tilde{\gamma}}_{e \in E}\)
In particular the \(e \in E\) statement says that I do not need to consider the edges which are not in \(E\). Secondly, Lemma 7.2 starts by stating
Let \(G = (V, E)\) be a graph with weights \(\gamma_e\) for \(e \in E\)
Based on the current state of the function and these hints, I decided to reduce the input graph to spanning_tree_distribution
to only edges with \(z_e > 0\).
Running the test on the symmetric fractional solution now, it still returned \(\gamma = \vec{0}\) but the probabilities it was comparing were much closer during that first iteration.
Due to the fact that I do not have an example graph and distribution to work with, this could be the correct answer, but the fact that every value was the same still confused me.
My next step was to determine the actual probability of an edge being in the spanning trees for the first iteration when \(\gamma = \vec{0}\).
This can be easily done with my SpanningTreeIterator
and exploits the fact that \(\gamma = \vec{0}\) is equivalent to \(\lambda_e = 1\) for all \(e \in E\), so we can just iterate over the spanning trees and count how often each edge appears.
That script is listed below
import networkx as nx
edges = [
(0, 1),
(0, 2),
(0, 5),
(1, 2),
(1, 4),
(2, 3),
(3, 4),
(3, 5),
(4, 5),
]
G = nx.from_edgelist(edges, create_using=nx.Graph)
edge_frequency = {}
sp_count = 0
for tree in nx.SpanningTreeIterator(G):
sp_count += 1
for e in tree.edges:
if e in edge_frequency:
edge_frequency[e] += 1
else:
edge_frequency[e] = 1
for u, v in edge_frequency:
print(
f"({u}, {v}): {edge_frequency[(u, v)]} / {sp_count} = {edge_frequency[(u, v)] / sp_count}"
)
This output revealed that the probabilities returned by q
should vary from edge to edge and that the correct solution for \(\gamma\) is certainly not \(\vec{0}\).
(networkx-dev) mjs@mjs-ubuntu:~/Workspace$ python3 spanning_tree_frequency.py
(0, 1): 40 / 75 = 0.5333333333333333
(0, 2): 40 / 75 = 0.5333333333333333
(0, 5): 45 / 75 = 0.6
(1, 4): 45 / 75 = 0.6
(2, 3): 45 / 75 = 0.6
(1, 2): 40 / 75 = 0.5333333333333333
(5, 3): 40 / 75 = 0.5333333333333333
(5, 4): 40 / 75 = 0.5333333333333333
(4, 3): 40 / 75 = 0.5333333333333333
Let’s focus on that first edge, \((0, 1)\). My brute force script says that it appears in 40 of the 75 spanning trees of the below graph where each edge is labelled with its \(z_e\) value.
Yet q
was saying that the edge was in 24 of 75 spanning trees.
Since the denominator was correct, I decided to focus on the numerator which is the number of spanning trees in \(G\ \backslash\ \{(0, 1)\}\).
That graph would be the following.
An argument can be made that this graph should have a self-loop on vertex 0, but this does not affect the Laplacian matrix in any way so it is omitted here. Basically, the \([0, 0]\) entry of the adjacency matrix would be 1 and the degree of vertex 0 would be 5 and \(5 - 1 = 4\) which is what the entry would be without the self loop.
What was happening was that I was giving nx.contracted_edge
a graph of the Graph class (not a directed graph since \(E\) is undirected) and was getting a graph of the Graph class back.
The Graph class does not support multiple edges between two nodes so the returned graph only had one edge between node 0 and node 2 which was affecting the overall Laplacian matrix and thus the number of spanning trees.
Switching from a Graph to a MultiGraph did the trick, but this subtle change should be mentioned in the NetworkX documentation for the function, linked here.
I definitely believed that if I contracted an edge, the output should automatically include both of the \((0, 2)\) edges.
An argument can be made for changing the default behavior to match this, but at the very least the documentation should explain this problem.
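The difference is easy to demonstrate on a triangle, where contracting an edge should leave two parallel edges:

```python
import networkx as nx

triangle = nx.Graph([(0, 1), (0, 2), (1, 2)])

# On a Graph, the two edges into the merged node collapse into one ...
g = nx.contracted_edge(triangle, (0, 1), self_loops=False)
assert g.number_of_edges() == 1

# ... but on a MultiGraph both parallel edges survive, which is what the
# Laplacian (and hence Kirchhoff's theorem) needs to see.
mg = nx.contracted_edge(nx.MultiGraph(triangle), (0, 1), self_loops=False)
assert mg.number_of_edges() == 2
```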
Now the q
function was returning the correct \(40 / 75\) answer for \((0, 1)\) and correct values for the rest of the edges so long as all of the \(\gamma_e\)’s were 0.
But the test was erroring out with a ValueError
when I tried to compute \(\delta\).
q
was returning a probability of an edge being in a sampled spanning tree of more than 1, which is clearly impossible but also caused the denominator of \(\delta\) to become negative and violate the domain of the natural log.
During my investigation of this problem, I noticed that after computing \(\delta\) and subtracting it from \(\gamma_e\), it did not have the desired effect on \(q_e\). Recall that we define \(\delta\) so that \(\gamma_e - \delta\) yields a \(q_e\) of \((1 + \epsilon / 2) z_e\). In other words, the effect of \(\delta\) is to decrease an edge probability which is too high, but in my current implementation it was having the opposite effect. The value of \(q_{(0, 1)}\) was going from 0.5333 to just over 0.6. If I let this trend continue, the program would eventually hit one of those cases where \(q_e \geq 1\) and crash the program.
Here I can use edge \((0, 1)\) as an example to show the problem. The original Laplacian matrix for \(G\) with \(\gamma = \vec{0}\) is
\[ \begin{bmatrix} 3 & -1 & -1 & 0 & 0 & -1 \\\ -1 & 3 & -1 & 0 & -1 & 0 \\\ -1 & -1 & 3 & -1 & 0 & 0 \\\ 0 & 0 & -1 & 3 & -1 & -1 \\\ 0 & -1 & 0 & -1 & 3 & -1 \\\ -1 & 0 & 0 & -1 & -1 & 3 \end{bmatrix} \]
and the Laplacian for \(G\ \backslash\ \{(0, 1)\}\) is
\[ \begin{bmatrix} 4 & -2 & -1 & -1 & 0 \\\ -2 & 3 & 0 & 0 & -1 \\\ -1 & 0 & 3 & -1 & -1 \\\ -1 & 0 & -1 & 3 & -1 \\\ 0 & -1 & -1 & -1 & 3 \end{bmatrix} \]
The determinant of the first cofactor is how we get the \(40 / 75\). Now consider the Laplacian matrices after we updated \(\gamma_{(0, 1)}\) for the first time. The one for \(G\) becomes
\[ \begin{bmatrix} 2.74 & -0.74 & -1 & 0 & 0 & -1 \\\ -0.74 & 2.74 & -1 & 0 & -1 & 0 \\\ -1 & -1 & 3 & -1 & 0 & 0 \\\ 0 & 0 & -1 & 3 & -1 & -1 \\\ 0 & -1 & 0 & -1 & 3 & -1 \\\ -1 & 0 & 0 & -1 & -1 & 3 \end{bmatrix} \]
and its first cofactor determinant is reduced from 75 to 61.6. What do we expect the value of the matrix for \(G\ \backslash\ \{(0, 1)\}\) to be? Well, we know that the final value of \(q_e\) needs to be \((1 + \epsilon / 2) z_e\) or \(1.1 \times 0.41\overline{6}\) which is \(0.458\overline{3}\). So
\[ \begin{array}{r c l} \displaystyle\frac{x}{61.6} &=& 0.458\overline{3} \\\ x &=& 28.2\overline{3} \end{array} \]
and the value of the first cofactor determinant should be \(28.2\overline{3}\). However, the contracted Laplacian for \((0, 1)\) after the value of \(\gamma_e\) is updated is
\[ \begin{bmatrix} 4 & -2 & -1 & -1 & 0 \\\ -2 & 3 & 0 & 0 & -1 \\\ -1 & 0 & 3 & -1 & -1 \\\ -1 & 0 & -1 & 3 & -1 \\\ 0 & -1 & -1 & -1 & 3 \end{bmatrix} \]
the same as before! The only edge with a different \(\gamma_e\) than before is \((0, 1)\), but since it is the contracted edge it is no longer in the graph any more and thus cannot affect the value of the first cofactor’s determinant!
But if we change the algorithm to add \(\delta\) to \(\gamma_e\) rather than subtract it, the determinant of the first cofactor for \(G\ \backslash\ \{e\}\)’s Laplacian will not change, but the determinant of the first cofactor for \(G\)’s Laplacian will increase. This reduces the overall probability of picking \(e\) in a spanning tree. And, if we happen to use the same formula for \(\delta\) as before for our example of \((0, 1)\), then \(q_{(0, 1)}\) becomes \(0.449307\). Recall our target value of \(0.458\overline{3}\). This answer has a \(-1.96\%\) error.
\[ \begin{array}{r c l} \text{error} &=& \frac{0.449307 - 0.458333}{0.458333} \times 100 \\\ &=& \frac{-0.009026}{0.458333} \times 100 \\\ &=& -0.019693 \times 100 \\\ &=& -1.9693\% \end{array} \]
Also, the test now completes without error.
Further research and discussion with my mentors revealed just how flawed my original analysis was. In the next step, sampling the spanning trees, adding anything to \(\gamma\) would directly increase the probability that the edge would be sampled. That being said, the original problem that I found was still an issue.
Going back to the notion that we want a graph whose spanning trees map one-to-one onto the spanning trees of the original graph which contain the desired edge, this is still the key idea which lets us use Kirchhoff’s matrix tree theorem. And, contracting the edge will still give a graph in which every spanning tree can be mapped to a corresponding spanning tree which includes \(e\). However, the weights of those spanning trees in \(G \backslash \{e\}\) do not quite map between the two graphs.
Recall that we are dealing with a multiplicative weight function, so the final weight of a tree is the product of all the \(\lambda\)’s on its edges.
\[ c(T) = \prod_{f \in T} \lambda_f \]
The above statement can be expanded into
\[ c(T) = \lambda_{e_1} \times \lambda_{e_2} \times \dots \times \lambda_{e_{n-1}} \]
with some arbitrary ordering \(e_1, e_2, \dots, e_{n-1}\) of the edges of \(T\). Because the ordering of the edges is arbitrary and due to the associative property of multiplication, we can assume without loss of generality that the desired edge \(e\) is the last one in the sequence.
Any spanning tree in \(G \backslash \{e\}\) cannot include that last \(\lambda\) because that edge does not exist in the graph. Therefore, in order to convert the weight of a tree in \(G \backslash \{e\}\) to the weight of the corresponding tree in \(G\), we need to multiply \(\lambda_e\) back into the weight of the contracted tree. So, we can now state that
\[ c(T \in \mathcal{T} : T \ni e) = \lambda_e \prod_{f \in T'} \lambda_f\ \forall\ T' \in G \backslash \{e\} \]
or that for every tree \(T'\) in \(G \backslash \{e\}\), the cost of the corresponding tree in \(G\) is the product of \(T'\)’s edge \(\lambda\)’s times the weight of the desired edge. Now recall that \(q_e(\gamma)\) is
\[ \frac{\sum_{T \ni e} \exp(\gamma(T))}{\sum_{T \in \mathcal{T}} \exp(\gamma(T))} \]
In particular we are dealing with the numerator of the above fraction and using \(\lambda_e = \exp(\gamma_e)\) we can rewrite it as
\[ \sum_{T \ni e} \exp(\gamma(T)) = \sum_{T \ni e} \prod_{e \in T} \lambda_e \]
Since we now know that we are missing the \(\lambda_e\) term, we can add it into the expression.
\[ \sum_{T \ni e} \lambda_e \times \prod_{f \in T, f \not= e} \lambda_f \]
Using the rules of summation, we can pull the \(\lambda_e\) factor out of the summation to get
\[ \lambda_e \times \sum_{T \ni e} \prod_{f \in T, f \not= e} \lambda_f \]
And since applying Kirchhoff’s theorem to \(G \backslash \{e\}\) will yield everything except the factor of \(\lambda_e\), we can just multiply it back in manually.
This would let the pseudocode for q become
def q
    input: e, the edge of interest
    # Create the Laplacian matrices
    write lambda = exp(gamma) into the edges of G
    G_laplace = laplacian(G, lambda)
    G_e = nx.contracted_edge(G, e)
    G_e_laplace = laplacian(G_e, lambda)
    # Delete a row and column from each matrix to make a cofactor matrix
    G_laplace.delete((0, 0))
    G_e_laplace.delete((0, 0))
    # Calculate the determinant of the cofactor matrices
    det_G_laplace = G_laplace.det
    det_G_e_laplace = G_e_laplace.det
    # return q_e
    return lambda_e * det_G_e_laplace / det_G_laplace
Making this small change to q worked very well.
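For reference, the corrected q can be written as runnable Python (a sketch of the idea, not the actual NetworkX implementation; it assumes NumPy/SciPy are available and that gamma is a dict keyed by edge tuples):

```python
import numpy as np
import networkx as nx

def q(G, e, gamma):
    # lambda_e = exp(gamma_e), written onto the edges; gamma defaults to 0
    for u, v in G.edges():
        G[u][v]["lam"] = np.exp(gamma.get((u, v), gamma.get((v, u), 0.0)))
    # Laplacian of G and of G with e contracted; a MultiGraph keeps the
    # parallel edges that contraction creates
    L = nx.laplacian_matrix(G, weight="lam").toarray()
    G_e = nx.contracted_edge(nx.MultiGraph(G), e, self_loops=False)
    L_e = nx.laplacian_matrix(G_e, weight="lam").toarray()
    # First cofactors: delete row and column 0, then take determinants
    det = np.linalg.det(L[1:, 1:])
    det_e = np.linalg.det(L_e[1:, 1:])
    # Multiply the contracted edge's lambda back in
    lam_e = np.exp(gamma.get(e, gamma.get((e[1], e[0]), 0.0)))
    return lam_e * det_e / det

# A 4-cycle has 4 spanning trees, 3 of which contain any given edge
print(q(nx.cycle_graph(4), (0, 1), {}))  # -> 0.75 (up to floating point)
```

The nx.MultiGraph conversion before contracting is exactly the fix described above, and the final lam_e factor is the multiplied-back \(\lambda_e\).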
I was able to change back to subtracting \(\delta\) as the Asadpour paper does, and even added a check to the code so that every time we update a value in \(\gamma\) we know that \(\delta\) has had the correct effect.
# Check that delta had the desired effect
new_q_e = q(e)
desired_q_e = (1 + EPSILON / 2) * z_e
if round(new_q_e, 8) != round(desired_q_e, 8):
    raise Exception
And the test passes without fail!
I technically do not know if this distribution is correct until I can start to sample from it. I have turned the test I have been working with into a proper unit test, but since my oracle is the program itself, the only way it can fail is if I change the function’s behavior without knowing it.
So I must press onwards to write sample_spanning_tree and get a better test for both of those functions.
As for the tests of spanning_tree_distribution, I would of course like to add more test cases.
However, if the Held Karp relaxation returns a cycle as an answer, then there will be \(n - 1\) path spanning trees, and the notion of creating this distribution is moot in the first place, as we have already found a solution to the ATSP.
I really need more truly fractional Held Karp solutions to expand the test of these next two functions.
A. Asadpour, M. X. Goemans, A. Madry, S. Oveis Gharan, and A. Saberi, An O(log n / log log n)-approximation algorithm for the asymmetric traveling salesman problem, Operations Research, 65 (2017), pp. 1043–1061.
Cellular automata are discrete models, typically on a grid, which evolve in time. Each grid cell has a finite state, such as 0 or 1, which is updated based on a certain set of rules. A specific cell uses information from the surrounding cells, called its neighborhood, to determine what changes should be made. In general, cellular automata can be defined in any number of dimensions. A famous two-dimensional example is Conway’s Game of Life, in which cells “live” and “die”, sometimes producing beautiful patterns.
In this post we will be looking at a one-dimensional example, known as an elementary cellular automaton, popularized by Stephen Wolfram in the 1980s.
Imagine a row of cells, arranged side by side, each of which is colored black or white. We label black cells 1 and white cells 0, resulting in an array of bits. As an example, let’s consider a random array of 20 bits.
import numpy as np
rng = np.random.RandomState(42)
data = rng.randint(0, 2, 20)
print(data)
[0 1 0 0 0 1 0 0 0 1 0 0 0 0 1 0 1 1 1 0]
To update the state of our cellular automaton we will need to define a set of rules. A given cell \(C\) only knows about the state of its left and right neighbors, labeled \(L\) and \(R\) respectively. We can define a function, or rule, \(f(L, C, R)\), which maps the cell state to either 0 or 1.
Since our input cells are binary values there are \(2^3=8\) possible inputs into the function.
for i in range(8):
    print(np.binary_repr(i, 3))
000
001
010
011
100
101
110
111
For each input triplet, we can assign 0 or 1 to the output. The output of \(f\) is the value which will replace the current cell \(C\) in the next time step. In total there are \(2^{2^3} = 2^8 = 256\) possible rules for updating a cell. Stephen Wolfram introduced a naming convention, now known as the Wolfram Code, for the update rules in which each rule is represented by an 8 bit binary number.
For example “Rule 30” could be constructed by first converting to binary and then building an array for each bit
rule_number = 30
rule_string = np.binary_repr(rule_number, 8)
rule = np.array([int(bit) for bit in rule_string])
print(rule)
[0 0 0 1 1 1 1 0]
By convention the Wolfram code associates the leading bit with ‘111’ and the final bit with ‘000’. For rule 30 the relationship between the input, rule index and output is as follows:
for i in range(8):
    triplet = np.binary_repr(i, 3)
    print(f"input:{triplet}, index:{7-i}, output {rule[7-i]}")
input:000, index:7, output 0
input:001, index:6, output 1
input:010, index:5, output 1
input:011, index:4, output 1
input:100, index:3, output 1
input:101, index:2, output 0
input:110, index:1, output 0
input:111, index:0, output 0
We can define a function which maps the input cell information to the associated rule index. Essentially we are converting the binary input to decimal and adjusting the index range.
def rule_index(triplet):
    L, C, R = triplet
    index = 7 - (4 * L + 2 * C + R)
    return int(index)
Now we can take in any input and look up the output based on our rule, for example:
rule[rule_index((1, 0, 1))]
0
Finally, we can use NumPy to create a data structure containing all the triplets for our state array and apply the function across the appropriate axis to determine our new state.
all_triplets = np.stack([np.roll(data, 1), data, np.roll(data, -1)])
new_data = rule[np.apply_along_axis(rule_index, 0, all_triplets)]
print(new_data)
[1 1 1 0 1 1 1 0 1 1 1 0 0 1 1 0 1 0 0 1]
That is the process for a single update of our cellular automaton.
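Since the rule index is just an offset of each triplet's base-2 value, the apply_along_axis step can also be done with pure array arithmetic, avoiding the Python-level function calls (a sketch, reusing data and rule as built above):

```python
import numpy as np

rng = np.random.RandomState(42)
data = rng.randint(0, 2, 20)
rule = np.array([int(bit) for bit in np.binary_repr(30, 8)])

# 7 - (4L + 2C + R) for every cell at once, no Python-level loop
idx = 7 - (4 * np.roll(data, 1) + 2 * data + np.roll(data, -1))
new_data = rule[idx]
print(new_data)
```

This produces the same array as the apply_along_axis version above.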
To do many updates and record the state over time, we will create a function.
def CA_run(initial_state, n_steps, rule_number):
    rule_string = np.binary_repr(rule_number, 8)
    rule = np.array([int(bit) for bit in rule_string])

    m_cells = len(initial_state)
    CA_run = np.zeros((n_steps, m_cells))
    CA_run[0, :] = initial_state

    for step in range(1, n_steps):
        all_triplets = np.stack(
            [
                np.roll(CA_run[step - 1, :], 1),
                CA_run[step - 1, :],
                np.roll(CA_run[step - 1, :], -1),
            ]
        )
        CA_run[step, :] = rule[np.apply_along_axis(rule_index, 0, all_triplets)]

    return CA_run
initial = np.array([0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 1, 0])
data = CA_run(initial, 10, 30)
print(data)
[[0. 1. 0. 0. 0. 1. 0. 0. 0. 1. 0. 0. 0. 0. 1. 0. 1. 1. 1. 0.]
[1. 1. 1. 0. 1. 1. 1. 0. 1. 1. 1. 0. 0. 1. 1. 0. 1. 0. 0. 1.]
[0. 0. 0. 0. 1. 0. 0. 0. 1. 0. 0. 1. 1. 1. 0. 0. 1. 1. 1. 1.]
[1. 0. 0. 1. 1. 1. 0. 1. 1. 1. 1. 1. 0. 0. 1. 1. 1. 0. 0. 0.]
[1. 1. 1. 1. 0. 0. 0. 1. 0. 0. 0. 0. 1. 1. 1. 0. 0. 1. 0. 1.]
[0. 0. 0. 0. 1. 0. 1. 1. 1. 0. 0. 1. 1. 0. 0. 1. 1. 1. 0. 1.]
[1. 0. 0. 1. 1. 0. 1. 0. 0. 1. 1. 1. 0. 1. 1. 1. 0. 0. 0. 1.]
[0. 1. 1. 1. 0. 0. 1. 1. 1. 1. 0. 0. 0. 1. 0. 0. 1. 0. 1. 1.]
[0. 1. 0. 0. 1. 1. 1. 0. 0. 0. 1. 0. 1. 1. 1. 1. 1. 0. 1. 0.]
[1. 1. 1. 1. 1. 0. 0. 1. 0. 1. 1. 0. 1. 0. 0. 0. 0. 0. 1. 1.]]
For larger simulations, interesting patterns start to emerge. To visualize our simulation results we will use the ax.matshow function.
import matplotlib.pyplot as plt
plt.rcParams["image.cmap"] = "binary"
rng = np.random.RandomState(0)
data = CA_run(rng.randint(0, 2, 300), 150, 30)
fig, ax = plt.subplots(figsize=(16, 9))
ax.matshow(data)
ax.axis(False)
With the code set up to produce the simulation, we can now start to explore the properties of these different rules. Wolfram separated the rules into four classes which are outlined below.
def plot_CA_class(rule_list, class_label):
    rng = np.random.RandomState(seed=0)
    fig, axs = plt.subplots(
        1, len(rule_list), figsize=(10, 3.5), constrained_layout=True
    )
    initial = rng.randint(0, 2, 100)
    for i, ax in enumerate(axs.ravel()):
        data = CA_run(initial, 100, rule_list[i])
        ax.set_title(f"Rule {rule_list[i]}")
        ax.matshow(data)
        ax.axis(False)
    fig.suptitle(class_label, fontsize=16)
    return fig, ax
Cellular automata which rapidly converge to a uniform state
_ = plot_CA_class([4, 32, 172], "Class One")
Cellular automata which rapidly converge to a repetitive or stable state
_ = plot_CA_class([50, 108, 173], "Class Two")
Cellular automata which appear to remain in a random state
_ = plot_CA_class([60, 106, 150], "Class Three")
Cellular automata which form areas of repetitive or stable states, but also form structures that interact with each other in complicated ways.
_ = plot_CA_class([54, 62, 110], "Class Four")
Amazingly, the interacting structures which emerge from rule 110 have been shown to be capable of universal computation.
In all the examples above a random initial state was used, but another interesting case is when a single 1 is initialized with all other values set to zero.
initial = np.zeros(300)
initial[300 // 2] = 1
data = CA_run(initial, 150, 30)
fig, ax = plt.subplots(figsize=(10, 5))
ax.matshow(data)
ax.axis(False)
For certain rules, the emergent structures interact in chaotic and interesting ways.
I hope you enjoyed this brief look into the world of elementary cellular automata, and are inspired to make some pretty pictures of your own.
Finally moving on from the Held-Karp relaxation, we arrive at the second step of the Asadpour asymmetric traveling salesman problem algorithm. Referencing Algorithm 1 from the Asadpour paper, we are now on step two.
Algorithm 1 An \(O(\log n / \log \log n)\)-approximation algorithm for the ATSP
Input: A set \(V\) consisting of \(n\) points and a cost function \(c\ :\ V \times V \rightarrow \mathbb{R}^+\) satisfying the triangle inequality.
Output: \(O(\log n / \log \log n)\)-approximation of the asymmetric traveling salesman problem instance described by \(V\) and \(c\).
Solve the Held-Karp LP relaxation of the ATSP instance to get an optimum extreme point solution \(x^*\). Define \(z^*\) as in (5), making it a symmetrized and scaled down version of \(x^*\). Vector \(z^*\) can be viewed as a point in the spanning tree polytope of the undirected graph on the support of \(x^*\) that one obtains after disregarding the directions of arcs (See Section 3.)
Let \(E\) be the support graph of \(z^*\) when the direction of the arcs are disregarded. Find weights \({\tilde{\gamma}}_{e \in E}\) such that the exponential distribution on the spanning trees, \(\tilde{p}(T) \propto \exp(\sum_{e \in T} \tilde{\gamma}_e)\) (approximately) preserves the marginals imposed by \(z^*\), i.e. for any edge \(e \in E\), \[\sum_{T \in \mathcal{T} : T \ni e} \tilde{p}(T) \leq (1 + \epsilon) z^*_e\] for a small enough value of \(\epsilon\). (In this paper we show that \(\epsilon = 0.2\) suffices for our purpose. See Section 7 and 8 for a description of how to compute such a distribution.)
Sample \(2\lceil \log n \rceil\) spanning trees \(T_1, \dots, T_{2\lceil \log n \rceil}\) from \(\tilde{p}(.)\). For each of these trees, orient all its edges so as to minimize its cost with respect to our (asymmetric) cost function \(c\). Let \(T^*\) be the tree whose resulting cost is minimal among all of the sampled trees.
Find a minimum cost integral circulation that contains the oriented tree \(\vec{T}^*\). Shortcut this circulation to a tour and output it. (See Section 4.)
Sections 7 and 8 provide two different methods to find the desired probability distribution, with section 7 using a combinatorial approach and section 8 the ellipsoid method. Considering that there is no ellipsoid solver in the scientific Python ecosystem, and that my mentors and I have already decided not to implement one within this project, I will be using the method in section 7.
The algorithm given in section 7 is as follows:
- Set \(\gamma = \vec{0}\).
- While there exists an edge \(e\) with \(q_e(\gamma) > (1 + \epsilon) z_e\):
- Compute \(\delta\) such that if we define \(\gamma’\) as \(\gamma_e’ = \gamma_e - \delta\), and \(\gamma_f’ = \gamma_f\) for all \(f \in E\ \backslash\ \{e\}\), then \(q_e(\gamma’) = (1 + \epsilon/2)z_e\).
- Set \(\gamma \leftarrow \gamma’\).
- Output \(\tilde{\gamma} := \gamma\).
This structure is fairly straightforward, but we need to know what \(q_e(\gamma)\) is and how to calculate \(\delta\).
Finding \(\delta\) is very easy: the formula is given in the Asadpour paper. (Although I did not realize this at the time that I wrote my GSoC proposal and re-derived the equation for \(\delta\) myself. Fortunately my formula matches the one in the paper.)
\[ \delta = \ln \frac{q_e(\gamma)(1 - (1 + \epsilon / 2)z_e)}{(1 - q_e(\gamma))(1 + \epsilon / 2) z_e} \]
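As a sanity check on this formula: subtracting \(\delta\) from \(\gamma_e\) multiplies the odds \(q_e / (1 - q_e)\) by \(\exp(-\delta)\), which lands exactly on the target odds. A small sketch (the numbers 0.6 and 0.45 are arbitrary illustrative values):

```python
import math

def delta(q_e, z_e, epsilon=0.2):
    # Direct transcription of the formula above, with the paper's
    # epsilon = 0.2, so the target marginal is (1 + epsilon / 2) * z_e
    target = (1 + epsilon / 2) * z_e
    return math.log(q_e * (1 - target) / ((1 - q_e) * target))

# Updating gamma_e -> gamma_e - delta scales the odds by exp(-delta)
d = delta(0.6, 0.45)
odds = math.exp(-d) * (0.6 / 0.4)
print(odds / (1 + odds))  # -> 0.495 up to floating point, i.e. 1.1 * 0.45
```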
Notice that the formula for \(\delta\) is reliant on \(q_e(\gamma)\). The paper defines \(q_e(\gamma)\) as
\[ q_e(\gamma) = \frac{\sum_{T \ni e} \exp(\gamma(T))}{\sum_{T \in \mathcal{T}} \exp(\gamma(T))} \]
where \(\gamma(T) = \sum_{f \in T} \gamma_f\).
The first thing that I noticed is that in the denominator the summation is over all spanning trees in the graph, which for the complete graphs we will be working with is exponential in number, so a “brute force” approach here is useless. Fortunately, Asadpour and team realized we can use Kirchhoff’s matrix tree theorem to our advantage.
As an aside about Kirchhoff’s matrix tree theorem, I was not familiar with this theorem before this project so I had to do a bit of reading about it. Basically, if you have a Laplacian matrix (the degree matrix minus the adjacency matrix), the absolute value of any cofactor is the number of spanning trees in the graph. This was something completely unexpected to me, and I think that it is very cool that this type of connection exists.
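A quick numeric illustration of the theorem (a minimal check, using a 4-cycle whose spanning trees are easy to count by hand):

```python
import numpy as np
import networkx as nx

# A cycle on 4 nodes has exactly 4 spanning trees (delete any one edge);
# the determinant of a first cofactor of its Laplacian agrees
G = nx.cycle_graph(4)
L = nx.laplacian_matrix(G).toarray()
n_trees = round(np.linalg.det(L[1:, 1:]))
print(n_trees)  # -> 4
```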
The details of using Kirchhoff’s theorem are given in section 5.3. We will be using a weighted Laplacian \(L\) defined by
\[ L_{i, j} = \left\{ \begin{array}{l l} -\lambda_e & e = (i, j) \in E \\\ \sum_{e \in \delta({i})} \lambda_e & i = j \\\ 0 & \text{otherwise} \end{array} \right. \]
where \(\lambda_e = \exp(\gamma_e)\).
Now, we know that applying Kirchhoff’s theorem to \(L\) will return
\[ \sum_{T \in \mathcal{T}} \prod_{e \in T} \lambda_e \]
but which part of \(q_e(\gamma)\) is that?
If we apply \(\lambda_e = \exp(\gamma_e)\), we find that
\[ \begin{array}{r c l} \sum_{T \in \mathcal{T}} \prod_{e \in T} \lambda_e &=& \sum_{T \in \mathcal{T}} \prod_{e \in T} \exp(\gamma_e) \\\ &=& \sum_{T \in \mathcal{T}} \exp\left(\sum_{e \in T} \gamma_e\right) \\\ &=& \sum_{T \in \mathcal{T}} \exp(\gamma(T)) \\\ \end{array} \]
Moving from the first row to the second row is the confusing step, but essentially we are exploiting the properties of exponents. Recall that \(\exp(x) = e^x\), so we could have written the product as \(\prod_{e \in T} e^{\gamma_e}\), but this introduces ambiguity as we would have multiple meanings of \(e\). Now, for the edges \(e_1, e_2, \dots, e_{n-1}\) in the spanning tree \(T\), that product can be expanded as
\[ \prod_{e \in T} e^{\gamma_e} = e^{\gamma_{e_1}} \times e^{\gamma_{e_2}} \times \dots \times e^{\gamma_{e_{n-1}}} \]
Each exponential factor has the same base, so we can collapse that into
\[ e^{\gamma_{e_1} + \gamma_{e_2} + \dots + \gamma_{e_{n-1}}} \]
which is also
\[ e^{\sum_{e \in T} \gamma_e} \]
but we know that \(\sum_{e \in T} \gamma_e\) is \(\gamma(T)\), so it becomes
\[ e^{\gamma(T)} = \exp(\gamma(T)) \]
Once we put that back into the summation we arrive at the denominator in \(q_e(\gamma)\), \(\sum_{T \in \mathcal{T}} \exp(\gamma(T))\).
Next, we need to find the numerator of \(q_e(\gamma)\). Just as before, a “brute force” approach would be exponential in complexity, so we have to find a better way. The only difference between the numerator and denominator is the condition on the outer summation, with \(T \in \mathcal{T}\) being changed to \(T \ni e\), i.e. every tree containing edge \(e\).
There is a way to use Kirchhoff’s matrix tree theorem here as well: find a graph whose spanning trees can be mapped in a one-to-one fashion onto the spanning trees of the original graph which contain the desired edge \(e\). In order for a spanning tree to contain edge \(e\), we know that the endpoints of \(e\), \((u, v)\), will be directly connected to each other. So we are interested in every spanning tree in which we reach vertex \(u\) and then leave from vertex \(v\) (as opposed to the spanning trees where we reach vertex \(u\) and then leave from that same vertex). In a sense, we are treating vertices \(u\) and \(v\) as the same vertex. We can apply this literally by contracting \(e\) from the graph, creating \(G / \{e\}\). Every spanning tree in \(G / \{e\}\) can be uniquely mapped onto a spanning tree in \(G\) which contains the edge \(e\).
From here, the logic to show that a cofactor from \(L\) is actually the numerator of \(q_e(\gamma)\) parallels the logic for the denominator.
At this point, we have all of the needed information to create some pseudocode for the next function in the Asadpour method, spanning_tree_distribution().
Here I will use an inner function q() to find \(q_e\).
def spanning_tree_distribution
    input: z, the symmetrized and scaled output of the Held Karp relaxation.
    output: gamma, the maximum entropy exponential distribution for sampling
            spanning trees from the graph.
    def q
        input: e, the edge of interest
        # Create the Laplacian matrices
        write lambda = exp(gamma) into the edges of G
        G_laplace = laplacian(G, lambda)
        G_e = nx.contracted_edge(G, e)
        G_e_laplace = laplacian(G_e, lambda)
        # Delete a row and column from each matrix to make a cofactor matrix
        G_laplace.delete((0, 0))
        G_e_laplace.delete((0, 0))
        # Calculate the determinant of the cofactor matrices
        det_G_laplace = G_laplace.det
        det_G_e_laplace = G_e_laplace.det
        # return q_e
        return det_G_e_laplace / det_G_laplace
    # initialize the gamma vector
    gamma = 0 vector of length G.size
    while true
        # We will iterate over the edges in z until we complete the
        # for loop without changing a value in gamma. This will mean
        # that there is not an edge with q_e > 1.2 * z_e
        valid_count = 0
        # Search for an edge with q_e > 1.2 * z_e
        for e in z
            q_e = q(e)
            z_e = z[e]
            if q_e > 1.2 * z_e
                delta = ln(q_e * (1 - 1.1 * z_e) / ((1 - q_e) * 1.1 * z_e))
                gamma[e] -= delta
            else
                valid_count += 1
        if valid_count == number of edges in z
            break
    return gamma
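The loop above can be sketched in runnable Python, with q abstracted as a callable q(e, gamma) so the termination logic can be seen on its own (a sketch, not the NetworkX implementation; a boolean flag replaces valid_count but the condition is the same):

```python
import math

def spanning_tree_distribution(z, q, epsilon=0.2):
    # z: dict mapping each edge to z_e; q: callable q(e, gamma) returning
    # the marginal of edge e under gamma (e.g. the Kirchhoff-based q)
    gamma = {e: 0.0 for e in z}
    while True:
        updated = False
        for e, z_e in z.items():
            q_e = q(e, gamma)
            if q_e > (1 + epsilon) * z_e:
                target = (1 + epsilon / 2) * z_e
                gamma[e] -= math.log(
                    q_e * (1 - target) / ((1 - q_e) * target)
                )
                updated = True
        # A full pass with no update means every q_e <= (1 + epsilon) z_e
        if not updated:
            return gamma
```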
The clear next step is to implement the function spanning_tree_distribution using the pseudocode above as an outline.
I will start by writing q and testing it with the same graphs which I am using to test the Held Karp relaxation. Once q is complete, the rest of the function seems fairly straightforward.
One thing that I am concerned about is my ability to test spanning_tree_distribution. There are no examples given in the Asadpour research paper and no other easy resources which I could turn to in order to find an oracle. The only method that I can think of right now would be to complete this function, then complete sample_spanning_tree.
Once both functions are complete, I can sample a large number of spanning trees to find an experimental probability for each tree, then run a statistical test (such as an h-test) to see if the probability of each tree is near \(\exp(\gamma(T))\) which is the desired distribution.
An alternative test would be to use the marginals of the distribution and manually check that
\[ \sum_{T \in \mathcal{T} : T \ni e} p(T) \leq (1 + \epsilon) z^*_e,\ \forall\ e \in E \]
where \(p(T)\) is the experimental data from the sampled trees.
Both methods seem very computationally intensive and because they are sampling from a probability distribution they may fail randomly due to an unlikely sample.
A. Asadpour, M. X. Goemans, A. Madry, S. Oveis Gharan, and A. Saberi, An O(log n / log log n)-approximation algorithm for the asymmetric traveling salesman problem, Operations Research, 65 (2017), pp. 1043–1061.
Google Summer of Code 2020’s first evaluation is completed. I passed!!! Hurray! Now we are midway through the second evaluation. This post discusses the progress made in the first two weeks of the second coding period, from 30 June to 12 July 2020.
We successfully created the matplotlib_baseline_images package. It contains the matplotlib and the matplotlib toolkit baseline images. Symlinking is done for the baseline images, the related changes for Travis, AppVeyor, Azure Pipelines, etc. are functional, and tests/test_data is created as discussed in the previous blog. The PR is reviewed and the suggested work is done.
We have divided the work into two parts. The first part is the generation of the baseline images, discussed below. The second part is the modification of the baseline images, which happens when some baseline images get modified due to git push or git merge. Modification of baseline images will be further divided into two sub-tasks: addition of new baseline images and deletion of the previous baseline images. This will be discussed in the second half of the second phase of Google Summer of Code 2020.
After the changes proposed in the previous PR, the developer will have no baseline images on a fresh install of matplotlib. The developer would need to install the sub-wheel matplotlib_baseline_images package to get started with the testing part of mpl. Now, we have started removing the use of the matplotlib_baseline_images package. It will require two steps, as discussed above.
The images can be generated by the image comparison tests. Once these images are generated for the first time, they can be used as the baseline images for comparison in later runs. This is the main principle adopted. The images are first created in the result_images directory, then moved to the lib/matplotlib/tests/baseline_images directory. Later on, running the pytests will start the image comparison.
I learned about pytest hooks and fixtures. I built a command line flag, matplotlib_baseline_image_generation, which will create the baseline images in the result_images directory. The full command is python3 -m pytest --matplotlib_baseline_image_generation. In order to do this, we made changes in conftest.py and also added markers to the image_comparison decorator.
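Registering such a flag goes through pytest's standard hooks. The following is a simplified illustration, not matplotlib's actual conftest.py; the attribute name baseline_image_generation is made up for this sketch:

```python
# conftest.py — sketch of registering a custom command line flag;
# the config attribute name below is hypothetical
def pytest_addoption(parser):
    parser.addoption(
        "--matplotlib_baseline_image_generation",
        action="store_true",
        default=False,
        help="generate baseline images in result_images instead of comparing",
    )


def pytest_configure(config):
    # Tests and decorators can then query this flag during collection
    config.baseline_image_generation = config.getoption(
        "--matplotlib_baseline_image_generation"
    )
```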
I came to know about git worktree and the scenarios in which we can use it. I also learned more about virtual environments and their need in different scenarios.
Once the generation of the baseline images is completed in the current PR, we will move to the modification of the baseline images in the second half of the second coding phase.
Meetings are held Monday to Thursday at 11:00 pm IST via Zoom. Meeting notes are available on HackMD.
I am grateful to be part of such a great community. Project is really interesting and challenging :) Thanks Thomas, Antony and Hannah for helping me so far.
This should be my final post about the Held-Karp relaxation! Since my last post, titled Implementing The Held Karp Relaxation, I have been testing both the ascent method as well as the branch and bound method.
My first test was to use a truly asymmetric graph rather than a directed graph where the cost in each direction happened to be the same.
In order to create such a test, I needed to know the solution to any such proposed graphs.
I wrote a Python script called brute_force_optimal_tour.py which will generate a random graph, print its adjacency matrix, and then check every possible combination of edges to find the optimal tour.
import networkx as nx
from itertools import combinations
import numpy as np
import math
import random
def is_1_arborescence(G):
    """
    Returns true if `G` is a 1-arborescence
    """
    return (
        G.number_of_edges() == G.order()
        and max(d for n, d in G.in_degree()) <= 1
        and nx.is_weakly_connected(G)
    )

# Generate a random adjacency matrix
size = (7, 7)
G_array = np.empty(size, dtype=int)
random.seed()
for r in range(size[0]):
    for c in range(size[1]):
        if r == c:
            G_array[r][c] = 0
            continue
        G_array[r][c] = random.randint(1, 100)

# Print that adjacency matrix
print(G_array)

G = nx.from_numpy_array(G_array, create_using=nx.DiGraph)
num_nodes = G.order()
combo_count = 0
min_weight_tour = None
min_tour_weight = math.inf
test_combo = nx.DiGraph()
for combo in combinations(G.edges(data="weight"), G.order()):
    combo_count += 1
    test_combo.clear()
    test_combo.add_weighted_edges_from(combo)
    # Test to see if test_combo is a tour.
    # This means first that it is a 1-arborescence
    if not is_1_arborescence(test_combo):
        continue
    # It also means that every vertex has a degree of 2
    arborescence_weight = test_combo.size("weight")
    if (
        len([n for n, deg in test_combo.degree if deg == 2]) == num_nodes
        and arborescence_weight < min_tour_weight
    ):
        # Tour found
        min_weight_tour = test_combo.copy()
        min_tour_weight = arborescence_weight

print(
    f"Minimum tour found with weight {min_tour_weight} from {combo_count} combinations of edges\n"
)
for u, v, d in min_weight_tour.edges(data="weight"):
    print(f"({u}, {v}, {d})")
This is useful information even though the ascent method returns a vector: if the ascent method finds this exact solution (i.e. \(f(\pi) = 0\)), we can calculate that vector from the edges in the solution without having to explicitly enumerate the dict returned by held_karp_ascent().
The first output from the program was a six vertex graph and is presented below.
~ time python3 brute_force_optimal_tour.py
[[ 0 45 39 92 29 31]
[72 0 4 12 21 60]
[81 6 0 98 70 53]
[49 71 59 0 98 94]
[74 95 24 43 0 47]
[56 43 3 65 22 0]]
Minimum tour found with weight 144.0 from 593775 combinations of edges
(0, 5, 31)
(5, 4, 22)
(1, 3, 12)
(3, 0, 49)
(2, 1, 6)
(4, 2, 24)
real 0m9.596s
user 0m9.689s
sys 0m0.241s
First I checked that the ascent method was returning a solution with the same weight, 144, which it was.
Also, every entry in the vector was \(0.8\overline{3}\), which is \(\frac{5}{6}\), the scaling factor from the Asadpour paper, so I know that it was finding the exact solution. Because of this, my test in test_traveling_salesman.py checks that for all edges in the solution edge set both \((u, v)\) and \((v, u)\) are equal to \(\frac{5}{6}\).
For my next test, I created a \(7 \times 7\) matrix to test with, and as expected the running time of the python script was much slower.
~ time python3 brute_force_optimal_tour.py
[[ 0 26 63 59 69 31 41]
[62 0 91 53 75 87 47]
[47 82 0 90 15 9 18]
[68 19 5 0 58 34 93]
[11 58 53 55 0 61 79]
[88 75 13 76 98 0 40]
[41 61 55 88 46 45 0]]
Minimum tour found with weight 190.0 from 26978328 combinations of edges
(0, 1, 26)
(1, 3, 53)
(3, 2, 5)
(2, 5, 9)
(5, 6, 40)
(4, 0, 11)
(6, 4, 46)
real 7m28.979s
user 7m29.048s
sys 0m0.245s
Once again, the value of \(f(\pi)\) hit 0, so the ascent method returned an exact solution and my testing procedure was the same as for the six vertex graph.
The branch and bound method did not work well with the two example graphs I generated. First, on the seven vertex matrix, I programmed the test and let it run… and run… and run… until I stopped it at just over an hour of execution time. If brute-forcing the solution took only one eighth of that time, then the branch and bound method truly is not efficient.
I moved to the six vertex graph with high hopes, since I already had a six vertex graph which was executing correctly in a reasonable amount of time. Instead, the six vertex graph produced a large number of exceptions and errors when I ran the tests. I was able to determine why the errors were being generated, but the context did not conform with my expectations for the branch and bound method.
Basically, direction_of_ascent_kilter() was finding a vertex which was out-of-kilter and returning the corresponding direction of ascent, but find_epsilon() was not finding any valid crossover edges and was returning a maximum direction of travel of \(\infty\). While I could change the default return value of find_epsilon() to zero, that would not solve the problem: the value of the vector \(\pi\) would get stuck and the program would enter an infinite loop.
I do have an analogy for this situation. Imagine that you are in an unfamiliar city and you have to meet somebody at the tallest building in that city. However, you don’t know the address and have no way to get a GPS route to that building. Instead of wandering around aimlessly, you decide to scan the skyline for the tallest building you can see and start walking down the street which is the closest to matching that direction. Additionally, you have the ability to tell at any given direction how far down the chosen street to go before you need to re-evaluate and pick a new street.
This hypothetical is only an approximation of the ascent method, but the problem can be demonstrated nonetheless.
After this procedure works for a while, you suddenly find yourself in an unusual situation. You can still see the tallest building, so you know you are not there yet. You know what street will take you closer to the building, but for some reason you cannot move down that street.
From my understanding of the ascent and branch and bound methods, if the direction of ascent exists, then we have to be able to move some amount in that direction without fail, but the branch and bound method was failing to provide an adequate distance to move.
Considering the trouble with the branch and bound method, and that it is not going to be used in the final Asadpour algorithm, I plan on removing it from the NetworkX pull request and moving onwards using only the ascent method for the rest of the Asadpour algorithm.
A. Asadpour, M. X. Goemans, A. Madry, S. Oveis Gharan, and A. Saberi, An O(log n / log log n)-approximation algorithm for the asymmetric traveling salesman problem, Operations Research, 65 (2017), pp. 1043-1061.
M. Held, R. M. Karp, The traveling-salesman problem and minimum spanning trees. Operations research, 1970-11-01, Vol.18 (6), p.1138-1162. https://www.jstor.org/stable/169411
]]>Imagine zooming into an image over and over and never running out of finer detail. It may sound bizarre, but the mathematical concept of fractals opens the realm towards this intriguing infinity. This strange geometry exhibits the same or similar patterns irrespective of the scale. We can see one fractal example in the image above.
Fractals may seem difficult to understand due to their peculiarity, but that's not the case. As Benoit Mandelbrot, one of the founding fathers of fractal geometry, said in his legendary TED Talk:
A surprising aspect is that the rules of this geometry are extremely short. You crank the formulas several times and at the end, you get things like this (pointing to a stunning plot)
– Benoit Mandelbrot
In this tutorial blog post, we will see how to construct fractals in Python and animate them using the amazing Matplotlib’s Animation API. First, we will demonstrate the convergence of the Mandelbrot Set with an enticing animation. In the second part, we will analyze one interesting property of the Julia Set. Stay tuned!
We all have a common sense of the concept of similarity. We say two objects are similar to each other if they share some common patterns.
This notion is not only limited to a comparison of two different objects. We can also compare different parts of the same object. For instance, a leaf. We know very well that the left side matches exactly the right side, i.e. the leaf is symmetrical.
In mathematics, this phenomenon is known as self-similarity. It means a given object is similar (completely or to some extent) to some smaller part of itself. One remarkable example is the Koch Snowflake as shown in the image below:
We can infinitely magnify some part of it and the same pattern will repeat over and over again. This is how fractal geometry is defined.
The Mandelbrot Set is defined over the set of complex numbers. It consists of all complex numbers c such that the sequence zᵢ₊₁ = zᵢ² + c, z₀ = 0 is bounded. It means that, after a certain number of iterations, the absolute value must not exceed a given limit. At first sight it might seem odd and simple, but in fact it has some mind-blowing properties.
The Python implementation is quite straightforward, as given in the code snippet below:
def mandelbrot(x, y, threshold):
    """Calculates whether the number c = x + i*y belongs to the
    Mandelbrot set. In order to belong, the sequence z[i + 1] = z[i]**2 + c
    must not diverge after 'threshold' number of steps. The sequence diverges
    if the absolute value of z[i+1] is greater than 4.

    :param float x: the x component of the initial complex number
    :param float y: the y component of the initial complex number
    :param int threshold: the number of iterations to consider it converged
    """
    # initial conditions
    c = complex(x, y)
    z = complex(0, 0)

    for i in range(threshold):
        z = z**2 + c
        if abs(z) > 4.0:  # it diverged
            return i

    return threshold - 1  # it didn't diverge
As we can see, we set the maximum number of iterations encoded in the variable threshold. If the magnitude of the sequence at some iteration exceeds 4, we consider it as diverged (c does not belong to the set) and return the iteration number at which this occurred. If this never happens (c belongs to the set), we return the maximum number of iterations.
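As a quick sanity check (restating the function above so the snippet is self-contained), a point well inside the set exhausts the iteration budget, while a point outside escapes after only a few steps:

```python
def mandelbrot(x, y, threshold):
    """Return the iteration at which z diverges, or threshold - 1 if it never does."""
    c = complex(x, y)
    z = complex(0, 0)
    for i in range(threshold):
        z = z**2 + c
        if abs(z) > 4.0:
            return i
    return threshold - 1

print(mandelbrot(0, 0, 100))  # 99: the origin never diverges
print(mandelbrot(1, 0, 100))  # 2: z goes 1, 2, 5 and escapes
```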
We can use the information about the number of iterations before the sequence diverges. All we have to do is to associate this number to a color relative to the maximum number of loops. Thus, for all complex numbers c in some lattice of the complex plane, we can make a nice animation of the convergence process as a function of the maximum allowed iterations.
One particularly interesting area is the 3x3 lattice starting at position -2 and -1.5 for the real and imaginary axis respectively. We can observe the process of convergence as the number of allowed iterations increases. This is easily achieved using Matplotlib's Animation API, as shown in the following code:
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.animation as animation

x_start, y_start = -2, -1.5  # an interesting region starts here
width, height = 3, 3  # for 3 units up and right
density_per_unit = 250  # how many pixels per unit

# real and imaginary axis
re = np.linspace(x_start, x_start + width, width * density_per_unit)
im = np.linspace(y_start, y_start + height, height * density_per_unit)

fig = plt.figure(figsize=(10, 10))  # instantiate a figure to draw
ax = plt.axes()  # create an axes object

def animate(i):
    ax.clear()  # clear axes object
    ax.set_xticks([])  # clear x-axis ticks
    ax.set_yticks([])  # clear y-axis ticks

    X = np.empty((len(re), len(im)))  # re-initialize the array-like image
    threshold = round(1.15 ** (i + 1))  # calculate the current threshold

    # iterations for the current threshold
    for j in range(len(re)):
        for k in range(len(im)):
            X[j, k] = mandelbrot(re[j], im[k], threshold)

    # associate colors to the iterations with an interpolation
    img = ax.imshow(X.T, interpolation="bicubic", cmap="magma")
    return [img]

anim = animation.FuncAnimation(fig, animate, frames=45, interval=120, blit=True)
anim.save("mandelbrot.gif", writer="imagemagick")
We make animations in Matplotlib using the FuncAnimation function from the Animation API. We need to specify the figure on which we draw a predefined number of consecutive frames. A predetermined interval, expressed in milliseconds, defines the delay between the frames.
In this context, the animate function plays a central role: its input argument is the frame number, starting from 0. It means that, in order to animate, we always have to think in terms of frames. Hence, we use the frame number to calculate the variable threshold, which is the maximum number of allowed iterations.
To represent our lattice we instantiate two arrays, re and im: the former for the values on the real axis and the latter for the values on the imaginary axis. The number of elements in these two arrays is defined by the variable density_per_unit, which defines the number of samples per unit step. The higher it is, the better quality we get, but at a cost of heavier computation.
Now, depending on the current threshold, for every complex number c in our lattice we calculate the number of iterations before the sequence zᵢ₊₁ = zᵢ² + c, z₀ = 0 diverges. We save them in an initially empty matrix called X. In the end, we interpolate the values in X and assign them a color drawn from a prearranged colormap.
After cranking the animate function multiple times we get a stunning animation, as depicted below:
The Julia Set is quite similar to the Mandelbrot Set. Instead of setting z₀ = 0 and testing whether for some complex number c = x + i*y the sequence zᵢ₊₁ = zᵢ² + c is bounded, we switch the roles a bit. We fix the value for c, we set an arbitrary initial condition z₀ = x + i*y, and we observe the convergence of the sequence. The Python implementation is given below:
def julia_quadratic(zx, zy, cx, cy, threshold):
    """Calculates whether the number z[0] = zx + i*zy with a constant
    c = cx + i*cy belongs to the Julia set. In order to belong, the sequence
    z[i + 1] = z[i]**2 + c must not diverge after 'threshold' number of steps.
    The sequence diverges if the absolute value of z[i+1] is greater than 4.

    :param float zx: the x component of z[0]
    :param float zy: the y component of z[0]
    :param float cx: the x component of the constant c
    :param float cy: the y component of the constant c
    :param int threshold: the number of iterations to consider it converged
    """
    # initial conditions
    z = complex(zx, zy)
    c = complex(cx, cy)

    for i in range(threshold):
        z = z**2 + c
        if abs(z) > 4.0:  # it diverged
            return i

    return threshold - 1  # it didn't diverge
Obviously, the setup is quite similar to the Mandelbrot Set implementation. The maximum number of iterations is denoted as threshold. If the magnitude of the sequence is never greater than 4, the number z₀ belongs to the Julia Set, and vice versa.
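A quick check (restating the function above so the snippet is self-contained) makes the role reversal concrete: with c = 0 the dynamics are just z → z², so any |z₀| < 1 stays bounded while larger starting points escape quickly:

```python
def julia_quadratic(zx, zy, cx, cy, threshold):
    """Return the iteration at which z diverges, or threshold - 1 if it never does."""
    z = complex(zx, zy)
    c = complex(cx, cy)
    for i in range(threshold):
        z = z**2 + c
        if abs(z) > 4.0:
            return i
    return threshold - 1

print(julia_quadratic(0.5, 0, 0, 0, 20))  # 19: |z0| < 1 never diverges
print(julia_quadratic(2.0, 0, 0, 0, 20))  # 1: z goes 4, 16 and escapes
```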
The number c gives us the freedom to analyze its impact on the convergence of the sequence, given that the maximum number of iterations is fixed. One interesting range of values for c is c = r cos α + i × r sin α with r = 0.7885 and α ∈ [0, 2π].
The best possible way to make this analysis is to create an animated visualization as the number c changes. This ameliorates our visual perception and understanding of such abstract phenomena in a captivating manner. To do so, we use the Matplotlib’s Animation API, as demonstrated in the code below:
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.animation as animation

x_start, y_start = -2, -2  # an interesting region starts here
width, height = 4, 4  # for 4 units up and right
density_per_unit = 200  # how many pixels per unit

# real and imaginary axis
re = np.linspace(x_start, x_start + width, width * density_per_unit)
im = np.linspace(y_start, y_start + height, height * density_per_unit)

threshold = 20  # max allowed iterations
frames = 100  # number of frames in the animation

# we represent c as c = r*cos(a) + i*r*sin(a) = r*e^{i*a}
r = 0.7885
a = np.linspace(0, 2 * np.pi, frames)

fig = plt.figure(figsize=(10, 10))  # instantiate a figure to draw
ax = plt.axes()  # create an axes object

def animate(i):
    ax.clear()  # clear axes object
    ax.set_xticks([])  # clear x-axis ticks
    ax.set_yticks([])  # clear y-axis ticks

    X = np.empty((len(re), len(im)))  # the initial array-like image
    cx, cy = r * np.cos(a[i]), r * np.sin(a[i])  # the current c number

    # iterations for the given threshold
    for j in range(len(re)):
        for k in range(len(im)):
            X[j, k] = julia_quadratic(re[j], im[k], cx, cy, threshold)

    img = ax.imshow(X.T, interpolation="bicubic", cmap="magma")
    return [img]

anim = animation.FuncAnimation(fig, animate, frames=frames, interval=50, blit=True)
anim.save("julia_set.gif", writer="imagemagick")
The logic in the animate function is very similar to the previous example. We update the number c as a function of the frame number. Based on that, we estimate the convergence of all complex numbers in the defined lattice, given the fixed threshold of allowed iterations. Same as before, we save the results in an initially empty matrix X and associate them to a color relative to the maximum number of iterations. The resulting animation is illustrated below:
Fractals are really mind-boggling structures, as we saw in this blog post. First, we gave a general intuition of fractal geometry. Then, we observed two types of fractals: the Mandelbrot and Julia sets. We implemented them in Python and made interesting animated visualizations of their properties.
]]>I have now completed my implementation of the ascent and branch and bound methods detailed in the 1970 paper The Traveling-Salesman Problem and Minimum Spanning Trees by Michael Held and Richard M. Karp.
In my last post, titled Understanding the Ascent Method, I completed the first iteration of the ascent method, found an important bug in the find_epsilon() method, and found a more efficient way to determine substitutes in the graph.
However, the solution being returned was still not the optimal one.
After discussing my options with my GSoC mentors, I decided to move on to the branch and bound method anyway, hoping that, because the method is more human-computable and an example was given in the paper by Held and Karp, I would be able to find the remaining flaws. Fortunately, this was indeed the case: I was able to correctly implement the branch and bound method and fix the last problem with the ascent method.
The branch and bound method follows from the ascent method, but tweaks how we determine the direction of ascent and simplifies the expression used for \(\epsilon\). As a reminder, we use the notion of an out-of-kilter vertex to find directions of ascent which are unit vectors or negative unit vectors. An out-of-kilter vertex is a vertex which is consistently not connected enough or connected too much in the set of minimum 1-arborescences of a graph. The formal definition is given on page 1151 as
Vertex \(i\) is said to be out-of-kilter high at the point \(\pi\), if, for all \(k \in K(\pi), v_{ik} \geqq 1\); similarly, vertex \(i\) is out-of-kilter low at the point \(\pi\) if, for all \(k \in K(\pi), v_{ik} = -1\).
Here \(v_{ik}\) is the degree of vertex \(i\) minus two.
First, I created a function called direction_of_ascent_kilter() which returns a direction of ascent based on whether a vertex is out-of-kilter.
However, I did not use the method mentioned in the paper by Held and Karp, which is to find a member of \(K(\pi, u_i)\), where \(u_i\) is the unit vector with 1 in the \(i\)th location, and check whether vertex \(i\) has a degree of 1 or more than two.
Instead, since I could find the elements of \(K(\pi)\) with existing code, I decided to check the value of \(v_{ik}\) for all \(k \in K(\pi)\) and, once it is determined that a vertex is out-of-kilter, simply move on to the next vertex.
Once I have a mapping of all vertices to their kilter states, I find one which is out-of-kilter and return the corresponding direction of ascent.
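Reduced to its essentials, that classification might look like this (a hypothetical sketch, not the NetworkX code: each minimum 1-arborescence is represented only by a mapping from vertices to their degrees):

```python
def kilter_state(degree_maps, v):
    """Classify vertex v using v_ik = degree - 2 across every minimum
    1-arborescence in K(pi). degree_maps is a list of {vertex: degree} dicts."""
    v_ik = [degrees[v] - 2 for degrees in degree_maps]
    if all(x >= 1 for x in v_ik):
        return "out-of-kilter high"  # always connected too much
    if all(x == -1 for x in v_ik):
        return "out-of-kilter low"  # always not connected enough
    return "in kilter"

print(kilter_state([{3: 1}, {3: 1}], 3))  # out-of-kilter low
```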
The changes to find_epsilon() were very minor: basically removing the denominator from the formula for \(\epsilon\) and adding a check for a negative direction of ascent so that the crossover distances become positive and thus valid.
The one brand new function needed was branch(), which, well… branches according to the Held and Karp paper. The first thing it does is run the linear program from the ascent method to determine if a direction of ascent exists. If the direction does exist, branch. If not, search the set of minimum 1-arborescences for a tour, and then branch if one does not exist. The branch process itself is rather simple: find the first open edge (an edge not in the partition sets \(X\) and \(Y\)) and then create two new configurations where that edge is either included or excluded, respectively.
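That branching step can be sketched in a few lines (hypothetical code, with a configuration reduced to just the two edge sets \(X\) and \(Y\)):

```python
def branch(included, excluded, edges):
    """Split on the first open edge: return one child configuration forcing
    the edge in and one forcing it out, or None if every edge is decided."""
    for e in edges:
        if e not in included and e not in excluded:
            return (included | {e}, excluded), (included, excluded | {e})
    return None  # no open edge left to branch on

with_edge, without_edge = branch(set(), set(), [(0, 1), (1, 2)])
print(with_edge)  # ({(0, 1)}, set())
```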
Finally, the overall structure of the algorithm, written in pseudocode, is:
initialize pi to be the zero vector
add the configuration (∅, ∅, pi, w(0)) to the configuration priority queue
while configuration_queue is not empty:
    config = configuration_queue.get()
    dir_ascent = direction_of_ascent_kilter()
    if dir_ascent is None:
        branch()
        if solution returned by branch is not None:
            return solution
    else:
        max_dist = find_epsilon()
        update pi
        update edge weights
        update config pi and bound value
My initial implementation of the branch and bound method returned the same incorrect solution as the ascent method, but with different edge weights. As a reminder, I wanted a solution which looked like this:
and I now had two algorithms returning this solution:
As I mentioned before, the branch and bound method is more human-computable than the ascent method, so I decided to follow the execution of my implementation alongside the one given in [1]. Below, the left side is the data from the Held and Karp paper and the right side is my program's execution on the directed version.
Undirected Graph | Directed Graph
---|---
Iteration 1: |
Starting configuration: \((\emptyset, \emptyset, \begin{bmatrix} 0 & 0 & 0 & 0 & 0 & 0 \end{bmatrix}, 196)\) | Starting configuration: \((\emptyset, \emptyset, \begin{bmatrix} 0 & 0 & 0 & 0 & 0 & 0 \end{bmatrix}, 196)\)
Minimum 1-Trees: | Minimum 1-Arborescences:
Vertex 3 out-of-kilter LOW | Vertex 3 out-of-kilter LOW
\(d = \begin{bmatrix} 0 & 0 & 0 & -1 & 0 & 0 \end{bmatrix}\) | \(d = \begin{bmatrix} 0 & 0 & 0 & -1 & 0 & 0 \end{bmatrix}\)
\(\epsilon(\pi, d) = 5\) | \(\epsilon(\pi, d) = 5\)
New configuration: \((\emptyset, \emptyset, \begin{bmatrix} 0 & 0 & 0 & -5 & 0 & 0 \end{bmatrix}, 201)\) | New configuration: \((\emptyset, \emptyset, \begin{bmatrix} 0 & 0 & 0 & -5 & 0 & 0 \end{bmatrix}, 212)\)
Iteration 2: |
Minimum 1-Trees: | Minimum 1-Arborescences:
In order to get these results, I had to forbid the program from choosing to connect vertex 0 to the same vertex for both the incoming and outgoing edge. However, it is very clear from the start that iteration two was not going to be the same.
I noticed that in the first iteration there were twice as many 1-arborescences as 1-trees, and that the difference was that the cycle can be traversed in both directions. This creates a mapping between 1-trees and 1-arborescences. In the second iteration there are not twice as many 1-arborescences, and that mapping is not present. Vertex 0 always connects to vertex 3 in the arborescences and to vertex 5 in the trees. Additionally, the costs of the 1-arborescences are higher than the costs of the 1-trees.
I knew from working on the ascent method that the choice of root node in the arborescences affects the total price. I now wondered if a minimum 1-arborescence could come from a non-minimum spanning arborescence. As it turns out, the answer is yes.
In order to test this hypothesis, I created a simple python script using a modified version of k_pi(). The entire thing is longer than I'd like to put here, but the gist was simple: iterate over all of the spanning arborescences in the graph, tracking the minimum weight, and then print the minimum 1-arborescences that this program finds to compare against the ones that the unaltered one finds.
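The minimum tracking at the heart of that script can be sketched like this (a hypothetical reduction, where each candidate 1-arborescence is just a (weight, edges) pair):

```python
def minimum_one_arborescences(candidates):
    """Keep only the minimum-weight 1-arborescences from a stream of
    candidates, discarding the old set whenever a lighter one appears."""
    best, best_weight = [], float("inf")
    for weight, edges in candidates:
        if weight < best_weight:
            best, best_weight = [edges], weight  # new minimum: drop the rest
        elif weight == best_weight:
            best.append(edges)  # ties are kept
    return best_weight, best
```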
The output is below:
Adding arborescence with weight 212.0
Adding arborescence with weight 212.0
Adding arborescence with weight 212.0
Adding arborescence with weight 204.0
Adding arborescence with weight 204.0
Adding arborescence with weight 196.0
Adding arborescence with weight 196.0
Adding arborescence with weight 196.0
Adding arborescence with weight 196.0
Adding arborescence with weight 196.0
Adding arborescence with weight 196.0
Found 6 minimum 1-arborescences
(1, 5, 30)
(2, 1, 41)
(2, 3, 21)
(4, 2, 35)
(5, 0, 52)
(0, 4, 17)
(1, 2, 41)
(2, 3, 21)
(2, 4, 35)
(4, 0, 17)
(5, 1, 30)
(0, 5, 52)
(2, 3, 21)
(2, 4, 35)
(4, 0, 17)
(5, 1, 30)
(5, 2, 41)
(0, 5, 52)
(2, 4, 35)
(3, 2, 16)
(4, 0, 17)
(5, 1, 30)
(5, 3, 46)
(0, 5, 52)
(2, 3, 21)
(3, 5, 41)
(4, 2, 35)
(5, 1, 30)
(5, 0, 52)
(0, 4, 17)
(2, 3, 21)
(2, 5, 41)
(4, 2, 35)
(5, 1, 30)
(5, 0, 52)
(0, 4, 17)
This was very enlightening. The 1-arborescences of weight 212 were the ones that my branch and bound method was using in the second iteration, but they were not the true minimum ones. Graphically, those six 1-arborescences look like this:
And suddenly that mapping between the 1-trees and 1-arborescences is back! But why can minimum 1-arborescences come from non-minimum spanning arborescences? Remember that we create 1-arborescences by finding spanning arborescences on the vertex set \(\{2, 3, \dots, n\}\) and then connecting the missing vertex to the root of the spanning arborescence and via its minimum weight incoming edge.
This means that even among the true minimum spanning arborescences, the final weight of the 1-arborescence can vary based on the cost of connecting ‘vertex 1’ to the root of the arborescence. I already had to deal with this issue earlier in the implementation of the ascent method. Now suppose that not every vertex in the graph is a root of an arborescence in the set of minimum spanning arborescences. Let the minimum root be the root vertex of the arborescence which is the cheapest to connect to and the maximum root the root vertex which is the most expensive to connect to. If we needed to, we could order the roots from minimum to maximum based on the weight of the edge from ‘vertex 1’ to that root.
Finally, suppose that the result of considering only the set of minimum spanning arborescences is a set of minimum 1-arborescences which do not use the minimum root and have a total cost \(c\) more than the cost of the minimum spanning arborescence plus the cost of connecting to the minimum root.
Continue to consider spanning arborescences in increasing weight, such as the ones returned by the ArborescenceIterator. Eventually the ArborescenceIterator will return a spanning arborescence which has the minimum root. If the cost of the minimum spanning arborescence is \(c_{min}\) and the cost of this arborescence is less than \(c_{min} + c\), then a new minimum 1-arborescence has been found from a non-minimum spanning arborescence.
It is obviously impractical to consider all of the spanning arborescences in the graph, but because the ArborescenceIterator returns arborescences in order of increasing weight, there is a weight after which it is impossible to produce a minimum 1-arborescence.
Let the cost of a minimum spanning arborescence be \(c_{min}\) and the total costs of connecting the roots range from \(r_{min}\) to \(r_{max}\).
The worst case cost of the minimum 1-arborescence is \(c_{min} + r_{max}\) which would connect the minimum spanning arborescence to the most expensive root and the best case minimum 1-arborescence would be \(c_{min} + r_{min}\).
With regard to the weight of the spanning arborescence itself, once it exceeds \(c_{min} + r_{max} - r_{min}\) we know that even if it uses the minimum root, the total weight will be greater than the worst case minimum 1-arborescence, so that is the bound with which we use the ArborescenceIterator.
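As a small illustration of this cutoff (a hypothetical helper with made-up numbers, not taken from the test graph):

```python
def iterator_weight_cutoff(c_min, root_costs):
    """Weight beyond which a spanning arborescence can no longer yield a
    minimum 1-arborescence. root_costs maps each possible root to the cost
    of connecting 'vertex 1' to it."""
    r_min, r_max = min(root_costs.values()), max(root_costs.values())
    # even with the cheapest root, anything heavier than this exceeds the
    # worst-case minimum 1-arborescence weight of c_min + r_max
    return c_min + r_max - r_min

print(iterator_weight_cutoff(100, {"a": 5, "b": 12}))  # 107
```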
After implementing this boundary for checking spanning arborescences to find minimum 1-arborescences, both methods executed successfully on the test graph.
Now that both the ascent and branch and bound methods are working, they must be tested both for accuracy and performance. Surprisingly, on the test graph I have been using, which is originally from the Held and Karp paper, the ascent method is between 2 and 3 times faster than the branch and bound method. However, this six vertex graph is small and the branch and bound method may yet have better performance on larger graphs. I will have to create larger test graphs and then select whichever method has better performance overall.
Additionally, this is an example where \(f(\pi)\), the gap between a tour and 1-arborescence, converges to 0. This is not always the case, so I will need to test on an example where the minimum gap is greater than 0.
Finally, the output of my Held-Karp relaxation program is a tour. This is just one part of the Asadpour asymmetric traveling salesperson problem, and that algorithm takes a modified vector which is produced from the final result of the relaxation. I still need to convert the output to match the expectation of the overall algorithm I am seeking to implement this summer of code.
I hope to move onto the next step of the Asadpour algorithm on either June 30th or July 1st.
[1] Held, M., Karp, R.M. The traveling-salesman problem and minimum spanning trees. Operations research, 1970-11-01, Vol.18 (6), p.1138-1162. https://www.jstor.org/stable/169411
]]>Google Summer of Code 2020's first evaluation is about to complete. This post discusses the progress made in the last two weeks of the first coding period, from 15 June to 30 June 2020.
We successfully created the demo app and uploaded it to test.pypi. It contains the main and the secondary package. The main package is analogous to matplotlib, and the secondary package is analogous to the matplotlib_baseline_images package, as discussed in the previous blog.
I came across another way to merge the master into the branch to resolve conflicts: rebasing onto master. I also learned how to create modular commits inside a pull request for an easier review process and better understandability of the code.
Then, we implemented similar changes to create the matplotlib_baseline_images package. Finally, we were successful in uploading it to test.pypi. This package lives in the sub-wheels directory so that more packages can be added in the same directory if needed in the future. The matplotlib_baseline_images package contains the baseline images for both matplotlib and mpl_toolkits.
Some changes were required in the main matplotlib package's setup.py so that it does not take information from the packages present in the sub-wheels directory.
As the baseline images were moved out of the lib/matplotlib and lib/mpl_toolkits directories, we symlinked the locations where they are used, namely in lib/matplotlib/testing/decorator.py, tools/triage_tests.py, lib/matplotlib/tests/__init__.py and lib/mpl_toolkits/tests/__init__.py.
Some test data present in baseline_images does not need to be moved to the matplotlib_baseline_images package, so it is stored under the lib/matplotlib/tests/test_data folder.
I came across the Continuous Integration tools used at mpl. We tried to install matplotlib followed by the matplotlib_baseline_images package on all three of Travis, AppVeyor, and Azure Pipelines.
Once the current PR is merged, we will move to the Proposal for the baseline images problem.
Daily meetings take place at 11:00 pm IST via Zoom. Meeting notes are available on HackMD.
I am grateful to be part of such a great community. Project is really interesting and challenging :) Thanks Antony and Hannah for helping me so far.
]]>It has been far longer than I would have preferred since I wrote a blog post. As I expected in my original GSoC proposal, the Held-Karp relaxation is proving to be quite difficult to implement.
My mentors and I agreed that I should implement the branch and bound method discussed in Held and Karp's 1970 paper The Traveling-Salesman Problem and Minimum Spanning Trees, which first required implementing the ascent method because it is used in the branch and bound method. For the last week and a half I have been implementing and debugging the ascent method, and I wanted to take some time to reflect on what I have learned.
I will start by saying that, as of the writing of this post, my version of the ascent method is not giving what I expect to be the optimal solution. For my testing, I took the graph which Held and Karp use in their example of the branch and bound method, a weighted \(\mathcal{K}_6\), and converted it to a directed but symmetric version given in the following adjacency matrix.
\[ \begin{bmatrix} 0 & 97 & 60 & 73 & 17 & 52 \\\ 97 & 0 & 41 & 52 & 90 & 30 \\\ 60 & 41 & 0 & 21 & 35 & 41 \\\ 73 & 52 & 21 & 0 & 95 & 46 \\\ 17 & 90 & 35 & 95 & 0 & 81 \\\ 52 & 30 & 41 & 46 & 81 & 0 \end{bmatrix} \]
The original solution is an undirected tour but in the directed version, the expected solutions depend on which way they are traversed. Both of these cycles have a total weight of 207.
This is the cycle returned by the program, which has a total weight of 246.
All of this code goes into the function _held_karp() within traveling_salesman.py in NetworkX, and I tried to follow the algorithm outlined in the paper as closely as I could.
The _held_karp() function itself has three inner functions, k_pi(), direction_of_ascent() and find_epsilon(), which represent the three main steps used in each iteration of the ascent method.
k_pi()
k_pi() uses the ArborescenceIterator I implemented during the first week of coding for the Summer of Code to find all of the minimum 1-arborescences in the graph.
My original assessment of creating 1-arborescences was slightly incorrect.
I stated that
In order to connect vertex 1, we would choose the outgoing arc with the smallest cost and the incoming arc with the smallest cost.
In reality, this method would produce graphs which are almost arborescences, based solely on the fact that the outgoing arc would almost certainly create a vertex with two incoming arcs. Instead, we need to connect vertex 1 with the incoming edge of lowest cost and with the edge connecting to the root node of the arborescence on nodes \(\{2, 3, \dots, n\}\); that way the in-degree constraint is not violated.
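The construction just described can be sketched as follows (a hypothetical helper, not the NetworkX code: the graph is reduced to a dict mapping directed edges to weights, and the spanning arborescence to a plain edge list):

```python
def to_one_arborescence(arb_edges, root, excluded, weights):
    """Turn a spanning arborescence on the remaining vertices into a
    1-arborescence: attach the excluded vertex via its cheapest incoming
    arc, and via its arc to the arborescence's root so no vertex gains a
    second incoming arc."""
    # cheapest arc into the excluded vertex
    incoming = min(
        (e for e in weights if e[1] == excluded and e[0] != excluded),
        key=lambda e: weights[e],
    )
    # the only out-arc that cannot violate an in-degree constraint
    outgoing = (excluded, root)
    edges = list(arb_edges) + [incoming, outgoing]
    return edges, sum(weights[e] for e in edges)
```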
For the test graph on the first iteration of the ascent method, k_pi() returned 10 1-arborescences, but the costs were not all the same. Notice that, because we have no agency in choosing the outgoing edge of vertex 1, the total cost of the 1-arborescence will vary by the difference between the cheapest root to connect to and the most expensive root to connect to.
My original writing of this function was not very efficient: it created 1-arborescences from all of the minimum spanning arborescences and then iterated over them to delete all of the non-minimum ones. Yesterday I re-wrote this function so that once a 1-arborescence of lower weight is found, it deletes all of the current minimum ones in favor of the new one, and does not add any 1-arborescences it finds with greater weight to the set of minimum 1-arborescences.
The real reason that I re-wrote the method was to try something new in hopes of pushing the program from a suboptimal solution to the optimal one.
As I mentioned early, the forced choice of connecting to the root node created 1-arborescences of different weight.
I suspected then that different choices of vertex 1 would be able to create 1-arborescences of even lower weight than just arbitrarily using the one returned by next(G.__iter__())
.
So I wrapped all of k_pi()
with a for
loop over the vertices of the graph and found that the choice of vertex 1 made a difference.
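That wrapping loop can be sketched as follows. Here `k_pi` is a hypothetical stand-in for my actual function, assumed to take the vertex chosen as vertex 1 and return the set of minimum 1-arborescences together with their common weight.

```python
def best_excluded_vertex(G, k_pi):
    """Try every vertex of G as 'vertex 1' and keep the cheapest result.

    `k_pi(G, excluded_node=v)` is assumed to return a pair
    (set of minimum 1-arborescences, total weight) for that choice.
    """
    best_vertex, best_weight = None, float("inf")
    for v in G:
        _, weight = k_pi(G, excluded_node=v)
        if weight < best_weight:
            best_vertex, best_weight = v, weight
    return best_vertex, best_weight
```

For the test graph this kind of sweep surfaces the cheapest choice of vertex 1, although, as discussed below, feeding those per-vertex minima back into the ascent method did not work out.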
Excluded node: 0, Total Weight: 161.0
Chosen incoming edge for node 0: (4, 0), chosen outgoing edge for node 0: (0, 4)
(2, 3, 21)
(2, 5, 41)
(4, 2, 35)
(4, 0, 17)
(5, 1, 30)
(0, 4, 17)
Excluded node: 0, Total Weight: 161.0
Chosen incoming edge for node 0: (4, 0), chosen outgoing edge for node 0: (0, 4)
(1, 5, 30)
(2, 1, 41)
(2, 3, 21)
(4, 2, 35)
(4, 0, 17)
(0, 4, 17)
Excluded node: 1, Total Weight: 174.0
Chosen incoming edge for node 1: (5, 1), chosen outgoing edge for node 1: (1, 5)
(2, 3, 21)
(2, 4, 35)
(4, 0, 17)
(5, 2, 41)
(5, 1, 30)
(1, 5, 30)
Excluded node: 2, Total Weight: 187.0
Chosen incoming edge for node 2: (3, 2), chosen outgoing edge for node 2: (2, 3)
(0, 4, 17)
(3, 5, 46)
(3, 2, 21)
(5, 0, 52)
(5, 1, 30)
(2, 3, 21)
Excluded node: 3, Total Weight: 165.0
Chosen incoming edge for node 3: (2, 3), chosen outgoing edge for node 3: (3, 2)
(1, 5, 30)
(2, 1, 41)
(2, 4, 35)
(2, 3, 21)
(4, 0, 17)
(3, 2, 21)
Excluded node: 3, Total Weight: 165.0
Chosen incoming edge for node 3: (2, 3), chosen outgoing edge for node 3: (3, 2)
(2, 4, 35)
(2, 5, 41)
(2, 3, 21)
(4, 0, 17)
(5, 1, 30)
(3, 2, 21)
Excluded node: 4, Total Weight: 178.0
Chosen incoming edge for node 4: (0, 4), chosen outgoing edge for node 4: (4, 0)
(0, 5, 52)
(0, 4, 17)
(1, 2, 41)
(2, 3, 21)
(5, 1, 30)
(4, 0, 17)
Excluded node: 4, Total Weight: 178.0
Chosen incoming edge for node 4: (0, 4), chosen outgoing edge for node 4: (4, 0)
(0, 5, 52)
(0, 4, 17)
(2, 3, 21)
(5, 1, 30)
(5, 2, 41)
(4, 0, 17)
Excluded node: 5, Total Weight: 174.0
Chosen incoming edge for node 5: (1, 5), chosen outgoing edge for node 5: (5, 1)
(1, 2, 41)
(1, 5, 30)
(2, 3, 21)
(2, 4, 35)
(4, 0, 17)
(5, 1, 30)
Note that because my test graph is symmetric, it likes to make cycles with only two nodes. The weights of these 1-arborescences range from 161 to 178, so I tried to run the test, which had been taking about 300 ms, using the new approach… and the program was non-terminating. I created breakpoints in PyCharm after 200 iterations of the ascent method and found that the program was stuck in a loop where it alternated between two different minimum 1-arborescences. This was a long shot and it did not work out, so I reverted the code to always pick the same vertex for vertex 1.
Either way, the fact that I had almost entirely re-written this function without a change in output suggests that this function is not the source of the problem.
direction_of_ascent()
This was the one function which has pseudocode in the Held and Karp paper:
1. Set \(d\) equal to the zero \(n\)-vector.
2. Find a 1-tree \(T^k\) such that \(k \in K(\pi, d)\). [A method of executing Step 2 follows from the results of Section 6 (the greedy algorithm).]
3. If \(\sum_{i=1}^{i=n} d_i v_{i k} > 0\), STOP.
4. \(d_i \rightarrow d_i + v_{i k}\), for \(i = 2, 3, \dots, n\).
5. GO TO 2.
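A rough Python transcription of these steps might look like the sketch below. `find_1_tree(d)` is a hypothetical helper standing in for the greedy 1-tree computation of Section 6; it is assumed to return the vector \(v\) with \(v_i = \deg_i - 2\) for a minimum 1-tree under the perturbation \(d\). For simplicity the sketch updates every component of \(d\), while the paper updates only \(i = 2, \dots, n\).

```python
def direction_of_ascent_sketch(nodes, find_1_tree, max_iter=1000):
    """Follow the Held and Karp pseudocode until d is a direction of ascent."""
    d = {i: 0 for i in nodes}                    # Step 1: d is the zero n-vector
    for _ in range(max_iter):
        v = find_1_tree(d)                       # Step 2: 1-tree T^k, k in K(pi, d)
        if sum(d[i] * v[i] for i in nodes) > 0:  # Step 3: STOP condition
            return d
        for i in nodes:
            d[i] += v[i]                         # Step 4: d_i <- d_i + v_ik
        # Step 5: GO TO 2 is handled by the loop
    return None                                  # failure to terminate suspected
```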
Using this as a guide, the implementation of this function was simple until I got to the terminating condition, which is a linear program discussed on page 1149 as
Thus, when failure to terminate is suspected, it is necessary to check whether no direction of ascent exists; by the Minkowski-Farkas lemma this is equivalent to the existence of nonnegative coefficients \(\alpha_k\) such that
\( \sum_{k \in K(\pi)} \alpha_kv_{i k} = 0, \quad i = 1, 2, \dots, n \)
This can be checked by linear programming.
While I was able to implement this without much issue, one very important constraint of the linear program was not mentioned here, but rather on the page before, during a proof. That constraint is
\[ \sum_{k \in K(\pi)} \alpha_k = 1 \]
After several hours of trying to debug the original linear program, I noticed the missing constraint. Once I added it, the linear program behaved correctly, terminating the program when a tour is found.
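For illustration, the feasibility check, including the easy-to-miss \(\sum_{k} \alpha_k = 1\) constraint, can be handed to a generic LP solver. This is a sketch using scipy.optimize.linprog, not the code from my actual implementation:

```python
import numpy as np
from scipy.optimize import linprog

def no_ascent_direction(V):
    """Return True when nonnegative alpha_k exist with sum(alpha) = 1 and
    sum_k alpha_k * v_ik = 0 for every i.

    V is an (n x |K|) array whose column k is the vector (v_1k, ..., v_nk).
    By the Minkowski-Farkas lemma, feasibility means no direction of ascent
    exists and the ascent method should terminate.
    """
    n, m = V.shape
    A_eq = np.vstack([V, np.ones((1, m))])  # V @ alpha = 0 and sum(alpha) = 1
    b_eq = np.zeros(n + 1)
    b_eq[-1] = 1.0
    # linprog keeps alpha >= 0 by default; the objective is irrelevant since
    # only feasibility matters.
    result = linprog(c=np.zeros(m), A_eq=A_eq, b_eq=b_eq)
    return result.success
```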
find_epsilon()
This function requires a completely different implementation compared to the one described in the Held and Karp paper.
The basic idea, in both my implementation for directed graphs and the description for undirected graphs, is finding edges which are substitutes for each other: an edge outside the 1-arborescence which can replace an edge in the arborescence such that the result is still a 1-arborescence.
The undirected version uses the idea of fundamental cycles in the tree to find the substitutes, and I tried to use this idea as well with the find_cycle()
function in the NetworkX library.
I executed the first iteration of the ascent method by hand and noticed that what I computed for all of the possible values of \(\epsilon\) and what the program found did not match.
I had found several that it had missed and it found several that I missed.
For the example graph, I found that the following edge pairs are substitutes, where the first edge is not in the 1-arborescence and the second is the edge in the 1-arborescence which it can replace, using the below minimum 1-arborescence.
\[ \begin{array}{l} (0, 1) \rightarrow (2, 1) \text{ valid: } \epsilon = 56 \\\ (0, 2) \rightarrow (4, 2) \text{ valid: } \epsilon = 25 \\\ (0, 3) \rightarrow (2, 3) \text{ valid: } \epsilon = 52 \\\ (0, 5) \rightarrow (1, 5) \text{ valid: } \epsilon = \frac{30 - 52}{0 - 0} \text{, not valid} \\\ (1, 3) \rightarrow (2, 3) \text{ valid: } \epsilon = 15.5 \\\ (2, 5) \rightarrow (1, 5) \text{ valid: } \epsilon = 5.5 \\\ (3, 1) \rightarrow (2, 1) \text{ valid: } \epsilon = 5.5 \\\ (3, 5) \rightarrow (1, 5) \text{ valid: } \epsilon = \frac{30 - 46}{-1 + 1} \text{, not valid} \\\ (4, 1) \rightarrow (2, 1) \text{ valid: } \epsilon = \frac{41 - 90}{1 - 1} \text{, not valid} \\\ (4, 3) \rightarrow (2, 3) \text{ valid: } \epsilon = \frac{30 - 95}{1 - 1} \text{, not valid} \\\ (4, 5) \rightarrow (1, 5) \text{ valid: } \epsilon = -25.5 \text{, not valid (negative }\epsilon) \\\ (5, 3) \rightarrow (2, 3) \text{ valid: } \epsilon = 25 \\\ \end{array} \]
I missed the following substitutes which the program did find.
\[ \begin{array}{l} (1, 0) \rightarrow (4, 0) \text{ valid: } \epsilon = 80 \\\ (1, 4) \rightarrow (0, 4) \text{ valid: } \epsilon = 73 \\\ (2, 0) \rightarrow (4, 0) \text{ valid: } \epsilon = \frac{17 - 60}{1 - 1} \text{, not valid} \\\ (2, 4) \rightarrow (0, 4) \text{ valid: } \epsilon = -18 \text{, not valid (negative }\epsilon) \\\ (3, 0) \rightarrow (4, 0) \text{ valid: } \epsilon = 28 \\\ (3, 4) \rightarrow (0, 4) \text{ valid: } \epsilon = 78 \\\ (5, 0) \rightarrow (4, 0) \text{ valid: } \epsilon = 35 \\\ (5, 4) \rightarrow (0, 4) \text{ valid: } \epsilon = \frac{17 - 81}{0 - 0} \text{, not valid} \\\ \end{array} \]
Notice that some substitutions do not cross over if we move in the direction of ascent, which are the pairs which have a zero as the denominator. Additionally, \(\epsilon\) is a distance, and the concept of a negative distance does not make sense. Interpreting a negative distance as a positive distance in the opposite direction, if we needed to move in that direction, the direction of ascent vector would be pointing the other way.
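The crossover computation behind the tables above can be written as a small helper. The parameterization here is an assumption of mine: each edge's perturbed cost is taken to grow linearly along the direction of ascent, so \(\epsilon\) solves a linear equation, and the zero-denominator and negative cases are rejected exactly as in the tables.

```python
def epsilon_for_substitute(cost_out, cost_in, dir_out, dir_in):
    """Distance along the direction of ascent at which the out-of-tree edge
    becomes as cheap as the in-tree edge it can replace.

    Solves cost_out + eps * dir_out == cost_in + eps * dir_in for eps.
    Returns None for a zero denominator (the costs never cross over) or a
    negative eps (the crossover lies behind the direction of ascent).
    """
    denominator = dir_in - dir_out
    if denominator == 0:
        return None
    eps = (cost_out - cost_in) / denominator
    return eps if eps >= 0 else None
```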
The reason that my list did not match the list of the program was because find_cycle()
did not always return the fundamental cycle containing the new edge.
If I called find_cycle()
on a vertex in the other cycle in the graph (in this case \({(0, 4), (4, 0)}\)), it would return that rather than the true fundamental cycle.
This prompted me to think about what really determines whether edges in a 1-arborescence are substitutes for each other. In every case where a substitute was valid, both of those edges led to the same vertex. If they did not, then the degree constraint of the arborescence would be violated, because we would not be replacing the edge leading into a node with another edge leading into the same node. This is true regardless of whether the edges are part of the same fundamental cycle or not.
Thus, find_epsilon()
now takes every edge in the graph which is not in the chosen 1-arborescence \(k \in K(\pi, d)\), finds the edge in \(k\) pointing to the same vertex, swaps them, and then checks that the degree constraint is not violated, that the result has the correct number of edges, and that it is still connected.
This is a more efficient method, and it found more valid substitutions as well, so I was hopeful that it would finally bring the returned solution down to the optimal one; perhaps the old version had been missing the correct value of \(\epsilon\) on even just one of the iterations.
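The swap test can be sketched with NetworkX primitives. This is an illustration of the idea, not the actual find_epsilon() code, and for brevity it only checks structural validity rather than computing \(\epsilon\):

```python
import networkx as nx

def valid_substitutes(G, arborescence):
    """Find (new edge, replaced edge) pairs for a 1-arborescence (sketch).

    For every edge of G outside the 1-arborescence, swap out the in-tree
    edge pointing at the same vertex and keep the pair only if the result
    still has the right number of edges, is connected, and has in-degree
    at most one everywhere.
    """
    pairs = []
    tree_edges = set(arborescence.edges)
    for u, v in G.edges:
        if (u, v) in tree_edges:
            continue
        # the tree edge entering the same vertex v, if any
        old = [(a, b) for a, b in tree_edges if b == v]
        if not old:
            continue
        candidate = nx.DiGraph(tree_edges - {old[0]} | {(u, v)})
        if (
            candidate.number_of_edges() == arborescence.number_of_edges()
            and nx.is_weakly_connected(candidate)
            and max(d for _, d in candidate.in_degree) <= 1
        ):
            pairs.append(((u, v), old[0]))
    return pairs
```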
It did not.
At this point I have no real course forward, only unappealing options. I have already verified find_epsilon() by executing the first iteration of the ascent method by hand, which took about 90 minutes. I could try to continue this process and hope that, while iteration 1 is executing correctly, I find some other bug in the code, but I doubt that I will ever reach the 9 iterations the program needs to find the faulty solution. I will be discussing the next steps with my GSoC mentors soon.
Held, M., Karp, R.M. The Traveling-Salesman Problem and Minimum Spanning Trees. Operations Research, 1970, Vol. 18 (6), pp. 1138-1162. https://www.jstor.org/stable/169411
]]>The ocean is a key component of the Earth climate system. It thus needs continuous real-time monitoring to help scientists better understand its dynamics and predict its evolution. All around the world, oceanographers have joined their efforts and set up a Global Ocean Observing System, of which Argo is a key component. Argo is a global network of nearly 4000 autonomous probes, or floats, measuring pressure, temperature, and salinity from the surface down to 2000 m depth every 10 days. The localisation of these floats is nearly random between the 60th parallels (see live coverage here). All data are collected by satellite in real time, processed by several data centers, and finally merged into a single dataset (collecting more than 2 million vertical profiles) made freely available to anyone.
In this particular case, we want to plot temperature data (at the surface and 1000 m deep) measured by those floats, for the period 2010-2020 and for the Mediterranean sea. We want this plot to be circular and animated; now you start to get the title of this post: animated polar plot.
First we need some data to work with. To retrieve our temperature values from Argo, we use Argopy, which is a Python library that aims to ease Argo data access, manipulation and visualization for standard users, as well as Argo experts and operators. Argopy returns xarray dataset objects, which make our analysis much easier.
import pandas as pd
import numpy as np
from argopy import DataFetcher as ArgoDataFetcher
argo_loader = ArgoDataFetcher(cache=True)
# Query surface and 1000m temp in Med sea with argopy
df1 = argo_loader.region(
[-1.2, 29.0, 28.0, 46.0, 0, 10.0, "2009-12", "2020-01"]
).to_xarray()
df2 = argo_loader.region(
[-1.2, 29.0, 28.0, 46.0, 975.0, 1025.0, "2009-12", "2020-01"]
).to_xarray()
Here we create some arrays we’ll use for plotting: we set up a date array and extract the day of the year and the year itself, which will be useful later. Then, to build our temperature arrays, we use xarray’s very useful where() and mean() methods. Finally, we build a pandas DataFrame, because it’s prettier!
# Weekly date array
daterange = np.arange("2010-01-01", "2020-01-03", dtype="datetime64[7D]")
dayoftheyear = pd.DatetimeIndex(
np.array(daterange, dtype="datetime64[D]") + 3
).dayofyear # middle of the week
activeyear = pd.DatetimeIndex(
np.array(daterange, dtype="datetime64[D]") + 3
).year # extract year
# Init final arrays
tsurf = np.zeros(len(daterange))
t1000 = np.zeros(len(daterange))
# Filling arrays
for i in range(len(daterange)):
i1 = (df1["TIME"] >= daterange[i]) & (df1["TIME"] < daterange[i] + 7)
i2 = (df2["TIME"] >= daterange[i]) & (df2["TIME"] < daterange[i] + 7)
tsurf[i] = df1.where(i1, drop=True)["TEMP"].mean().values
t1000[i] = df2.where(i2, drop=True)["TEMP"].mean().values
# Creating dataframe
d = {"date": np.array(daterange, dtype="datetime64[D]"), "tsurf": tsurf, "t1000": t1000}
ndf = pd.DataFrame(data=d)
ndf.head()
This produces:
date tsurf t1000
0 2009-12-31 0.0 0.0
1 2010-01-07 0.0 0.0
2 2010-01-14 0.0 0.0
3 2010-01-21 0.0 0.0
4 2010-01-28 0.0 0.0
Then it’s time to plot. For that, we first import what we need and set some useful variables.
import matplotlib.pyplot as plt
import matplotlib
plt.rcParams["xtick.major.pad"] = "17"
plt.rcParams["axes.axisbelow"] = False
matplotlib.rc("axes", edgecolor="w")
from matplotlib.lines import Line2D
from matplotlib.animation import FuncAnimation
from IPython.display import HTML
big_angle = 360 / 12 # How we split our polar space
date_angle = (
((360 / 365) * dayoftheyear) * np.pi / 180
) # For a day, a corresponding angle
# inner and outer ring limit values
inner = 10
outer = 30
# setting our color values
ocean_color = ["#ff7f50", "#004752"]
Now we want to style our axes the way we want. For that we build a function dress_axes that will be called during the animation process. Here we plot some bars with an offset (a combination of bottom and ylim below). Those bars are actually our background, and the offset allows us to plot a legend in the middle of the plot.
def dress_axes(ax):
ax.set_facecolor("w")
ax.set_theta_zero_location("N")
ax.set_theta_direction(-1)
# Here is how we position the months labels
middles = np.arange(big_angle / 2, 360, big_angle) * np.pi / 180
ax.set_xticks(middles)
ax.set_xticklabels(
[
"January",
"February",
"March",
"April",
"May",
"June",
"July",
"August",
"September",
"October",
"November",
"December",
]
)
ax.set_yticks([15, 20, 25])
ax.set_yticklabels(["15°C", "20°C", "25°C"])
# Changing radial ticks angle
ax.set_rlabel_position(359)
ax.tick_params(axis="both", color="w")
plt.grid(None, axis="x")
plt.grid(axis="y", color="w", linestyle=":", linewidth=1)
# Here is the bar plot that we use as background
bars = ax.bar(
middles,
outer,
width=big_angle * np.pi / 180,
bottom=inner,
color="lightgray",
edgecolor="w",
zorder=0,
)
plt.ylim([2, outer])
# Custom legend
legend_elements = [
Line2D(
[0],
[0],
marker="o",
color="w",
label="Surface",
markerfacecolor=ocean_color[0],
markersize=15,
),
Line2D(
[0],
[0],
marker="o",
color="w",
label="1000m",
markerfacecolor=ocean_color[1],
markersize=15,
),
]
ax.legend(handles=legend_elements, loc="center", fontsize=13, frameon=False)
# Main title for the figure
plt.suptitle(
"Mediterranean temperature from Argo profiles",
fontsize=16,
horizontalalignment="center",
)
From there we can draw the frame of our plot.
fig = plt.figure(figsize=(10, 10))
ax = fig.add_subplot(111, polar=True)
dress_axes(ax)
plt.show()
Then it’s finally time to plot our data. Since we want to animate the plot, we’ll build a function that will be called by FuncAnimation later on. Since the state of the plot changes at every time stamp, we have to redress the axes for each frame, which is easy with our dress_axes function. Then we plot our temperature data using basic plot(): thin lines for historical measurements, thicker lines for the current year.
def draw_data(i):
# Clear
ax.cla()
# Redressing axes
dress_axes(ax)
# Limit between the thin lines and the thick line: basically the current date minus 51 weeks.
# Why 51 and not 52? That creates a small gap before the current date, which is prettier.
i0 = np.max([i - 51, 0])
ax.plot(
date_angle[i0 : i + 1],
ndf["tsurf"][i0 : i + 1],
"-",
color=ocean_color[0],
alpha=1.0,
linewidth=5,
)
ax.plot(
date_angle[0 : i + 1],
ndf["tsurf"][0 : i + 1],
"-",
color=ocean_color[0],
linewidth=0.7,
)
ax.plot(
date_angle[i0 : i + 1],
ndf["t1000"][i0 : i + 1],
"-",
color=ocean_color[1],
alpha=1.0,
linewidth=5,
)
ax.plot(
date_angle[0 : i + 1],
ndf["t1000"][0 : i + 1],
"-",
color=ocean_color[1],
linewidth=0.7,
)
# Plotting a line to spot the current date easily
ax.plot([date_angle[i], date_angle[i]], [inner, outer], "k-", linewidth=0.5)
# Display the current year as a title, just beneath the suptitle
plt.title(str(activeyear[i]), fontsize=16, horizontalalignment="center")
# Test it
draw_data(322)
plt.show()
Finally it’s time to animate, using FuncAnimation. Then we save it as an mp4 file, or we display it in our notebook with HTML(anim.to_html5_video()).
anim = FuncAnimation(
fig, draw_data, interval=40, frames=len(daterange) - 1, repeat=False
)
# anim.save('ArgopyUseCase_MedTempAnimation.mp4')
HTML(anim.to_html5_video())
We are coming to the end of the first week of coding for the Summer of Code, and I have implemented two new, but related, features in NetworkX. In this post, I will discuss how I implemented them, some of the challenges, and how I tested them. Those two new features are a spanning tree iterator and a spanning arborescence iterator.
The arborescence iterator is the feature that I will be using directly in my GSoC project, but I thought that it was a good idea to implement the spanning tree iterator first, as it would be easier and I could refer directly back to the research paper as needed. The partition schemes between the two are the same, so once I figured it out for spanning trees, what I learned would port directly into the arborescence iterator, where I could focus on modifying Edmonds’ algorithm to respect the partition.
This was the first of the new features. It follows the algorithm detailed in a 2005 paper by Sörensen and Janssens titled An Algorithm to Generate all Spanning Trees of a Graph in Order of Increasing Cost, which can be found here [2].
Now, I needed to tweak the implementation of the algorithm because I wanted to implement a python iterator, so somebody can write
for tree in nx.SpanningTreeIterator(G):
pass
and that loop would return spanning trees starting with the ones of minimum cost and climbing to the ones of maximum cost.
In order to implement this feature, my first step was to ensure that once I knew what the edge partition of the graph was, I could find a minimum spanning tree which respected the partition. As a brief reminder, the edge partition creates two disjoint sets of edges: one which must appear in the resulting spanning tree and one which cannot appear in it. Edges which are neither included nor excluded are called open.
The easiest algorithm with which to implement this is Kruskal’s algorithm. The included edges are all added to the spanning tree first, and then the algorithm can join the components created by the included edges using the open edges.
This was easy to implement in NetworkX. Kruskal’s algorithm in NetworkX is a generator which returns the edges of the minimum spanning tree one at a time using a sorted list of edges. All that I had to do was change the sorting process so that the included edges were always at the front of that list; then the algorithm would always select them for the spanning tree, regardless of weight.
Additionally, since a general spanning tree of a graph is a partitioned tree where the partition has no included or excluded edges, I was able to convert the normal Kruskal’s implementation into a wrapper for my partition-respecting one in order to reduce redundant code.
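To make the sorting trick concrete, here is a minimal sketch. It is not the NetworkX source; the "partition" edge attribute and its "included" / "excluded" / "open" values are assumptions for the illustration.

```python
import networkx as nx

def partition_kruskal(G, partition="partition", weight="weight"):
    """Kruskal's algorithm respecting an edge partition (sketch).

    Included edges sort ahead of every open edge regardless of weight, so
    they always enter the forest first; excluded edges are skipped outright.
    """
    rank = {"included": 0, "open": 1}
    edges = sorted(
        (
            (u, v, d)
            for u, v, d in G.edges(data=True)
            if d.get(partition, "open") != "excluded"
        ),
        key=lambda e: (rank[e[2].get(partition, "open")], e[2].get(weight, 1)),
    )
    subtrees = nx.utils.UnionFind(G.nodes)
    tree = []
    for u, v, d in edges:
        if subtrees[u] != subtrees[v]:  # u and v are in different components
            subtrees.union(u, v)
            tree.append((u, v))
    return tree
```

On a triangle where the heaviest edge is included and the lightest edge is excluded, the included edge is chosen first despite its weight.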
As for the partitioning process itself, that proved to be a bit more tricky, mostly due to my own limited Python experience (I have only been working with Python since the start of the calendar year).
In order to implement the partitioning scheme I needed an ordered data structure, and I chose the PriorityQueue
class.
This was convenient, but for elements whose minimum spanning trees had the same weight it tried to compare the dictionaries holding the edge data, which is not a supported operation.
Thus, I implemented a dataclass where only the weight of the spanning tree is comparable.
This means that for ties in spanning tree weight, the oldest partition with that weight is considered first.
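The dataclass trick looks roughly like this; `Partition` and its field names are illustrative, not necessarily the exact ones in my code.

```python
from dataclasses import dataclass, field
from queue import PriorityQueue
from typing import Any

@dataclass(order=True)
class Partition:
    """Priority queue entry ordered only by spanning tree weight.

    compare=False keeps the edge-partition dict out of comparisons, so two
    entries with equal weight never trigger an unsupported dict comparison.
    """
    mst_weight: float
    partition_dict: Any = field(compare=False)
```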
Once the implementation details were ironed out, I moved on to testing.
At the time of this writing, I have tested the SpanningTreeIterator
on the sample graph in the Sörensen and Janssens paper.
That graph is
It has eight spanning trees, ranging in weight from 17 to 23 which are all shown below.
Since this graph only has a few spanning trees, it was easy to explicitly test that each graph returned from the iterator was the next one in the sequence. The iterator also works backwards, so calling
for tree in nx.SpanningTreeIterator(G, minimum=False):
pass
starts with the maximum spanning tree and works down to the minimum spanning tree.
The code for the spanning tree iterator can be found here starting around line 761.
The arborescence iterator is what I actually need for my GSoC project, and as expected was more complicated to implement.
In my original post, titled Finding All Minimum Arborescences, I discussed cases that Edmonds’ algorithm [1] would need to handle and proposed a change to the desired_edge
method.
These changes were easy to make, but they were not the full extent of the changes needed, as I had originally thought. The original graph from Edmonds’ 1967 paper is below.
In my first test, which was limited to the minimum spanning arborescence of a random partition I created, the results were close. Below, the blue edges are included and the red one is excluded.
The minimum spanning arborescence initially is shown below.
While the \((3, 0)\) edge is properly excluded and the \((2, 3)\) edge is included, the \((6, 2)\) edge is not present in the arborescence (shown as a dashed edge). Tracking this problem down was a hassle, but the way that Edmonds’ algorithm works is that a cycle, which would have been present if the \((6, 2)\) edge were included, is collapsed into a single vertex as the algorithm moves to the next iteration. Once that cycle is collapsed into a vertex, the algorithm still has to choose how to enter that vertex, and the choice is based on the best edge as before (this is step I1 in [1]). Then, when the algorithm expands the cycle back out, it removes the cycle edge which enters the same vertex as the chosen incoming edge, which in this case would be \((6, 2)\), shown in red in the next image. Represented visually, the cycle with incoming edges would look like
And that would be collapsed into a new vertex, \(N\) from which the incoming edge with weight 12 would be selected.
In this example we want to forbid the algorithm from picking the edge with weight 12, so that when the cycle is reconstructed the included edge \((6, 2)\) is still present. Once we make one of the incoming edges an included edge, we know from the definition of an arborescence that we cannot get to that vertex using any other edge. They are all effectively excluded, so once we find an included edge directed towards a vertex we can make all of the other incoming edges excluded.
Returning to the example, the collapsed vertex \(N\) would have the edge of weight 12 excluded and would pick the edge of weight 13.
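The rule from the last paragraph can be sketched as a small helper operating on an assumed "partition" edge attribute:

```python
import networkx as nx

def close_off_vertex(G, included_edge, partition="partition"):
    """Mark (u, v) included and every other edge entering v excluded (sketch).

    In an arborescence each vertex has at most one incoming edge, so once
    (u, v) must appear, no other edge into v can.
    """
    u, v = included_edge
    G[u][v][partition] = "included"
    for w in G.predecessors(v):
        if w != u:
            G[w][v][partition] = "excluded"
```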
At this point the iterator would find 236 arborescences with costs ranging from 96 to 125. I thought that I was very close to being finished, and I knew that the cost of the minimum spanning arborescence was 96, until I checked the weight of the maximum spanning arborescence: 131.
This meant that I was removing partitions which contained a valid arborescence before they were added to the priority queue.
My check_partition
method within the ArborescenceIterator
was rejecting partitions which it judged could not contain a spanning arborescence, returning False for them, but its checks were too aggressive. Rather than try to debug what I thought was a good method, I decided to change my process.
I moved the final check into the write_partition
method and then stopped using the check_partition
method.
If an edge partition does not have a spanning arborescence, the partition_spanning_arborescence
function will return None
and I discard the partition.
This approach is more computationally intensive, but it increased the number of returned spanning arborescences from 236 to 680, and the range expanded to the proper 96 to 131.
But how do I know that it isn’t skipping arborescences within that range? Since 680 arborescences are too many to check explicitly, I decided to write another test case. This one would check that the number of arborescences was correct and that the weight sequence never decreases.
In order to check the number of arborescences, I decided to take a brute force approach. There are
\[ \binom{18}{8} = 43,758 \]
possible combinations of edges which could be arborescences. That’s a lot of combinations, more than I wanted to check by hand, so I wrote a short Python script.
from itertools import combinations
import networkx as nx
edgelist = [
(0, 2),
(0, 4),
(1, 0),
(1, 5),
(2, 1),
(2, 3),
(2, 5),
(3, 0),
(3, 4),
(3, 6),
(4, 7),
(5, 6),
(5, 8),
(6, 2),
(6, 8),
(7, 3),
(7, 6),
(8, 7),
]
combo_count = 0
arbor_count = 0
for combo in combinations(edgelist, 8):
combo_count += 1
combo_test = nx.DiGraph()
combo_test.add_edges_from(combo)
if nx.is_arborescence(combo_test):
arbor_count += 1
print(
f"There are {combo_count} possible combinations of eight edges which "
f"could be an arborescence."
)
print(f"Of those {combo_count} combinations, {arbor_count} are arborescences.")
The output of this script is
There are 43758 possible combinations of eight edges which could be an arborescence.
Of those 43758 combinations, 680 are arborescences.
So now I knew how many arborescences were in the graph, and it matched the number returned from the iterator. Thus, I believe that the iterator is working well.
The iterator code is here and starts around line 783. It can be used in the same way as the spanning tree iterator.
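The “sequence never decreases” half of that test reduces to a tiny check over the weight sequence the iterator yields; the helper below is generic, and the commented usage assumes the iterator class name from my branch.

```python
def is_nondecreasing(weights):
    """True when no weight in the sequence is smaller than its predecessor."""
    previous = float("-inf")
    for w in weights:
        if w < previous:
            return False
        previous = w
    return True

# Applied to the iterator, something like:
# assert is_nondecreasing(
#     arborescence.size(weight="weight")
#     for arborescence in nx.ArborescenceIterator(G)
# )
```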
Attached is a sample output from the iterator detailing all 680 arborescences of the test graph. Since Jekyll will not let me put up the txt file, I had to convert it into a pdf, which is 127 pages long, showing the 6800 lines of output from displaying all of the arborescences.
[1] J. Edmonds, Optimum Branchings, Journal of Research of the National Bureau of Standards, 1967, Vol. 71B, p.233-240, https://archive.org/details/jresv71Bn4p233
[2] G.K. Janssens, K. Sörensen, An algorithm to generate all spanning trees in order of increasing cost, Pesquisa Operacional, 2005-08, Vol. 25 (2), pp. 219-229, https://www.scielo.br/j/pope/a/XHswBwRwJyrfL88dmMwYNWp/?lang=en
]]>I, Sidharth Bansal, had been waiting since the end of March for the coding period to start so that I could get my hands dirty with the code. Finally, the coding period has started, and two weeks have passed. This blog contains information about the progress so far, from 1 June to 14 June 2020.
Initially, we thought of creating mpl-test and mpl packages. The mpl-test package would contain the test suite and baseline images, while the other package would contain the parts of the repository other than the test- and baseline-image-related files and folders. We changed our decision to creating mpl and mpl-baseline-images packages, as we don’t need a separate package for the entire test suite. Our main aim was to eliminate baseline_images from the repository. The mpl-baseline-images package will contain the data [baseline images] and related information. The other package will contain the files and folders other than baseline images. We are now trying to create the following structure for the repository:
mpl/
setup.py
lib/mpl/...
lib/mpl/tests/... [contains the tests .py files]
baseline_images/
setup.py
data/... [contains the image files]
It will involve installing the baseline images as their own package (pip install mpl-baseline-images). I am creating a prototype first with two packages: a main package and a sub-wheel package. Once the demo app works well on Test PyPI, we can make similar changes to the main mpl repository. The structure of the demo app is analogous to the work needed to separate baseline-images into a new mpl-baseline-images package, as given below:
testrepo/
setup.py
lib/testpkg/__init__.py
baseline_images/setup.py
baseline_images/testdata.txt
This will also include the related MANIFEST files and setup.cfg.template files. The setup.py will also contain logic for excluding the baseline_images folder from the main mpl package.
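As a sketch, the exclusion logic in the demo app's main setup.py could be as simple as telling setuptools to skip the baseline_images tree; the names below follow the demo layout above and are illustrative, not the final packaging code.

```python
# Hypothetical setup.py for the main demo package; the baseline_images
# tree is excluded so it ships only in the separate wheel.
from setuptools import find_packages, setup

setup(
    name="testpkg",
    version="0.0.1",
    package_dir={"": "lib"},
    packages=find_packages(
        where="lib", exclude=["baseline_images", "baseline_images.*"]
    ),
)
```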
After the current PR is merged, we will focus on moving the baseline images out of the main repository and into the mpl-baseline-images package. Then we will make similar changes for the Travis CI.
Meetings are held every Tuesday and Friday at 8:30 pm IST via Zoom. Meeting notes are available on HackMD.
I am grateful to be part of such a great community. Project is really interesting and challenging :) Thanks Antony and Hannah for helping me so far.
]]>There is only one thing that I need to figure out before the first coding period for GSoC starts on Monday: how to find all of the minimum arborescences of a graph. This is the set \(K(\pi)\) in the Held and Karp paper from 1970 which can be refined down to \(K(\pi, d)\) or \(K_{X, Y}(\pi)\) as needed. For more information as to why I need to do this, please see my last post here.
This is a place where my contributions to NetworkX to implement the Asadpour algorithm [1] for the directed traveling salesman problem will be useful to the rest of the NetworkX community (I hope). The research paper that I am going to template this off of is this 2005 paper by Sörensen and Janssens titled An Algorithm to Generate all Spanning Trees of a Graph in Order of Increasing Cost [4].
The basic idea here is to implement their algorithm and then generate spanning trees until we find the first one with a cost greater than that of the first one generated, which we know is a minimum, so that we have found all of the minimum spanning trees. I know what you guys are saying: “Matt, this paper discusses spanning trees, not spanning arborescences, how is this helpful?”. Well, the heart of this algorithm is to partition the edges into excluded edges, which cannot appear in the tree; included edges, which must appear in the tree; and open edges, which can be but are not required to be in the tree. Once we have a partition, we need to be able to find a minimum spanning tree or minimum spanning arborescence that respects the partitioned edges.
In NetworkX, the minimum spanning arborescences are generated using Chu-Liu/Edmonds’ Algorithm developed by Yoeng-Jin Chu and Tseng-Hong Liu in 1965 and independently by Jack Edmonds in 1967. I believe that Edmonds’ Algorithm [2] can be modified to require an arc to be either included or excluded from the resulting spanning arborescence, thus allowing me to implement Sörensen and Janssens’ algorithm for directed graphs.
First, let’s explore whether the partition scheme discussed in the Sörensen and Janssens paper [4] will work for a directed graph. The critical ideas for creating the partitions are given on pages 221 and 222 and are as follows:
Given an MST of a partition, this partition can be split into a set of resulting partitions in such a way that the following statements hold:
- the intersection of any two resulting partitions is the empty set,
- the MST of the original partition is not an element of any of the resulting partitions,
- the union of the resulting partitions is equal to the original partition, minus the MST of the original partition.
In order to achieve these conditions, they define the generation of the partitions using this definition for a minimum spanning tree
\[ s(P) = \{(i_1, j_1), \dots, (i_r, j_r), (t_1, v_1), \dots, (t_{n-r-1}, v_{n-r-1})\} \]
where the \((i, j)\) edges are the included edges of the original partition and the \((t, v)\) edges are from the open edges of the original partition. Now, to create the next set of partitions, take each of the \((t, v)\) edges sequentially and introduce them one at a time, making each edge an excluded edge in the first partition it appears in and an included edge in all subsequent partitions. This will produce something to the effect of
\[ \begin{array}{l} P_1 = \{(i_1, j_1), \dots, (i_r, j_r), (\overline{m_1, p_1}), \dots, (\overline{m_l, p_l}), (\overline{t_1, v_1})\} \\\ P_2 = \{(i_1, j_1), \dots, (i_r, j_r), (t_1, v_1), (\overline{m_1, p_1}), \dots, (\overline{m_l, p_l}), (\overline{t_2, v_2})\} \\\ P_3 = \{(i_1, j_1), \dots, (i_r, j_r), (t_1, v_1), (t_2, v_2), (\overline{m_1, p_1}), \dots, (\overline{m_l, p_l}), (\overline{t_3, v_3})\} \\\ \vdots \\\ \begin{multline*} P_{n-r-1} = \{(i_1, j_1), \dots, (i_r, j_r), (t_1, v_1), \dots, (t_{n-r-2}, v_{n-r-2}), (\overline{m_1, p_1}), \dots, (\overline{m_l, p_l}), \\\ (\overline{t_{n-r-1}, v_{n-r-1}})\} \end{multline*} \\\ \end{array} \]
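The construction of \(P_1, \dots, P_{n-r-1}\) is mechanical enough to sketch directly; the dict-of-lists representation of a partition here is an assumption for the illustration.

```python
def split_partition(included, excluded, open_in_tree):
    """Yield the child partitions of a partition whose MST has been found.

    `included` and `excluded` are the parent's fixed edges; `open_in_tree`
    lists the open edges (t_1, v_1), ... used by the parent's minimum
    spanning tree.  Child m excludes the m-th such edge and includes all
    earlier ones, so no child contains the parent's MST and no spanning
    tree lies in two children.
    """
    for m, edge in enumerate(open_in_tree):
        yield {
            "included": list(included) + list(open_in_tree[:m]),
            "excluded": list(excluded) + [edge],
        }
```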
Now, if we extend this to a directed graph, our included and excluded edges become included and excluded arcs, but the definition of the spanning arborescence of a partition does not change. Let \(s_a(P)\) be the minimum spanning arborescence of a partition \(P\). Then
\[ s_a(P) = \{(i_1, j_1), \dots, (i_r, j_r), (t_1, v_1), \dots, (t_{n-r-1}, v_{n-r-1})\} \]
\(s_a(P)\) is still constructed from all of the included arcs of the partition and a subset of the open arcs of that partition. If we partition in the same manner as the Sörensen and Janssens paper [4], then no spanning arborescence can both include and exclude a given arc, and such a conflict exists for every combination of partitions, so the intersection of any two resulting partitions is empty.
Clearly the original arborescence, which includes all of the arcs \((t_1, v_1), \dots, (t_{n-r-1}, v_{n-r-1})\), cannot be an element of any of the resulting partitions.
Finally, there is the claim that the union of the resulting partitions is the original partition minus the original minimum spanning tree. Being honest here, this claim took a while for me to understand. In fact, I had a whole paragraph talking about how this claim doesn’t make sense before I suddenly realized that it does. The important thing to remember is that the union of all of the partitions isn’t the union of their sets of included and excluded edges (which is where I went wrong the first time); it is a set of spanning trees. The original partition contains many spanning trees, one or more of which are minimum, and each tree in the partition is a unique subset of the edges of the original graph. Because each of the resulting partitions excludes one of the edges of the original partition’s minimum spanning tree, we know that that minimum spanning tree is not an element of the union of the resulting partitions. However, every other spanning tree in the original partition differs from the selected minimum one by at least one edge, so it is a member of at least one of the resulting partitions: specifically, the one whose excluded edge is an edge of the selected minimum spanning tree which that tree does not contain.
So now we know that the same partition scheme which works for undirected graphs will work for directed ones. We need to modify Edmonds’ algorithm to mandate that certain arcs be included and others excluded. To start, a review of this algorithm is in order. The original description is given on pages 234 and 235 of Jack Edmonds’ 1967 paper Optimum Branchings [2], and roughly speaking it has three major steps.
Now that we are familiar with the minimum arborescence algorithm, we can discuss modifying it to force it to include certain edges or reject others. The changes will be primarily located in step 1. Under the normal operation of the algorithm, the consideration which happens at each vertex might look like this.
Here the bolded arrow is chosen by the algorithm, as it is the incoming arc with minimum weight. Now, if we were required to include a different arc, say the one of weight 6, we would want that behavior even though it is, strictly speaking, not optimal. Similarly, if the arc of weight 2 were excluded, we would also want to pick the arc of weight 6. Below, the excluded arc is a dashed line.
But realistically, these are routine cases that would not be difficult to implement. A more interesting case would be if all of the arcs were excluded or if more than one were included.
Under this case, there is no spanning arborescence for the partition because the graph is not connected. The Sörensen and Janssens paper characterizes these as empty partitions, and they are ignored.
In this case, things start to get a bit tricky. With two (or more) included arcs leading to this vertex, it is by definition not an arborescence, as according to Edmonds on page 233:
A branching is a forest whose edges are directed so that each is directed toward a different node. An arborescence is a connected branching.
At first I thought that this case might still be valid because it could result in the creation of a cycle, but I realize now that in step 3 of Edmonds’ algorithm one of those arcs would be removed anyway. Thus, any partition with multiple included arcs leading to a single vertex is empty by definition. While there are ways in which the algorithm can handle the inclusion of multiple arcs, one (or more) of them will, by the definition of an arborescence, be deleted by the end of the algorithm.
I propose that these partitions be screened out before we hand off to Edmonds’ algorithm to find the arborescences.
As such, Edmonds’ algorithm will need to be modified for the cases of at most one included edge per vertex and any number of excluded edges per vertex.
The critical part of altering Edmonds’ Algorithm is contained within the desired_edge function in the NetworkX implementation, starting on line 391 in algorithms.tree.branchings. The whole function is as follows.
def desired_edge(v):
    """
    Find the edge directed toward v with maximal weight.
    """
    edge = None
    weight = -INF
    for u, _, key, data in G.in_edges(v, data=True, keys=True):
        new_weight = data[attr]
        if new_weight > weight:
            weight = new_weight
            edge = (u, v, key, new_weight)
    return edge, weight
The function would be changed to automatically return an included arc and to skip considering any excluded arcs. Because this is an inner function, we can access parameters passed to the parent function, such as something along the lines of partition=None, where the value of partition names the edge attribute which is true if the arc is included and false if it is excluded. Open edges would not need this attribute, or could use None.
Creating an enum is also possible, which would unify the language; I will talk to my GSoC mentors about how it would fit into the NetworkX ecosystem.
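As a sketch of what that enum might look like (the name Partition and its members are placeholders until I discuss it with my mentors):

```python
from enum import Enum

class Partition(Enum):
    """State of an arc relative to a partition (hypothetical naming)."""
    OPEN = 0      # the arc may or may not appear in the arborescence
    INCLUDED = 1  # the arc must appear in the arborescence
    EXCLUDED = 2  # the arc must not appear in the arborescence
```

An enum avoids the ambiguity of truthiness tests, since `False` and a missing attribute would otherwise be easy to conflate.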
A revised version of desired_edge using the true and false scheme would then look like this:
def desired_edge(v):
    """
    Find the edge directed toward v with maximal weight.
    """
    edge = None
    weight = -INF
    for u, _, key, data in G.in_edges(v, data=True, keys=True):
        new_weight = data[attr]
        # An included arc must be chosen regardless of its weight.
        if data.get(partition):
            return (u, v, key, new_weight), new_weight
        # Skip excluded arcs; open arcs carry None or no attribute at all.
        if new_weight > weight and data.get(partition) is not False:
            weight = new_weight
            edge = (u, v, key, new_weight)
    return edge, weight
And a version using the enum might look like this:
def desired_edge(v):
    """
    Find the edge directed toward v with maximal weight.
    """
    edge = None
    weight = -INF
    for u, _, key, data in G.in_edges(v, data=True, keys=True):
        new_weight = data[attr]
        # An included arc must be chosen regardless of its weight.
        if data.get(partition) is Partition.INCLUDED:
            return (u, v, key, new_weight), new_weight
        # Skip excluded arcs; open arcs are anything not marked EXCLUDED.
        if new_weight > weight and data.get(partition) is not Partition.EXCLUDED:
            weight = new_weight
            edge = (u, v, key, new_weight)
    return edge, weight
Once Edmonds’ algorithm has been modified to be able to use partitions, the pseudocode from the Sörensen and Janssens paper would be applicable.
Input: Graph G(V, E) and weight function w
Output: Output_File (all spanning trees of G, sorted in order of increasing cost)
List = {A}
Calculate_MST(A)
while MST ≠ ∅ do
    Get partition Ps in List that contains the smallest spanning tree
    Write MST of Ps to Output_File
    Remove Ps from List
    Partition(Ps)
And the corresponding Partition function being:
P1 = P2 = P
for each edge i in P do
    if i not included in P and not excluded from P then
        make i excluded from P1
        make i included in P2
        Calculate_MST(P1)
        if Connected(P1) then
            add P1 to List
        P1 = P2
I would need to change the format of the first code block, as I would like it to be a Python iterator so that a for loop would be able to iterate through all of the spanning arborescences and then stop once the cost increases, in order to limit it to only minimum spanning arborescences.
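As a sketch of that consuming side, assuming a hypothetical iterator that yields (cost, arborescence) pairs in order of increasing cost, the early-stopping filter could look like:

```python
def minimum_arborescences(arborescences_in_cost_order):
    """Yield only the minimum-cost entries from an iterable of
    (cost, arborescence) pairs sorted by increasing cost."""
    minimum_cost = None
    for cost, arborescence in arborescences_in_cost_order:
        if minimum_cost is None:
            minimum_cost = cost
        if cost > minimum_cost:
            break  # costs only increase from here, so stop early
        yield cost, arborescence

# Example with three arborescences, two of which share the minimum cost.
result = list(minimum_arborescences([(5, "A"), (5, "B"), (7, "C")]))
```

The generator never exhausts the underlying iterator, which is exactly the behavior we want when enumerating partitions is expensive.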
[1] A. Asadpour, M. X. Goemans, A. Madry, S. Oveis Gharan, and A. Saberi, An O(log n / log log n)-approximation algorithm for the asymmetric traveling salesman problem, Operations Research, 65 (2017), pp. 1043-1061, https://homes.cs.washington.edu/~shayan/atsp.pdf.
[2] J. Edmonds, Optimum Branchings, Journal of Research of the National Bureau of Standards, 1967, Vol. 71B, pp. 233-240, https://archive.org/details/jresv71Bn4p233
[3] M. Held, R. M. Karp, The traveling-salesman problem and minimum spanning trees, Operations Research, 1970-11-01, Vol. 18 (6), pp. 1138-1162, https://www.jstor.org/stable/169411
[4] G. K. Janssens, K. Sörensen, An algorithm to generate all spanning trees in order of increasing cost, Pesquisa Operacional, 2005-08, Vol. 25 (2), pp. 219-229, https://www.scielo.br/j/pope/a/XHswBwRwJyrfL88dmMwYNWp/?lang=en
After talking with my GSoC mentors about what we all believe to be the most difficult part of the Asadpour algorithm, the Held-Karp relaxation, we came to several conclusions:
Thus, alternative methods for solving the Held-Karp relaxation needed to be investigated. To this end, we turned to the original 1970 paper by Held and Karp, The Traveling Salesman Problem and Minimum Spanning Trees to see how they proposed solving the relaxation (Note that this paper was published before the ellipsoid algorithm was applied to linear programming in 1979). The Held and Karp paper discusses three methods for solving the relaxation:
But before we explore the methods that Held and Karp discuss, we need to ensure that these methods still apply to solving the Held-Karp relaxation within the context of the Asadpour paper. The definition of the Held-Karp relaxation that I have been using on this blog comes from the Asadpour paper, section 3 and is listed below.
\[ \begin{array}{c l l} \text{min} & \sum_{a} c(a)x_a \\\ \text{s.t.} & x(\delta^+(U)) \geqslant 1 & \forall\ U \subset V \text{ and } U \not= \emptyset \\\ & x(\delta^+(v)) = x(\delta^-(v)) = 1 & \forall\ v \in V \\\ & x_a \geqslant 0 & \forall\ a \end{array} \]
The closest match to this program in the Held and Karp paper is their linear program 3, which is a linear programming representation of the entire traveling salesman problem, not solely the relaxed version. Note that Held and Karp were dealing with the symmetric TSP (STSP) while Asadpour is addressing the asymmetric or directed TSP (ATSP).
\[ \begin{array}{c l l} \text{min} & \sum_{1 \leq i < j \leq n} c_{i j}x_{i j} \\ \text{s.t.} & \sum_{j > i} x_{i j} + \sum_{j < i} x_{j i} = 2 & (i = 1, 2, \dots, n) \\ & \sum_{i \in S,\ j \in S,\ i < j} x_{i j} \leq |S| - 1 & \text{for any proper subset } S \subset \{2, 3, \dots, n\} \\ & 0 \leq x_{i j} \leq 1 & (1 \leq i < j \leq n) \\ & x_{i j} \text{ integer} \\ \end{array} \]
The last two constraints of the second linear program are correctly bounded and fit within the scope of the original problem, while the first two constraints do most of the work in finding a TSP tour. Additionally, changing the last two constraints to simply \(x_{i j} \geq 0\) gives the Held-Karp relaxation. The first constraint, \(\sum_{j > i} x_{i j} + \sum_{j < i} x_{j i} = 2\), ensures that for every vertex in the resulting tour there is one edge to arrive by and one edge to leave by. This matches the second constraint in the Asadpour ATSP relaxation. The second constraint in the Held and Karp formulation is another form of the subtour elimination constraint seen in the Asadpour linear program.
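Writing this out explicitly, dropping the integrality requirement and replacing the bounds with nonnegativity gives the relaxed form of Held and Karp’s program 3:

\[ \begin{array}{c l l} \text{min} & \sum_{1 \leq i < j \leq n} c_{i j}x_{i j} \\ \text{s.t.} & \sum_{j > i} x_{i j} + \sum_{j < i} x_{j i} = 2 & (i = 1, 2, \dots, n) \\ & \sum_{i \in S,\ j \in S,\ i < j} x_{i j} \leq |S| - 1 & \text{for any proper subset } S \subset \{2, 3, \dots, n\} \\ & x_{i j} \geq 0 & (1 \leq i < j \leq n) \\ \end{array} \]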
Held and Karp also state that
In this section, we show that minimizing the gap \(f(\pi)\) is equivalent to solving this program without the integer constraints.
on page 1141, so it would appear that solving one of the equivalent programs that Held and Karp formulate should work here.
The Column Generation technique seeks to solve linear program 2 from the Held and Karp paper, stated as
\[ \begin{array}{c l} \text{min} & \sum_{k} c_ky_k \\\ \text{s.t.} & y_k \geq 0 \\\ & \sum_k y_k = 1 \\\ & \sum_{i = 2}^{n - 1} (-v_{i k})y_k = 0 \\\ \end{array} \]
Where \(v_{i k}\) is the degree of vertex \(i\) in 1-Tree \(k\) minus two, or \(v_{i k} = d_{i k} - 2\) and each variable \(y_k\) corresponds to a 1-Tree \(T^k\). The associated cost \(c_k\) for each tree is the weight of \(T^k\).
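These quantities are easy to compute from a 1-Tree. Here is a small sketch, assuming the 1-Tree is given as a list of weighted edges (u, v, w) over vertices 1 through n; one_tree_column is a made-up helper name, not an existing API.

```python
def one_tree_column(tree_edges, n):
    """Return the cost c_k and the list [v_{2k}, ..., v_{n-1,k}]
    for a 1-Tree k given as (u, v, w) edge triples."""
    cost = sum(w for _, _, w in tree_edges)
    degree = {i: 0 for i in range(1, n + 1)}
    for u, v, _ in tree_edges:
        degree[u] += 1
        degree[v] += 1
    # v_{ik} is the degree of vertex i minus two, for i = 2, ..., n - 1.
    return cost, [degree[i] - 2 for i in range(2, n)]

# Example: the 4-cycle 1-2-3-4-1 with unit weights is a 1-Tree in which
# every vertex has degree exactly 2, so all v_{ik} vanish.
cost, v_k = one_tree_column([(1, 2, 1), (2, 3, 1), (3, 4, 1), (4, 1, 1)], 4)
```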
The rest of this method uses a simplex algorithm to solve the linear program. We only focus on the edges which are in each of the 1-Trees, giving each column the form
\[ \begin{bmatrix} 1 & -v_{2k} & -v_{3k} & \dots & -v_{n-1,k} \end{bmatrix}^T \]
and the column which enters the solution corresponds to the 1-Tree for which \(c_k + \theta + \sum_{j=2}^{n-1} \pi_j v_{j k}\) is a minimum, where \(\theta\) and the \(\pi_j\) come from the vector of ‘shadow prices’ \((\theta, \pi_2, \pi_3, \dots, \pi_{n-1})\). The basis is \((n - 1) \times (n - 1)\), and we can find the 1-Tree to add to it using a minimum 1-Tree algorithm which Held and Karp say can be done in \(O(n^2)\) steps.
I am already familiar with the simplex method, so I will not detail its implementation here.
This technique is slow to converge. Held and Karp programmed it on an IBM System/360 and were able to solve problems consistently for up to \(n = 12\). On a modern computer the clock rate is somewhere between 210 and 101,500 times faster (depending on the model of IBM System/360 used), so we expect better performance, but cannot say at this time how much of an improvement.
They also talk about a heuristic procedure in which a vertex is eliminated from the program whenever the choice of its adjacent vertices was ‘evident’. Technical details for the heuristic were essentially non-existent, but
The procedure showed promise on examples up to \(n = 48\), but was not explored systematically
This paper from Held and Karp is about minimizing \(f(\pi)\) where \(f(\pi)\) is the gap between the permuted 1-Trees and a TSP tour. One way to do this is to maximize the dual of \(f(\pi)\) which is written as \(\text{max}_{\pi}\ w(\pi)\) where
\[ w(\pi) = \text{min}_k\ (c_k + \sum_{i=1}^{i=n} \pi_iv_{i k}) \]
This method uses the set of indices of 1-Trees that are of minimum weight with respect to the weights \(\overline{c}_{i j} = c_{i j} + \pi_i + \pi_j\).
\[ K(\pi) = \{k\ |\ w(\pi) = c_k + \sum_{i=1}^{i=n} \pi_i v_{i k}\} \]
If \(\pi\) is not a maximum point of \(w\), then there will be a vector \(d\) called the direction of ascent at \(\pi\). This is theorem 3 and a proof is given on page 1148. Let the functions \(\Delta(\pi, d)\) and \(K(\pi, d)\) be defined as below.
\[ \Delta(\pi, d) = \text{min}_{k \in K(\pi)}\ \sum_{i=1}^{i=n} d_i v_{i k} \\ K(\pi, d) = \{k\ |\ k \in K(\pi) \text{ and } \sum_{i=1}^{i=n} d_i v_{i k} = \Delta(\pi, d)\} \]
Now for a sufficiently small \(\epsilon\), \(K(\pi + \epsilon d) = K(\pi, d)\) and \(w(\pi + \epsilon d) = w(\pi) + \epsilon \Delta(\pi, d)\), or the value of \(w(\pi)\) increases and the growth rate of the minimum 1-Trees is at its smallest so we maintain the low weight 1-Trees and progress farther towards the optimal value. Finally, let \(\epsilon(\pi, d)\) be the following quantity
\[ \epsilon(\pi, d) = \text{max}\ \{\epsilon\ |\ \text{for } \epsilon' < \epsilon,\ K(\pi + \epsilon' d) = K(\pi, d)\} \]
So in other words, \(\epsilon(\pi, d)\) is the maximum distance in the direction of \(d\) that we can travel to maintain the desired behavior.
If we can find \(d\) and \(\epsilon\) then we can set \(\pi = \pi + \epsilon d\) and move to the next iteration of the ascent method. Held and Karp did give a protocol for finding \(d\) on page 1149.
There are two things which must be refined about this procedure in order to make it implementable in Python.
Held and Karp have provided guidance on both of these points.
In section 6 on matroids, we are told to use a method developed by Dijkstra in A Note on Two Problems in Connexion with Graphs, but in this particular case that is not the most helpful.
I have found this document, but there is a function called minimum_spanning_arborescence already within NetworkX which we can use to create a minimum 1-Arborescence.
That process would be to find a minimum spanning arborescence on only the vertices in \({2, 3, \dots, n}\) and then connect vertex 1 to create the cycle.
In order to connect vertex 1, we would choose the outgoing arc with the smallest cost and the incoming arc with the smallest cost.
Finally, at the maximum value of \(w(\pi)\), there is no direction of ascent and the procedure outlined by Held and Karp will not terminate. Their article states on page 1149 that
Thus, when failure to terminate is suspected, it is necessary to check whether no direction of ascent exists; by the Minkowski-Farkas lemma this is equivalent to the existence of nonnegative coefficients \(\alpha_k\) such that
\( \sum_{k \in K(\pi)} \alpha_kv_{i k} = 0, \quad i = 1, 2, \dots, n \)
This can be checked by linear programming.
While it is nice that they gave that summation, the rest of the linear program would have been useful too. The entire linear program would be written as follows
\[ \begin{array}{c l l} \text{max} & \sum_k \alpha_k \\ \text{s.t.} & \sum_{k \in K(\pi)} \alpha_k v_{i k} = 0 & \forall\ i \in \{1, 2, \dots, n\} \\ & \alpha_k \geq 0 & \forall\ k \\ \end{array} \]
This linear program is not in standard form, but it is not difficult to convert it. First, change the maximization to a minimization by minimizing the negative.
\[ \begin{array}{c l l} \text{min} & \sum_k -\alpha_k \\ \text{s.t.} & \sum_{k \in K(\pi)} \alpha_k v_{i k} = 0 & \forall\ i \in \{1, 2, \dots, n\} \\ & \alpha_k \geq 0 & \forall\ k \\ \end{array} \]
While the constraint does not look like standard form at first glance, a closer look reveals that it is. Each column in the matrix form corresponds to one entry of \(\alpha_k\), and each row represents a different value of \(i\), or a different vertex. The one constraint is actually a collection of very similar ones, which could be written as
\[ \begin{array}{c l} \text{min} & \sum_k -\alpha_k \\\ \text{s.t.} & \sum_{k \in K(\pi)} \alpha_k v_{1 k} = 0 \\\ & \sum_{k \in K(\pi)} \alpha_k v_{2 k} = 0 \\\ & \vdots \\\ & \sum_{k \in K(\pi)} \alpha_k v_{n k} = 0 \\\ & \alpha_k \geq 0 & \forall\ k \\\ \end{array} \]
Because all of the summations must equal zero, no slack or surplus variables are required, so the constraint matrix for this program is \(n \times k\).
The \(n\) obviously has a linear growth rate, but I’m not sure how big to expect \(k\) to become.
\(k\) indexes the set of minimum 1-Trees, so I believe that it will be manageable.
This linear program can be solved using the built-in linprog function in the SciPy library.
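A sketch of that check with SciPy, assuming V is an n-by-k NumPy array whose (i, k) entry is \(v_{i k}\) for the minimum 1-Trees in \(K(\pi)\); the function name is mine, not a NetworkX or SciPy API.

```python
import numpy as np
from scipy.optimize import linprog

def direction_of_ascent_exists(V):
    """True if no nonnegative alpha with V @ alpha = 0 and sum(alpha) > 0
    exists, i.e. by the Minkowski-Farkas lemma an ascent direction remains."""
    n, k = V.shape
    result = linprog(
        c=-np.ones(k),           # maximize sum(alpha) by minimizing its negative
        A_eq=V,
        b_eq=np.zeros(n),
        bounds=[(0, None)] * k,
    )
    # alpha = 0 is always feasible; any nonzero solution can be scaled up,
    # so a nonzero alpha shows up as an unbounded program (status 3).
    return result.status != 3
```

Checking for unboundedness rather than a nonzero optimum sidesteps numerical tolerance questions around the all-zero solution.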
As an implementation note, I would probably start by checking the terminating condition on every iteration, but eventually we can find a number of iterations to execute before starting to check for the terminating condition, to save computational power.
One possible difficulty with the terminating condition is that we need to run the linear program with data from every minimum 1-Tree or 1-Arborescence, which means that we need to be able to generate all of the minimum 1-Trees. There does not seem to be an easy way to do this within NetworkX at the moment. Looking through the tree algorithms, they seem exclusively focused on finding one minimum branching of the required type and not all such branchings.
Now we have to find \(\epsilon\). Theorem 4 on page 1150 states that
Let \(k\) be any element of \(K(\pi, d)\), where \(d\) is a direction of ascent at \(\pi\). Then \(\epsilon(\pi, d) = \text{min}\ \{\epsilon\ |\ \text{for some pair } (e, e'),\ e' \text{ is a substitute for } e \text{ in } T^k \text{ and } e \text{ and } e' \text{ cross over at } \epsilon\}\)
The first step then is to determine if \(e\) and \(e'\) are substitutes. \(e'\) is a substitute for \(e\) if, for a 1-Tree \(T^k\), \((T^k - \{e\}) \cup \{e'\}\) is also a 1-Tree. The edges \(e = \{r, s\}\) and \(e' = \{i, j\}\) cross over at \(\epsilon\) if the pairs \((\overline{c}_{i j}, d_i + d_j)\) and \((\overline{c}_{r s}, d_r + d_s)\) are different but
\[ \overline{c}_{i j} + \epsilon(d_i + d_j) = \overline{c}_{r s} + \epsilon(d_r + d_s) \]
From that equation, we can derive a formula for \(\epsilon\).
\[ \begin{array}{r c l} \overline{c}_{i j} + \epsilon(d_i + d_j) &=& \overline{c}_{r s} + \epsilon(d_r + d_s) \\\ \epsilon(d_i + d_j) &=& \overline{c}_{r s} + \epsilon(d_r + d_s) - \overline{c}_{i j} \\\ \epsilon(d_i + d_j) - \epsilon(d_r + d_s) &=& \overline{c}_{r s} - \overline{c}_{i j} \\\ \epsilon\left((d_i + d_j) - (d_r + d_s)\right) &=& \overline{c}_{r s} - \overline{c}_{i j} \\\ \epsilon(d_i + d_j - d_r - d_s) &=& \overline{c}_{r s} - \overline{c}_{i j} \\\ \epsilon &=& \displaystyle \frac{\overline{c}_{r s} - \overline{c}_{i j}}{d_i + d_j - d_r - d_s} \end{array} \]
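This formula is simple enough to sketch directly in Python; crossover_epsilon is a hypothetical helper name, not an existing function.

```python
def crossover_epsilon(c_ij, c_rs, d_i, d_j, d_r, d_s):
    """Epsilon at which edges {i, j} and {r, s} cross over, from the
    derivation above; None when the pairs never cross over."""
    denominator = d_i + d_j - d_r - d_s
    if denominator == 0:
        return None  # equal slopes: the perturbed weights never meet
    return (c_rs - c_ij) / denominator

# Example: costs 3 and 7, slopes d_i + d_j = 3 and d_r + d_s = 0.
eps = crossover_epsilon(3.0, 7.0, 2, 1, 0, 0)
```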
So we can now find \(\epsilon\) for any pair of edges which are substitutes for each other, but we still need to be able to find substitutes in the 1-Tree.
We know that \(e'\) is a substitute for \(e\) if and only if \(e\) and \(e'\) are both incident to vertex 1, or \(e\) is in a cycle of \(T^k \cup \{e'\}\) that does not pass through vertex 1.
In a more formal sense, we are trying to find edges in the same fundamental cycle as \(e’\).
A fundamental cycle is created when any edge not in a spanning tree is added to that spanning tree.
Because the endpoints of this edge are connected by one unique path, adding it creates a unique cycle.
In order to find this cycle, we will take advantage of find_cycle within the NetworkX library.
Below is a pseudocode procedure I sketched out that uses Theorem 4 to find \(\epsilon(\pi, d)\). It is not well optimized, but it will find \(\epsilon(\pi, d)\).
# Input: An element k of K(pi, d), the vector pi and the vector d.
# Output: epsilon(pi, d) using Theorem 4 on page 1150.
for each edge e in the graph G:
    if e is in k:
        continue
    else:
        add e to k
        let v be the terminating end of e
        c = find_cycle(k, v)
        for each edge a in c not e:
            if a[cost] = e[cost] and d[i] + d[j] = d[r] + d[s]:
                continue
            epsilon = (a[cost] - e[cost]) / (d[i] + d[j] - d[r] - d[s])
            min_epsilon = min(min_epsilon, epsilon)
        remove e from k
return min_epsilon
The ascent method is also slow, but would fare better on a modern computer. When Held and Karp programmed it, they tested it on some small problems of up to 25 vertices; while the time per iteration was small, the number of iterations grew quickly. They do not comment on whether this is a better method than the column generation technique, but do point out that they did not determine if this method always converges to a maximum point of \(w(\pi)\).
After talking with my GSoC mentors, we believe that this is the best method we can implement for the Held-Karp relaxation as needed by the Asadpour algorithm. The ascent method is embedded within this method, so the in depth exploration of the previous method is required to implement this one. Most of the notation in this method is reused from the ascent method.
The branch and bound method utilizes the concept that a vertex can be out-of-kilter. A vertex \(i\) is out-of-kilter high if
\[ \forall\ k \in K(\pi),\ v_{i k} \geq 1 \]
Similarly, vertex \(i\) is out-of-kilter low if
\[ \forall\ k \in K(\pi),\ v_{i k} = -1 \]
Remember that \(v_{i k}\) is the degree of the vertex minus 2. We know that all the vertices have a degree of at least one, otherwise the 1-Tree \(T^k\) would not be connected. An out-of-kilter high vertex has a degree of 3 or higher in every minimum 1-Tree and an out-of-kilter low vertex has a degree of only one in all of the minimum 1-Trees. Our goal is a minimum 1-Tree where every vertex has a degree of 2.
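As a quick sketch of these definitions, assume v_trees is a list with one dict per minimum 1-Tree, mapping each vertex \(i\) to \(v_{i k}\); kilter_status is an invented helper name.

```python
def kilter_status(v_trees, i):
    """Classify vertex i as 'high', 'low', or 'in kilter' across all
    minimum 1-Trees, following the definitions above."""
    if all(v_k[i] >= 1 for v_k in v_trees):
        return "high"    # degree >= 3 in every minimum 1-Tree
    if all(v_k[i] == -1 for v_k in v_trees):
        return "low"     # degree exactly 1 in every minimum 1-Tree
    return "in kilter"
```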
If we know that a vertex is out-of-kilter in either direction, we know the direction of ascent and that direction is a unit vector. Let \(u_i\) be an \(n\)-dimensional unit vector with 1 in the \(i\)-th coordinate. \(u_i\) is the direction of ascent if vertex \(i\) is out-of-kilter high and \(-u_i\) is the direction of ascent if vertex \(i\) is out-of-kilter low.
Corollaries 3 and 4 from page 1151 also show that finding \(\epsilon(\pi, d)\) is simpler when a vertex is out-of-kilter.
Corollary 3. Assume vertex \(i\) is out-of-kilter low and let \(k\) be an element of \(K(\pi, -u_i)\). Then \(\epsilon(\pi, -u_i) = \text{min} (\overline{c}_{i j} - \overline{c}_{r s})\) such that \(\{i, j\}\) is a substitute for \(\{r, s\}\) in \(T^k\) and \(i \not\in \{r, s\}\).
Corollary 4. Assume vertex \(r\) is out-of-kilter high. Then \(\epsilon(\pi, u_r) = \text{min} (\overline{c}_{i j} - \overline{c}_{r s})\) such that \(\{i, j\}\) is a substitute for \(\{r, s\}\) in \(T^k\) and \(r \not\in \{i, j\}\).
These corollaries can be implemented with a modified version of the pseudocode listing above for finding \(\epsilon\) in the ascent method section.
Once there are no more out-of-kilter vertices, the direction of ascent is no longer a unit vector and fractional weights are introduced. This causes a major slowdown in the convergence of the ascent method to the optimal solution, so it should be avoided if possible.
Before we can discuss implementation details, there are still some more preliminaries to review. Let \(X\) and \(Y\) be disjoint sets of edges in the graph. Then let \(\mathsf{T}(X, Y)\) denote the set of 1-Trees which include all edges in \(X\) but none of the edges in \(Y\). Finally, define \(w_{X, Y}(\pi)\) and \(K_{X, Y}(\pi)\) as follows.
\[ w_{X, Y}(\pi) = \text{min}_{k \in \mathsf{T}(X, Y)} (c_k + \sum_{i=1}^{i=n} \pi_i v_{i k}) \\ K_{X, Y}(\pi) = \{k\ |\ c_k + \sum \pi_i v_{i k} = w_{X, Y}(\pi)\} \]
From these functions, a revised definition of out-of-kilter high and low arises, allowing a vertex to be out-of-kilter relative to \(X\) and \(Y\).
During the branch and bound method, the branches are tracked in a list where each entry has the following format.
\[[X, Y, \pi, w_{X, Y}(\pi)]\]
Where \(X\) and \(Y\) are the disjoint sets discussed earlier, \(\pi\) is the vector we are using to perturb the edge weights and \(w_{X, Y}(\pi)\) is the bound of the entry.
At each iteration of the method, we consider the list entry with the minimum bound and try to find an out-of-kilter vertex. If we find one, we apply one iteration of the ascent method using the simplified unit vector as the direction of ascent. Here we can take advantage of integral weights if they exist. Perhaps the documentation for the Asadpour implementation in NetworkX should state that integral edge weights will perform better but that claim will have to be supported by our testing.
If there is not an out-of-kilter vertex, we still need to find the direction of ascent in order to determine if we are at the maximum of \(w(\pi)\). If the direction of ascent exists, we branch. If there is no direction of ascent, we search for a tour among \(K_{X, Y}(\pi)\) and if none is found, we also branch.
The branching process is as follows. From entry \([X, Y, \pi, w_{X, Y}(\pi)]\) an edge \(e \not\in X \cup Y\) is chosen (Held and Karp do not give any criteria to branch on, so I believe the choice can be arbitrary) and the parent entry is replaced with two other entries of the forms
\[ [X \cup \{e\}, Y^*, \pi, w_{X \cup \{e\}, Y^*}(\pi)] \quad \text{and} \quad [X^*, Y \cup \{e\}, \pi, w_{X^*, Y \cup \{e\}}(\pi)] \]
An example of the branch and bound method is given on pages 1153 through 1156 in the Held and Karp paper.
In order to implement this method, we need to be able to determine whether a vertex is out-of-kilter and to search \(K_{X, Y}(\pi)\) for a tour, in addition to modifying some of the details of the ascent method.
The Held and Karp paper states that in order to find an out-of-kilter vertex, all we need to do is test the unit vectors: if, for an arbitrary member \(k\) of \(K(\pi, u_i)\), \(v_{i k} \geq 1\), then vertex \(i\) is out-of-kilter high, and the appropriate inverse holds for out-of-kilter low. From this we can find out-of-kilter vertices by sequentially checking the \(u_i\)’s in an \(O(n^2)\) procedure.
Searching \(K_{X, Y}(\pi)\) for a tour would be easy if we could enumerate that set of minimum 1-Trees. While I know how to find one of the minimum 1-Trees, or a member of \(K(\pi)\), I am not sure how to find elements in \(K(\pi, d)\) or even all of the members of \(K(\pi)\). Using the properties in the Held and Karp paper, I do know how to refine \(K(\pi)\) into \(K(\pi, d)\) and \(K(\pi)\) into \(K_{X, Y}(\pi)\). This will have to be a blog post for another time.
The most promising research paper I have been able to find on this problem is this 2005 paper by Sörensen and Janssens titled An Algorithm to Generate all Spanning Trees of a Graph in Order of Increasing Cost. From here we generate spanning trees or arborescences until the cost moves upward at which point we have found all elements of \(K(\pi)\).
Held and Karp did not program this method. We have some reason to believe that its performance will be the best: it is designed as an improvement over the ascent method, which was tested (somewhat) up to \(n = 25\), itself better than the column generation technique, which was only consistently able to solve problems up to \(n = 12\).
A. Asadpour, M. X. Goemans, A. Madry, S. Oveis Gharan, and A. Saberi, An O(log n / log log n)-approximation algorithm for the asymmetric traveling salesman problem, Operations Research, 65 (2017), pp. 1043-1061, https://homes.cs.washington.edu/~shayan/atsp.pdf.
M. Held, R. M. Karp, The traveling-salesman problem and minimum spanning trees, Operations Research, 1970-11-01, Vol. 18 (6), pp. 1138-1162, https://www.jstor.org/stable/169411
To get acquainted with the basics of plotting with matplotlib, let’s try plotting how much distance an object under free fall travels with respect to time, and also its velocity at each time step.
If you have ever studied physics, you can tell that this is a classic case of Newton’s equations of motion, where
$$ v = a \times t $$
$$ S = 0.5 \times a \times t^{2} $$
We will assume an initial velocity of zero.
import numpy as np
time = np.arange(0.0, 10.0, 0.2)
velocity = np.zeros_like(time, dtype=float)
distance = np.zeros_like(time, dtype=float)
We know that under free-fall, all objects move with the constant acceleration of $$g = 9.8~m/s^2$$
g = 9.8 # m/s^2
velocity = g * time
distance = 0.5 * g * np.power(time, 2)
The above code gives us two numpy arrays populated with the distance and velocity data points.
When using matplotlib we have two approaches:
- the pyplot interface / functional interface,
- the object-oriented (OO) interface.
On the surface, matplotlib is made to imitate MATLAB’s method of generating plots, which is called pyplot. All the pyplot commands make changes to and modify the same figure. This is a state-based interface, where the state (i.e., the figure) is preserved through various function calls (i.e., the methods that modify the figure). This interface allows us to quickly and easily generate plots, and its state-based nature allows us to add elements and/or modify the plot as we need, when we need it.
This interface shares a lot of similarities in syntax and methodology with MATLAB. For example, if we want to plot a blue line where each data point is marked with a circle, we can use the string 'bo-'.
import matplotlib.pyplot as plt
plt.figure(figsize=(9, 7), dpi=100)
plt.plot(time, distance, "bo-")
plt.xlabel("Time")
plt.ylabel("Distance")
plt.legend(["Distance"])
plt.grid(True)
The plot shows how much distance was covered by the free-falling object with each passing second.
plt.figure(figsize=(9, 7), dpi=100)
plt.plot(time, velocity, "go-")
plt.xlabel("Time")
plt.ylabel("Velocity")
plt.legend(["Velocity"])
plt.grid(True)
The plot below shows us how the velocity is increasing.
Let’s try to see what kind of plot we get when we plot both distance and velocity in the same plot.
plt.figure(figsize=(9, 7), dpi=100)
plt.plot(time, velocity, "g-")
plt.plot(time, distance, "b-")
plt.ylabel("Distance and Velocity")
plt.xlabel("Time")
plt.legend(["Distance", "Velocity"])
plt.grid(True)
Here, we run into some obvious and serious issues. Since both quantities share the same axis but have very different magnitudes, the graph looks disproportionate. What we need to do is separate the two quantities onto two different axes. This is where the second approach to making plots comes into play.
Also, the pyplot approach doesn’t really scale when we are required to make multiple plots, or when we have to make intricate plots that require a lot of customisation. However, internally matplotlib has an Object-Oriented interface that can be accessed just as easily, and which allows us to reuse objects.
When using the OO interface, it helps to know how matplotlib structures its plots. The final plot that we see as the output is a Figure object. The Figure object is the top-level container for all the other elements that make up the graphic image. These “other” elements are called Artists. The Figure object can be thought of as a canvas upon which different artists act to create the final graphic image. A Figure can contain any number of various artists.
Things to note about the anatomy of a figure are:
- Artists are basically all the elements that are rendered onto the figure. This can include text, patches (like arrows and shapes), etc. Thus, the Figure, Axes and Axis objects described here are also Artists.
- The Axes object holds the actual data that we are going to display. It also contains the X- and Y-axis labels and a title. Each Axes object will contain two or more Axis objects.
- The Axis objects set the data limits. They also contain the ticks and tick labels; ticks are the marks that we see on an axis.

Understanding this hierarchy of Figure, Artist, Axes and Axis is immensely important, because it plays a crucial role in how we make an animation in matplotlib.
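This hierarchy can be inspected directly; here is a small sketch (using the non-interactive Agg backend so it runs without a display):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend, so no window is needed
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
# the Figure is the top-level container; Axes and Axis objects are Artists
print(fig.axes)                  # the Axes contained in this Figure
print(ax.xaxis, ax.yaxis)        # each Axes holds Axis objects
print(isinstance(ax, matplotlib.artist.Artist))        # True
print(isinstance(ax.xaxis, matplotlib.artist.Artist))  # True
```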
Now that we understand how plots are generated, we can easily solve the problem we faced earlier. To make the velocity and distance plot make more sense, we need to plot each data item against a separate axis, with a different scale. Thus, we will need one parent Figure object and two Axes objects.
fig, ax1 = plt.subplots()
ax1.set_ylabel("distance (m)")
ax1.set_xlabel("time")
ax1.plot(time, distance, "blue")
ax2 = ax1.twinx() # create another y-axis sharing a common x-axis
ax2.set_ylabel("velocity (m/s)")
ax2.set_xlabel("time")
ax2.plot(time, velocity, "green")
fig.set_size_inches(7, 5)
fig.set_dpi(100)
plt.show()
This plot is still not very intuitive. We should add a grid and a legend. Perhaps we can also change the color of the axis labels and tick labels to the color of the lines.
But something very weird happens when we try to turn on the grid, which you can see here at Cell 8: the grid lines don’t align with the tick labels on both of the Y-axes. The tick values matplotlib calculates on its own are not suitable to our needs, so we will have to calculate them ourselves.
fig, ax1 = plt.subplots()
ax1.set_ylabel("distance (m)", color="blue")
ax1.set_xlabel("time")
ax1.plot(time, distance, "blue")
ax1.set_yticks(np.linspace(*ax1.get_ybound(), 10))
ax1.tick_params(axis="y", labelcolor="blue")
ax1.xaxis.grid()
ax1.yaxis.grid()
ax2 = ax1.twinx() # create another y-axis sharing a common x-axis
ax2.set_ylabel("velocity (m/s)", color="green")
ax2.set_xlabel("time")
ax2.tick_params(axis="y", labelcolor="green")
ax2.plot(time, velocity, "green")
ax2.set_yticks(np.linspace(*ax2.get_ybound(), 10))
fig.set_size_inches(7, 5)
fig.set_dpi(100)
fig.legend(["Distance", "Velocity"])
plt.show()
The command ax1.set_yticks(np.linspace(*ax1.get_ybound(), 10)) calculates the tick values for us. Let’s break this down to see what is happening:
- The np.linspace command creates a set of n evenly spaced values between a specified lower and upper limit.
- ax1.get_ybound() returns the minimum and maximum limits for that particular axis (which in our case is the Y-axis).
- * acts as an unpacking operator when prepended to a list or tuple. Thus, it will convert a list [1, 2, 3, 4] into separate values 1, 2, 3, 4. This is an immensely powerful feature.
- We use the np.linspace method to divide the interval between the minimum and maximum tick values into 10 equal parts.
- These values are then set as the tick values via the set_yticks method.

The same process is repeated for the second axis.
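The unpacking trick is easy to see in isolation (a minimal sketch — the bounds tuple stands in for what ax1.get_ybound() would return):

```python
import numpy as np

bounds = (0.0, 90.0)              # stand-in for ax1.get_ybound()
ticks = np.linspace(*bounds, 10)  # same as np.linspace(0.0, 90.0, 10)
print(ticks)                      # 10 evenly spaced ticks from 0 to 90
```

Since the tick values now span exactly the axis bounds on both axes, the grid lines drawn at those ticks line up.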
In this part, we covered some basics of matplotlib plotting, including the two basic approaches to making plots. In the next part, we will cover how to make simple animations. If you like the content of this blog post, or you have any suggestions or comments, drop me an email or a tweet, or ping me on IRC. Nowadays, you will find me hanging around #matplotlib on Freenode. Thanks!
This post is part of a series I’m doing on my personal blog. The series is basically going to be about how to animate stuff using Python’s matplotlib library. matplotlib has excellent documentation, where you can find details on each of the methods I have used in this blog post. Also, I will be publishing each part of this series in the form of a Jupyter notebook, which can be found here.
The series will have three posts, which will cover:
- the basics of plotting with matplotlib,
- making simple animations with FuncAnimation.
I would like to say a few words about the methodology of this series:
A while back, I came across this cool repository to create emoji-art from images. I wanted to use it to transform my mundane Facebook profile picture to something more snazzy. The only trouble? It was written in Rust.
So instead of going through the process of installing Rust, I decided to take the easy route and spin up some code to do the same in Python using matplotlib.
Because that’s what anyone sane would do, right?
In this post, I’ll try to explain my process as we attempt to recreate mosaics similar to the one below. I’ve aimed this post at people who’ve worked with some sort of image data before; but really, anyone can follow along.
import numpy as np
from tqdm import tqdm
from scipy import spatial
from matplotlib import cm
import matplotlib.pyplot as plt
import matplotlib
import scipy
print(f"Matplotlib:{matplotlib.__version__}")
print(f"Numpy:{np.__version__}")
print(f"Scipy: {scipy.__version__}")
## Matplotlib: '3.2.1'
## Numpy: '1.18.1'
## Scipy: '1.4.1'
Let’s read in our image:
img = plt.imread("naomi_32.png")
dim = img.shape[0]  ## we'll need this later
plt.imshow(img)
Note: The image displayed above is 100x100, but we’ll use a 32x32 version from here on since that will suffice for all our needs.
So really, what is an image? To numpy and matplotlib (and for almost every image processing library out there), it is, essentially, just a matrix (say A), where every individual pixel (p) is an element of A. If it’s a grayscale image, every pixel (p) is just a single number (or a scalar) - in the range [0,1] if float, or [0,255] if integer. If it’s not grayscale - like in our case - every pixel is a vector of either dimension 3 - Red (R), Green (G), and Blue (B), or dimension 4 - RGBA (A stands for Alpha, which is basically transparency).
If anything is unclear so far, I’d strongly suggest going through a post like this or this. Knowing that an image can be represented as a matrix (or a numpy array) greatly helps us, as almost every transformation of the image can be represented in terms of matrix maths.
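For instance, common image operations reduce to plain array arithmetic. A toy sketch on a made-up 2x2 grayscale “image” (the values are arbitrary):

```python
import numpy as np

gray = np.array([[0.2, 0.8],
                 [0.5, 1.0]])              # tiny grayscale image, floats in [0, 1]
brighter = np.clip(gray * 1.5, 0.0, 1.0)  # brightening is a scalar multiply
flipped = gray[::-1, :]                   # a vertical flip is just row reversal
print(brighter)
print(flipped)
```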
To prove my point, let’s look at img a little.
## Let's check the type of img
print(type(img))
# <class 'numpy.ndarray'>
## The shape of the array img
print(img.shape)
# (32, 32, 4)
## The value of the first pixel of img
print(img[0][0])
# [128 144 117 255]
## Let's view the color of the first pixel
fig, ax = plt.subplots()
color = img[0][0] / 255.0 ##RGBA only accepts values in the 0-1 range
ax.fill([0, 1, 1, 0], [0, 0, 1, 1], color=color)
That should give you a square filled with the color of the first pixel of img.
We want to go from a plain image to an image full of emojis - or in other words, an image of images. Essentially, we’re going to replace all pixels with emojis. However, to ensure that our new emoji-image looks like the original image and not just random smiley faces, the trick is to make sure that every pixel is replaced by an emoji whose color is similar to that pixel’s. That’s what gives the result the look of a mosaic.
‘Similar’ really just means that the mean (median is also worth trying) color of the emoji should be close to the pixel it replaces.
So how do you find the mean color of an entire image? Easy. We just take all the RGBA arrays and average the Rs together, and then the Gs together, and then the Bs together, and then the As together (the As, by the way, are just all 1 in our case, so the mean is also going to be 1). Here’s that idea expressed formally:
\[ (r, g, b)_{\mu}=\left(\frac{\left(r_{1}+r_{2}+\ldots+r_{N}\right)}{N}, \frac{\left(g_{1}+g_{2}+\ldots+g_{N}\right)}{N}, \frac{\left(b_{1}+b_{2}+\ldots+b_{N}\right)}{N}\right) \]
The resulting color would be a single array of RGBA values: \[ [r_{\mu}, g_{\mu}, b_{\mu}, 1] \]
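Here is that averaging on a toy 2x2 RGBA “image” (a sketch; in the real code the same thing is done per emoji):

```python
import numpy as np

# 2x2 RGBA image: red, green, blue, and white pixels (alpha = 1 everywhere)
tiny = np.array([
    [[1.0, 0.0, 0.0, 1.0], [0.0, 1.0, 0.0, 1.0]],
    [[0.0, 0.0, 1.0, 1.0], [1.0, 1.0, 1.0, 1.0]],
])
mean_color = tiny.mean(axis=(0, 1))  # average each channel separately
print(mean_color)  # [0.5 0.5 0.5 1. ]
```

`axis=(0, 1)` averages over the two spatial dimensions at once, leaving one value per channel.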
So now our steps become somewhat like this:
Part I - Get emoji matches
Part II - Reshape emojis to image
That’s pretty much it!
I took care of this for you beforehand with a bit of BeautifulSoup and requests magic. Our emoji collection is a numpy array of shape (1506, 16, 16, 4) - that’s 1506 emojis, each being a 16x16 array of RGBA values. You can find it here.
emoji_array = np.load("emojis_16.npy")
print(emoji_array.shape)
## 1506, 16, 16, 4
##plt.imshow(emoji_array[0]) ##to view the first emoji
We’ve seen the formula above; here’s the numpy code for it. We’re gonna iterate over all the 1506 emojis and create an array emoji_mean_array out of them.
emoji_mean_array = np.array(
[ar.mean(axis=(0, 1)) for ar in emoji_array]
) ##`np.median(ar, axis=(0,1))` for median instead of mean
The easiest way to do that would be to use SciPy’s KDTree to create a tree object of all the average RGBA values we calculated in #2. This enables us to perform a fast lookup for every pixel using the query method. Here’s how the code for that looks -
tree = spatial.KDTree(emoji_mean_array)
indices = []
flattened_img = img.reshape(-1, img.shape[-1]) ##shape = (1024, 4)
for pixel in tqdm(flattened_img, desc="Matching emojis"):
_, index = tree.query(pixel) ##returns distance and index of closest match.
indices.append(index)
emoji_matches = emoji_array[indices] ##our emoji_matches
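In isolation, the query lookup works like this (a toy three-color “palette” standing in for the real emoji means; assumes SciPy is available):

```python
import numpy as np
from scipy import spatial

palette = np.array([
    [0.0, 0.0, 0.0, 1.0],  # black
    [1.0, 1.0, 1.0, 1.0],  # white
    [1.0, 0.0, 0.0, 1.0],  # red
])
tree = spatial.KDTree(palette)
dist, index = tree.query([0.9, 0.1, 0.1, 1.0])  # a dark-ish red pixel
print(index)  # 2 - the nearest palette entry is red
```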
The final step is to reshape the array a little more to enable us to plot it using the imshow function. As you can see above, to loop over the pixels we had to flatten the image out into flattened_img. Now we have to sort of un-flatten it, to make sure it’s back in the form of an image. Fortunately, numpy’s reshape function makes this easy.
resized_ar = emoji_matches.reshape(
(dim, dim, 16, 16, 4)
) ##dim is what we got earlier when we read in the image
The last bit is the trickiest. The problem with the output we’ve got so far is that it’s too nested. Or, in simpler terms, what we have is an image where every individual pixel is itself an image. That’s all fine, but it’s not valid input for imshow, and if we try to pass it in, it tells us exactly that.
TypeError: Invalid shape (32, 32, 16, 16, 4) for image data
To grasp our problem intuitively, think about it this way. What we have right now are lots of images like these:
What we want is to merge them all together. Like so:
To think about it slightly more technically, what we have right now is a five-dimensional array. What we need is to reshape it in such a way that it’s - at maximum - three-dimensional. However, it’s not as easy as a simple np.reshape (I’d suggest you go ahead and try that anyway).
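To see why a plain reshape isn’t enough, here’s a tiny 2x2 grid of 2x2 single-channel tiles. The tile rows have to be interleaved, which a transpose-then-reshape (equivalent in effect to the np.block approach used in this post) achieves:

```python
import numpy as np

tiles = np.arange(16).reshape(2, 2, 2, 2)  # grid_row, grid_col, tile_row, tile_col
naive = tiles.reshape(4, 4)                # just concatenates tiles in order - wrong
merged = tiles.transpose(0, 2, 1, 3).reshape(4, 4)  # interleaves tile rows - right
print(naive[0])   # [0 1 2 3] - the whole first tile, flattened
print(merged[0])  # [0 1 4 5] - top rows of the first two tiles, side by side
```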
Don’t worry though, we have Stack Overflow to the rescue! This excellent answer does exactly that. You don’t have to go through it, I have copied the relevant code in here.
def np_block_2D(chops):
"""Converts list of chopped images to one single image"""
return np.block([[[x] for x in row] for row in chops])
final_img = np_block_2D(resized_ar)
print(final_img.shape)
## (512, 512, 4)
The shape looks correct enough. Let’s try to plot it.
plt.imshow(final_img)
Et Voilà
Of course, the result looks a little meh but that’s because we only used 32x32 emojis. Here’s what the same code would do with 10000 emojis (100x100).
Better?
Now, let’s try and create nine of these emoji-images and grid them together.
def canvas(gray_scale_img):
"""
Plot a 3x3 matrix of the images using different colormaps
param gray_scale_img: a square gray_scale_image
"""
fig, axes = plt.subplots(nrows=3, ncols=3, figsize=(13, 8))
axes = axes.flatten()
cmaps = [
"BuPu_r",
"bone",
"CMRmap",
"magma",
"afmhot",
"ocean",
"inferno",
"PuRd_r",
"gist_gray",
]
for cmap, ax in zip(cmaps, axes):
cmapper = cm.get_cmap(cmap)
rgba_image = cmapper(gray_scale_img)
single_plot(rgba_image, ax)
# ax.imshow(rgba_image) ##try this if you just want to plot the plain image in different color spaces, comment the single_plot call above
ax.set_axis_off()
plt.subplots_adjust(hspace=0.0, wspace=-0.2)
return fig, axes
The code does mostly the same stuff as before. To get the different colours, I used a simple hack. I first converted the image to grayscale and then used 9 different colormaps on it. Then I used the RGB values returned by the colormap to get the absolute values for our new input image. After that, the only part left is to just feed the new input image through the pipeline we’ve discussed so far and that gives us our emoji-image.
Here’s what that looks like:
Pretty
Some final thoughts to wrap this up.
I’m not sure if my way to get different colours using different cmaps is what people usually do. I’m almost certain there’s a better way and if you know one, please submit a PR to the repo (link below).
Iterating over every pixel is not really the best idea. We got away with it since it’s just 1024 (32x32) pixels, but for images with higher resolution, we’d have to either iterate over grids of pixels at once (say a 3x3 or 2x2 window) or resize the image itself to a more workable shape. I prefer the latter, since that way we can also resize it to a square shape in the same call, which has the additional advantage of fitting nicely in our 3x3 mosaic. I’ll leave the readers to work that out themselves using numpy (and, no, please don’t use cv2.resize).
The KDTree was not part of my initial code. Initially, I’d just looped over every emoji for every pixel and calculated the Euclidean distance (using np.linalg.norm(a-b)). As you can probably imagine, the nested loop slowed the code down tremendously - even a 32x32 emoji-image took around 10 minutes to run - whereas the same code now takes ~19 seconds. Guess that’s the power of vectorization for you all.
It’s worth messing around with the median instead of the mean to get the RGBA values of the emojis. Most emojis are circular in shape, and hence there’s a lot of empty space outside the circular region, which waters down the average color and, in turn, the end result. Using the median might sort out this problem for some images which aren’t very rich.
While I’ve tried to go in a linear manner with (what I hope was) a good mix of explanation and code, I’d strongly suggest looking at the full code in the repository here in case you feel like I sprung anything on you.
I hope you enjoyed this post and learned something from it. If you have any feedback, criticism, questions, please feel free to DM me on Twitter or email me (preferably the former since I’m almost always on there). Thank you, and take care!
Now that my proposal was accepted by NetworkX for the 2021 Google Summer of Code (GSoC), I can get more into the technical details of how I plan to implement the Asadpour algorithm within NetworkX.
In this post I am going to outline my thought process for the control scheme of my implementation and create function stubs according to my GSoC proposal.
Most of the work for this project will happen in networkx.algorithms.approximation.traveling_salesman.py, where I will finish the last algorithm for the Traveling Salesman Problem so it can be merged into the project. The main function in traveling_salesman.py is
def traveling_salesman_problem(G, weight="weight", nodes=None, cycle=True, method=None):
"""
...
Parameters
----------
G : NetworkX graph
Undirected possibly weighted graph
nodes : collection of nodes (default=G.nodes)
collection (list, set, etc.) of nodes to visit
weight : string, optional (default="weight")
Edge data key corresponding to the edge weight.
If any edge does not have this attribute the weight is set to 1.
cycle : bool (default: True)
Indicates whether a cycle should be returned, or a path.
Note: the cycle is the approximate minimal cycle.
The path simply removes the biggest edge in that cycle.
method : function (default: None)
A function that returns a cycle on all nodes and approximates
the solution to the traveling salesman problem on a complete
graph. The returned cycle is then used to find a corresponding
solution on `G`. `method` should be callable; take inputs
`G`, and `weight`; and return a list of nodes along the cycle.
Provided options include :func:`christofides`, :func:`greedy_tsp`,
:func:`simulated_annealing_tsp` and :func:`threshold_accepting_tsp`.
If `method is None`: use :func:`christofides` for undirected `G` and
:func:`threshold_accepting_tsp` for directed `G`.
To specify parameters for these provided functions, construct lambda
functions that state the specific value. `method` must have 2 inputs.
(See examples).
...
"""
All user calls to find an approximation to the traveling salesman problem will go through this function.
My implementation of the Asadpour algorithm will also need to be compatible with it.
traveling_salesman_problem will handle creating a new, complete graph using the weight of the shortest path between nodes \(u\) and \(v\) as the weight of that arc, so we know that by the time the graph is passed to the Asadpour algorithm it is a complete digraph which satisfies the triangle inequality.
The main function also handles the nodes and cycle parameters by copying only the necessary nodes into the complete digraph before calling the requested method, and afterwards searching for and removing the largest arc within the returned cycle.
Thus, the parent function for the Asadpour algorithm only needs to deal with the graph itself and the weights or costs of the arcs in the graph.
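As a quick illustration of this entry point (a sketch assuming a NetworkX version that ships the approximation module; the graph and its weights are made up):

```python
import networkx as nx
from networkx.algorithms import approximation as approx

# a small complete graph with arbitrary weights
G = nx.complete_graph(5)
for u, v in G.edges:
    G[u][v]["weight"] = abs(u - v)

# for an undirected graph the default method is christofides
tour = approx.traveling_salesman_problem(G, weight="weight", cycle=True)
print(tour)  # a closed tour visiting every node
```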
My controlling function will have the following signature and I have included a draft of the docstring as well.
def asadpour_tsp(G, weight="weight"):
"""
Returns an O( log n / log log n ) approximate solution to the traveling
salesman problem.
This approximate solution is one of the best known approximations for
the asymmetric traveling salesman problem developed by Asadpour et al,
[1]_. The algorithm first solves the Held-Karp relaxation to find a
lower bound for the weight of the cycle. Next, it constructs an
exponential distribution of undirected spanning trees where the
probability of an edge being in the tree corresponds to the weight of
that edge, using a maximum entropy rounding scheme. Next we sample that
distribution $2 \\log n$ times and save the minimum sampled tree once
the direction of the arcs is added back to the edges. Finally,
we augment and then short-circuit that graph to find the approximate tour
for the salesman.
Parameters
----------
G : nx.DiGraph
The graph should be a complete weighted directed graph.
The distance between all pairs of nodes should be included.
weight : string, optional (default="weight")
Edge data key corresponding to the edge weight.
If any edge does not have this attribute the weight is set to 1.
Returns
-------
cycle : list of nodes
Returns the cycle (list of nodes) that a salesman can follow to minimize
the total weight of the trip.
Raises
------
NetworkXError
If `G` is not complete, the algorithm raises an exception.
References
----------
.. [1] A. Asadpour, M. X. Goemans, A. Madry, S. O. Gharan, and A. Saberi,
An o(log n/log log n)-approximation algorithm for the asymmetric
traveling salesman problem, Operations research, 65 (2017),
pp. 1043–1061
"""
pass
Following my GSoC proposal, the next function is _held_karp, which will solve the Held-Karp relaxation on the complete digraph using the ellipsoid method (see my last two posts here and here for my thoughts on why and how to accomplish this).
Solving the Held-Karp relaxation is the first step in the algorithm.
Recall that the Held-Karp relaxation is defined as the following linear program:
\[ \begin{array}{c l l} \text{min} & \sum_{a} c(a)x_a \\\ \text{s.t.} & x(\delta^+(U)) \geqslant 1 & \forall\ U \subset V \text{ and } U \not= \emptyset \\\ & x(\delta^+(v)) = x(\delta^-(v)) = 1 & \forall\ v \in V \\\ & x_a \geqslant 0 & \forall\ a \end{array} \]
and that it is a semi-infinite program so it is too large to be solved in conventional forms. The algorithm uses the solution to the Held-Karp relaxation to create a vector \(z^*\) which is a symmetrized and slightly scaled down version of the true Held-Karp solution \(x^*\). \(z^*\) is defined as
\[ z^*_{{u, v}} = \frac{n - 1}{n} \left(x^*_{uv} + x^*_{vu}\right) \]
and since this is what the algorithm uses to build the rest of the approximation, it should be one of the return values from _held_karp.
I will also return the value of the cost of \(x^*\), which is denoted as \(c(x^*)\) or \(OPT_{HK}\) in the Asadpour paper [1].
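The symmetrization itself is one line of NumPy. A sketch with a hypothetical Held-Karp solution x_star (here simply the directed cycle 0 → 1 → 2 → 3 → 0, so the degree constraints hold exactly):

```python
import numpy as np

n = 4
x_star = np.roll(np.eye(n), 1, axis=1)   # x_star[u, v] = 1 iff v = (u + 1) mod n
z_star = (n - 1) / n * (x_star + x_star.T)

print(np.allclose(z_star, z_star.T))     # True: z* is symmetric by construction
print(z_star.sum(axis=1))                # each row sums to 2(n - 1)/n = 1.5
```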
Additionally, the separation oracle will be defined as an inner function within _held_karp.
At the present moment I am not sure what the exact parameters for the separation oracle, sep_oracle, will be, but it should take the point the algorithm wishes to test and will need access to the graph the algorithm is relaxing.
In particular, I’m not sure yet how I will represent the hyperplane which is returned by the separation oracle.
def _held_karp(G, weight="weight"):
"""
Solves the Held-Karp relaxation of the input complete digraph and scales
the output solution for use in the Asadpour [1]_ ATSP algorithm.
The Held-Karp relaxation defines the lower bound for solutions to the
ATSP, although it does return a fractional solution. This is used in the
Asadpour algorithm as an initial solution, which is later rounded to an
integral tree within the spanning tree polytope. This function solves
the relaxation with the ellipsoid method for linear programs.
Parameters
----------
G : nx.DiGraph
The graph should be a complete weighted directed graph.
The distance between all pairs of nodes should be included.
weight : string, optional (default="weight")
Edge data key corresponding to the edge weight.
If any edge does not have this attribute the weight is set to 1.
Returns
-------
OPT : float
The cost for the optimal solution to the Held-Karp relaxation
z_star : numpy array
A symmetrized and scaled version of the optimal solution to the
Held-Karp relaxation for use in the Asadpour algorithm
References
----------
.. [1] A. Asadpour, M. X. Goemans, A. Madry, S. O. Gharan, and A. Saberi,
An o(log n/log log n)-approximation algorithm for the asymmetric
traveling salesman problem, Operations research, 65 (2017),
pp. 1043–1061
"""
def sep_oracle(point):
"""
The separation oracle used in the ellipsoid algorithm to solve the
Held-Karp relaxation.
This 'black-box' takes a point and checks to see if it violates any
of the Held-Karp constraints, which are defined as
- The out-degree of all non-empty subsets of $V$ is at least one.
- The in-degree and out-degree of each vertex in $V$ is equal to
one. Note that if a vertex has more than one incoming or
outgoing arc, the values of each could be less than one so long
as they sum to one.
- The current value for each arc is greater than zero.
Parameters
----------
point : numpy array
The point in n dimensional space we wish to test to see if it
violates any of the Held-Karp constraints.
Returns
-------
numpy array
The hyperplane which was most violated by `point`, i.e., the
hyperplane defining the polytope of spanning trees which `point`
was farthest from; None if no constraints are violated.
"""
pass
pass
Next the algorithm uses the symmetrized and scaled version of the Held-Karp solution to construct an exponential distribution of undirected spanning trees which preserves the marginal probabilities.
def _spanning_tree_distribution(z_star):
"""
Solves the Maximum Entropy Convex Program in the Asadpour algorithm [1]_
using the approach in section 7 to build an exponential distribution of
undirected spanning trees.
This algorithm ensures that the probability of any edge in a spanning
tree is proportional to the sum of the probabilities of the trees
containing that edge over the sum of the probabilities of all spanning
trees of the graph.
Parameters
----------
z_star : numpy array
The output of `_held_karp()`, a scaled version of the Held-Karp
solution.
Returns
-------
gamma : numpy array
The probability distribution which approximately preserves the marginal
probabilities of `z_star`.
"""
pass
Now that the algorithm has the distribution of spanning trees, we need to sample them. Each sampled tree is a \(\lambda\)-random tree and can be sampled using algorithm A8 in [2].
def _sample_spanning_tree(G, gamma):
"""
Sample one spanning tree from the distribution defined by `gamma`,
roughly using algorithm A8 in [1]_ .
We 'shuffle' the edges in the graph, and then probabilistically
determine whether to add the edge conditioned on all of the previous
edges which were added to the tree. Probabilities are calculated using
Kirchhoff's Matrix Tree Theorem and a weighted Laplacian matrix.
Parameters
----------
G : nx.Graph
An undirected version of the original graph.
gamma : numpy array
The probabilities associated with each of the edges in the undirected
graph `G`.
Returns
-------
nx.Graph
A spanning tree using the distribution defined by `gamma`.
References
----------
.. [1] V. Kulkarni, Generating random combinatorial objects, Journal of
algorithms, 11 (1990), pp. 185–207
"""
pass
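The Kirchhoff computation mentioned in the docstring can be illustrated on its own. A sketch counting the spanning trees of the complete graph \(K_4\) via a cofactor of its Laplacian (Cayley’s formula predicts \(4^{4-2} = 16\)):

```python
import numpy as np

# Laplacian L = D - A for the complete graph K_4
A = np.ones((4, 4)) - np.eye(4)
L = np.diag(A.sum(axis=1)) - A

# Matrix Tree Theorem: any cofactor of L counts the spanning trees
n_trees = round(np.linalg.det(L[1:, 1:]))
print(n_trees)  # 16
```

The sampling step uses the same determinant machinery, but on a Laplacian weighted by the \(\lambda_e\) values, so the cofactors count weighted sums of trees rather than plain tree counts.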
At this point there is only one function left to discuss, laplacian_matrix.
This function already exists within NetworkX at networkx.linalg.laplacianmatrix.laplacian_matrix, and even though it is relatively simple to implement, I’d rather use the existing version than create duplicate code within the project.
A deeper look at the function signature reveals
A deeper look at the function signature reveals
@not_implemented_for("directed")
def laplacian_matrix(G, nodelist=None, weight="weight"):
"""Returns the Laplacian matrix of G.
The graph Laplacian is the matrix L = D - A, where
A is the adjacency matrix and D is the diagonal matrix of node degrees.
Parameters
----------
G : graph
A NetworkX graph
nodelist : list, optional
The rows and columns are ordered according to the nodes in nodelist.
If nodelist is None, then the ordering is produced by G.nodes().
weight : string or None, optional (default='weight')
The edge data key used to compute each value in the matrix.
If None, then each edge has weight 1.
Returns
-------
L : SciPy sparse matrix
The Laplacian matrix of G.
Notes
-----
For MultiGraph/MultiDiGraph, the edges weights are summed.
See Also
--------
to_numpy_array
normalized_laplacian_matrix
laplacian_spectrum
"""
Which is exactly what I need, except that the decorator states it does not support directed graphs, and this algorithm deals with directed graphs. Fortunately, our distribution of spanning trees is over undirected trees (the trees of the directed graph once direction is disregarded), so we can actually use the existing function. The definition given in the Asadpour paper [1] is
\[ L_{i,j} = \left\{ \begin{array}{l l} -\lambda_e & e = (i, j) \in E \\\ \sum_{e \in \delta({i})} \lambda_e & i = j \\\ 0 & \text{otherwise} \end{array} \right. \]
Where \(E\) is defined as “Let \(E\) be the support of graph of \(z^*\) when the direction of the arcs are disregarded” on page 5 of the Asadpour paper. Thus, I can use the existing method without having to create a new one, which will save time and effort on this GSoC project.
In addition to being discussed here, these function stubs have been added to my fork of NetworkX on the bothTSP branch.
The commit, Added function stubs and draft docstrings for the Asadpour algorithm, is visible on my GitHub using that link.
[1] A. Asadpour, M. X. Goemans, A. Madry, S. O. Gharan, and A. Saberi, An o(log n / log log n)-approximation algorithm for the asymmetric traveling salesman problem, Operations Research, 65 (2017), pp. 1043-1061, https://homes.cs.washington.edu/~shayan/atsp.pdf.
[2] V. Kulkarni, Generating random combinatorial objects, Journal of algorithms, 11 (1990), pp. 185–207
Continuing the theme of my last post, we know that the Held-Karp relaxation in the Asadpour asymmetric traveling salesman problem cannot practically be written in the standard matrix form of a linear program. Thus, we need a different method to solve the relaxation, which is where the ellipsoid method comes into play. The ellipsoid method can be used to solve semi-infinite linear programs, which is what the Held-Karp relaxation is.
One of the keys to the ellipsoid method is the separation oracle. From the perspective of the algorithm itself, the oracle is a black-box program which takes a vector and determines whether that vector lies in the feasible region and, if it does not, which constraint it violates.
In the most basic form, the ellipsoid method is a decision algorithm rather than an optimization algorithm, so it terminates once a single, but almost certainly nonoptimal, vector within the feasible region is found. However, we can convert the ellipsoid method into an algorithm which is truly an optimization one. What this means for us is that we can assume that the separation oracle will return a hyperplane.
The hyperplane that the oracle returns is then used to construct the next ellipsoid in the algorithm, which is of smaller volume and contains a half-ellipsoid from the originating ellipsoid. This is, however, a topic for another post. Right now I want to focus on this ‘black-box’ separation oracle.
The reason that the Held-Karp relaxation is semi-infinite is because for a graph with \(n\) vertices, there are \(2^n + 2n\) constraints in the program. A naive approach to the separation oracle would be to check each constraint individually for the input vector, creating a program with \(O(2^n)\) running time. While it would terminate eventually, it certainly would take a long time to do so.
So, we look for a more efficient way to do this. Recall from the Asadpour paper [1] that the Held-Karp relaxation is the following linear program.
\[ \begin{array}{c l l} \text{min} & \sum_{a} c(a)x_a \\\ \text{s.t.} & x(\delta^+(U)) \geqslant 1 & \forall\ U \subset V \text{ and } U \not= \emptyset \\\ & x(\delta^+(v)) = x(\delta^-(v)) = 1 & \forall\ v \in V \\\ & x_a \geqslant 0 & \forall\ a \end{array} \]
The first set of constraints ensures that the output of the relaxation is connected. This is called subtour elimination, and it prevents a solution with multiple disconnected clusters by ensuring that every set of vertices has at least one total outgoing arc (we are currently dealing with fractional arcs). From the perspective of the separation oracle, we do not care about all of the sets of vertices for which \(x(\delta^+(U)) \geqslant 1\); we only need to find one subset of the vertices where \(x(\delta^+(U)) < 1\).
In order to find such a set of vertices \(U \subset V\) where \(x(\delta^+(U)) < 1\), we can find the subset \(U\) with the smallest value of \(x(\delta^+(U))\) over all \(U \subset V\). That is, find the global minimum cut in the complete digraph using the edge capacities given by the input vector to the separation oracle. Using lecture notes by Michel X. Goemans (who is also one of the authors of the Asadpour algorithm this project seeks to implement) [2], we can find such a minimum cut with \(2(n - 1)\) maximum flow calculations.
The algorithm described in section 6.4 of the lecture notes [2] is fairly simple. Let \(S\) and \(T\) be subsets of \(V\) such that the \(s-t\) cut is the global minimum cut for the graph. First, we pick an arbitrary \(s\) in the graph. By definition, \(s\) is either in \(S\) or in \(T\). We now iterate through every other vertex \(t\) in the graph and compute the \(s-t\) and \(t-s\) minimum cuts. If \(s \in S\), then one of the choices of \(t\) will produce the global minimum cut, and the case where \(s \not\in S\) (that is, \(s \in T\)) is covered by the \(t-s\) cuts.
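The loop above can be sketched in plain Python (a toy Edmonds-Karp max flow, not the efficient implementation the complexity bound assumes; the capacity graph is a made-up example):

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp: value of a max s-t flow = capacity of a min s-t cut."""
    residual = {u: dict(vs) for u, vs in cap.items()}
    for u in cap:                      # make sure reverse residual arcs exist
        for v in cap[u]:
            residual.setdefault(v, {}).setdefault(u, 0)
    total = 0
    while True:
        parent = {s: None}             # BFS for an augmenting path
        queue = deque([s])
        while queue and t not in parent:
            u = queue.popleft()
            for v, c in residual[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return total
        path, v = [], t                # walk the path back from t to s
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        bottleneck = min(residual[u][v] for u, v in path)
        for u, v in path:              # push the bottleneck amount of flow
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
        total += bottleneck

def global_min_cut(cap):
    """Fix an arbitrary s; the global min cut is the smallest s-t or t-s
    min cut over all other vertices t, i.e. 2(n - 1) max-flow calls."""
    nodes = list(cap)
    s = nodes[0]
    return min(
        min(max_flow(cap, s, t), max_flow(cap, t, s))
        for t in nodes[1:]
    )

cap = {
    "a": {"b": 2, "c": 2},
    "b": {"a": 2, "c": 1},
    "c": {"a": 1, "b": 1},
}
print(global_min_cut(cap))  # 2, attained by U = {"c"}
```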
According to Goemans [2], the complexity of finding the global min cut in a weighted digraph, using an efficient max-flow algorithm, is \(O(mn^2\log(n^2/m))\).
The second constraint can be checked in \(O(n)\) time with a simple loop. It makes sense to check this one first: it is computationally simpler, so if one of these conditions is violated we can return the violating hyperplane faster.
Now we have reduced the complexity of the oracle from \(O(2^n)\) to the same as finding the global min cut, \(O(mn^2\log(n^2/m))\), which is substantially better. For example, consider an initial graph with 100 vertices. Using the \(O(2^n)\) method, that is \(1.2677 \times 10^{30}\) subsets \(U\) to check, times whatever the cost of actually determining whether the constraint \(x(\delta^+(U)) \geqslant 1\) is violated. For that same complete digraph on 100 vertices, we know that \(n = 100\) and \(m = \binom{100}{2} = 4950\). Using the global min cut approach, the complexity, which includes finding the max flow as well as the number of times it needs to be found, is \(15117042\) or \(1.5117 \times 10^7\), which is faster by a factor of about \(10^{23}\).
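The arithmetic behind these figures can be reproduced in a few lines (a quick standalone check; note that the figure quoted corresponds to taking the logarithm base 10, which the big-O notation leaves unspecified):

```python
from math import comb, log10

n = 100
m = comb(n, 2)                             # 4950 edge variables
brute_force = 2 ** n                       # subsets to check naively, ~1.2677e30
min_cut = m * n ** 2 * log10(n ** 2 / m)   # ~1.5117e7 operations
speedup = brute_force / min_cut            # roughly 10^23
```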
[1] A. Asadpour, M. X. Goemans, A. Madry, S. Oveis Gharan, and A. Saberi, An O(log n / log log n)-approximation algorithm for the asymmetric traveling salesman problem, Operations Research, 65 (2017), pp. 1043-1061, https://homes.cs.washington.edu/~shayan/atsp.pdf.
[2] M. X. Goemans, Lecture notes on flows and cuts, Handout 18, Massachusetts Institute of Technology, Cambridge, MA, 2009 http://www-math.mit.edu/~goemans/18433S09/flowscuts.pdf.
The other day I was homeschooling my kids, and they asked me: “Daddy, can you draw us all possible non-isomorphic graphs of 3 nodes?” Or maybe I asked them that? Either way, we happily drew all possible graphs of 3 nodes, but already for 4 nodes it got hard, and for 5 nodes, plain impossible!
So I thought: let me try to write a brute-force program to do it! I spent a few hours sketching some smart dynamic programming solution to generate these graphs, and went nowhere, as apparently the problem is quite hard. I gave up, and decided to go with a naive approach:
This strategy seemed more reasonable, but writing a “graph-comparator” still felt like a cumbersome task, and more importantly, this part would itself be slow, as I’d still have to go through a whole tree of options for every graph comparison. So after some more head-scratching, I decided to simplify it even further, and use the fact that these days the memory is cheap:
For the first task, I went with the edge list, which made the task identical to generating all binary numbers of length \(\frac{N(N-1)}{2}\) with a recursive function, except instead of writing zeroes you skip edges, and instead of writing ones, you include them. Below is the function that does the trick, and has an additional bonus of listing all edges in a neat orderly way. For every edge \(i \rightarrow j\) we can be sure that \(i\) is lower than \(j\), and also that edges are sorted as words in a dictionary. Which is good, as it restricts the set of possible descriptions a bit, which will simplify our life later.
def make_graphs(n=2, i=None, j=None):
    """Make a graph recursively, by either including, or skipping each edge.
    Edges are given in lexicographical order by construction."""
    out = []
    if i is None:  # First call
        out = [[(0, 1)] + r for r in make_graphs(n=n, i=0, j=1)]
    elif j < n - 1:
        out += [[(i, j + 1)] + r for r in make_graphs(n=n, i=i, j=j + 1)]
        out += [r for r in make_graphs(n=n, i=i, j=j + 1)]
    elif i < n - 1:
        out = make_graphs(n=n, i=i + 1, j=i + 1)
    else:
        out = [[]]
    return out
If you run this function for a small number of nodes (say, \(N=3\)), you can see how it generates all possible graph topologies, but some of the descriptions would actually lead to identical pictures if drawn (graphs 2 and 3 in the list below).
[(0, 1), (0, 2), (1, 2)]
[(0, 1), (0, 2)]
[(0, 1), (1, 2)]
[(0, 1)]
Also, while building a graph from edges means that we’ll never get lonely unconnected points, we can get graphs that are smaller than \(n\) nodes (the last graph in the list above), or graphs that have unconnected parts. It is impossible for \(n=3\), but starting with \(n=4\) we would get things like [(0,1), (2,3)]
, which is technically a graph, but you cannot exactly wear it as a piece of jewelry, as it would fall apart. So at this point I decided to only visualize fully connected graphs of exactly \(n\) vertices.
To continue with the plan, we now need to make a function that for every graph would generate a family of its “alternative representations” (given the constraints of our generator), to make sure duplicates would not slip under the radar. First we need a permutation function, to permute the nodes (you could also use a built-in function in numpy
, but coding this one from scratch is always fun, isn’t it?). Here’s the permutation generator:
def perm(n, s=None):
    """All permutations of n elements."""
    if s is None:
        return perm(n, tuple(range(n)))
    if not s:
        return [[]]
    return [[i] + p for i in s for p in perm(n, tuple([k for k in s if k != i]))]
Now, for any given graph description, we can permute its nodes, sort the \(i,j\) within each edge, sort the edges themselves, remove duplicate alt-descriptions, and remember the list of potential impostors:
def permute(g, n):
    """Create a set of all possible isomorphic codes for a graph,
    as nice hashable tuples. All edges are i<j, and sorted lexicographically."""
    ps = perm(n)
    out = set([])
    for p in ps:
        out.add(
            tuple(sorted([(p[i], p[j]) if p[i] < p[j] else (p[j], p[i]) for i, j in g]))
        )
    return list(out)
Say, for an input description of [(0, 1), (0, 2)]
, the function above returns three “synonyms”:
((0, 1), (1, 2))
((0, 1), (0, 2))
((0, 2), (1, 2))
I suspect there should be a neater way to code that, to avoid using the list → set → list
pipeline to get rid of duplicates, but hey, it works!
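One possible cleanup is to build the set directly with a set comprehension, so no list → set → list round trip is needed. A sketch (it repeats `perm` from above so it runs standalone; the name `permute_set` is my own):

```python
def perm(n, s=None):
    """All permutations of n elements (same function as above)."""
    if s is None:
        return perm(n, tuple(range(n)))
    if not s:
        return [[]]
    return [[i] + p for i in s for p in perm(n, tuple([k for k in s if k != i]))]

def permute_set(g, n):
    """Same output as permute(), but deduplicated by construction."""
    return {
        tuple(sorted((p[i], p[j]) if p[i] < p[j] else (p[j], p[i]) for i, j in g))
        for p in perm(n)
    }
```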
At this point, the only thing that’s missing is the function to check whether the graph comes in one piece, which happens to be a famous and neat algorithm called “Union-Find”. I won’t describe it here in detail, but in short, it goes through all edges and connects nodes to each other in a special way, then counts how many separate connected components (like, chunks of the graph) remain in the end. If all nodes are in one chunk, we like it. If not, I don’t want to see it in my pictures!
def connected(g):
    """Check if the graph is fully connected, with Union-Find."""
    nodes = set([i for e in g for i in e])
    roots = {node: node for node in nodes}

    def _root(node, depth=0):
        if node == roots[node]:
            return (node, depth)
        else:
            return _root(roots[node], depth + 1)

    for i, j in g:
        ri, di = _root(i)
        rj, dj = _root(j)
        if ri == rj:
            continue
        if di <= dj:
            roots[ri] = rj
        else:
            roots[rj] = ri
    return len(set([_root(node)[0] for node in nodes])) == 1
Now we can finally generate the “overkill” list of graphs, filter it, and plot the pics:
def filter(gs, target_nv):
    """Filter all improper graphs: those with not enough nodes,
    those not fully connected, and those isomorphic to previously considered."""
    mem = set({})
    gs2 = []
    for g in gs:
        nv = len(set([i for e in g for i in e]))
        if nv != target_nv:
            continue
        if not connected(g):
            continue
        if tuple(g) not in mem:
            gs2.append(g)
            mem |= set(permute(g, target_nv))
    return gs2
# Main body
NV = 6
gs = make_graphs(NV)
gs = filter(gs, NV)
plot_graphs(gs, figsize=14, dotsize=20)
For plotting the graphs I wrote a small wrapper for the Matplotlib-based NetworkX visualizer, splitting the figure into lots of tiny facets using the Matplotlib subplot
command. The “Kamada-Kawai” layout below is a popular and fast version of a spring-based layout that makes the graphs look really nice.
import matplotlib.pyplot as plt
import networkx as nx
import numpy as np

def plot_graphs(graphs, figsize=14, dotsize=20):
    """Utility to plot a lot of graphs from an array of graphs.
    Each graph is a list of edges; each edge is a tuple."""
    n = len(graphs)
    fig = plt.figure(figsize=(figsize, figsize))
    fig.patch.set_facecolor("white")  # To make copying possible (white background)
    k = int(np.sqrt(n))
    for i in range(n):
        plt.subplot(k + 1, k + 1, i + 1)
        g = nx.Graph()  # Generate a NetworkX object
        for e in graphs[i]:
            g.add_edge(e[0], e[1])
        nx.draw_kamada_kawai(g, node_size=dotsize)
        print(".", end="")
Here are the results. To build the anticipation, let’s start with something trivial: all graphs of 3 nodes:
All graphs of 4 nodes:
All graphs of 5 nodes:
Generating figures above is of course all instantaneous on a decent computer, but for 6 nodes (below) it takes a few seconds:
For 7 nodes (below) it takes about 5-10 minutes. It’s easy to see why: the brute-force approach generates all \(2^{\frac{n(n-1)}{2}}\) possible graphs, which means that the number of operations grows exponentially! Every increase of \(n\) by one gives us \(n-1\) new edges to consider, which means that the time to run the program increases by a factor of about \(2^{n-1}\). For \(n=7\) it brought me from seconds to minutes, for \(n=8\) it would have shifted me from minutes to hours, and for \(n=9\), from hours to months of computation. Isn’t it fun? We are all specialists in exponential growth these days, so here you are :)
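To see the blow-up concretely, here is a quick count of the graphs generated at each \(n\), before any filtering (a standalone check, not from the original post):

```python
from math import comb

for n in range(3, 10):
    # every one of the n*(n-1)/2 possible edges is either present or absent
    print(n, 2 ** comb(n, 2))
```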
The code is available as a Jupyter Notebook on my GitHub. I hope you enjoyed the pictures, and the read! Which of those charms above would bring most luck? Which ones seem best for divination? Let me know what you think! :)
When I, Sidharth Bansal, heard that I got selected in Google Summer of Code (GSoC) 2020 with Matplotlib under NumFOCUS, I was jumping and dancing. In this post, I talk about my past experiences, how I got selected for GSoC with Matplotlib, and my project details. I am grateful to the community :)
I am currently pursuing a Bachelor’s in Technology in Software Engineering at Delhi Technological University, Delhi, India. I started my journey in open source with Public Lab, an open-source organization, as a full-stack Ruby on Rails web developer, where I first did Google Summer of Code. I built a multi-party authentication system, which handles authentication of the user across multiple linked websites like mapknitter.org and spectralworkbench.org with OmniAuth providers like Facebook, Twitter, Google, and GitHub. I also worked on a multi-tag subscription project there: it lets users subscribe to tags/categories so that they are notified of subsequent posts in the categories they subscribe to. I have also mentored there for Google Code-in and GSoC last year, and worked there as a freelancer.
Apart from this, I also successfully completed an internship on the Google Payments team at Google, India this year as a Software Engineering Intern. I built a PAN collection flow there: PAN (taxation number) information is collected from the user if the total amount claimed by the user through scratch cards in the current financial year exceeds PAN_LIMIT. The flow triggers the PAN UI at the time of scratching the reward, and enables Paisa offers to uplift their limit to grant scratch cards after crossing PAN_LIMIT. I used different technologies like Java, Guice, Android, Spanner queues, Protocol Buffers, JUnit, etc.
I also have a keen interest in Machine Learning and Natural Language Processing and have done a couple of projects at my university. I have researched query expansion using fuzzy logic
, which I will be publishing in some time. It involves the fuzzification of the traditional WordNet for query expansion.
Our paper Experimental Comparison & Scientometric Inspection of Research for Word Embeddings
was accepted by an ESCI journal and Springer LNN this past week. It explains the ongoing trends in universal embeddings and compares them.
I chose Matplotlib as it is an organization with so much cool stuff related to plotting. I have always wanted to work on such things. People are really friendly, always eager to help!
The first step is getting involved with the community. I started using the Gitter channel to get to know the maintainers, and started learning the different pieces that tie into the baseline image problem. I began with the system architecture of Matplotlib, then installed it and learned the cool tech stack around it, like Sphinx, Python, PyPI, etc.
Learning is a continuous task. Taking guidance from my mentors about the various use-case scenarios involved in the GSoC project helped me gain a lot of insight. I solved a couple of small issues, learned about the code-review process followed here, the Sphinx documentation, and how releases work, and made some PRs. It was a great learning experience.
The project is about generating baseline images instead of downloading them. The baseline images are problematic because they cause the repository size to grow rather quickly as more baseline images are added. They also force Matplotlib contributors to pin to a somewhat old version of FreeType, because nearly every release of FreeType causes tiny rasterization changes that would entail regenerating all baseline images, causing even more repository size growth. The idea is to not store the baseline images in the GitHub repository at all. It involves dividing the Matplotlib package into two separate packages: mpl-test and mpl-notest. Mpl-test will have the test suite and related information, while the functionality of the mpl plotting library will be present in mpl-notest. We will then create the logic for generating and grabbing the latest release, with some caching, and implement an analogous strategy on the CI.
Mentor: Antony Lee
Thanks a lot for reading… I am having a great time coding with great people at Matplotlib. I will be back with my work progress in subsequent posts.
In linear programming, we sometimes need to take what would be an integer program and ‘relax’ it, or unbound the values of the variables so that they are continuous. One particular application of this process is the Held-Karp relaxation used in the first part of the Asadpour algorithm for the Asymmetric Traveling Salesman Problem, where we find the lower bound of the approximation. Normally the relaxation is written as follows.
\[ \begin{array}{c l l} \text{min} & \sum_{a} c(a)x_a \\\ \text{s.t.} & x(\delta^+(U)) \geqslant 1 & \forall\ U \subset V \text{ and } U \not= \emptyset \\\ & x(\delta^+(v)) = x(\delta^-(v)) = 1 & \forall\ v \in V \\\ & x_a \geqslant 0 & \forall\ a \end{array} \]
This is a convenient way to write the program, but if we want to solve it, and we definitely do, we need it written in standard form for a linear program. Standard form is represented using a matrix for the set of constraints and vectors for the objective function. It is shown below
\[ \begin{array}{c l} \text{min} & Z = c^TX \\\ \text{s.t.} & AX = b \\\ & X \geqslant 0 \end{array} \]
Where \(c\) is the coefficient vector for the objective function, \(X\) is the vector of the values of all of the variables, \(A\) is the coefficient matrix for the constraints, and \(b\) is a vector of what the constraints are equal to. Once a linear program is in this form, there are efficient algorithms which can solve it.
In the Held-Karp relaxation, the objective function is a summation, so we can expand it. If there are \(n\) edges then it becomes
\[ \sum_{a} c(a)x_a = c(1)x_1 + c(2)x_2 + c(3)x_3 + \dots + c(n)x_n \]
Where \(c(a)\) is the weight of that edge in the graph. From here it is easy to convert the objective function into two vectors which satisfy the standard form.
\[ \begin{array}{rCl} c &=& \begin{bmatrix} c_1 & c_2 & c_3 & \dots & c_n \end{bmatrix}^T \\\ X &=& \begin{bmatrix} x_1 & x_2 & x_3 & \dots & x_n \end{bmatrix}^T \end{array} \]
Now we have to convert the constraints to standard form. First and foremost, notice that the Held-Karp relaxation contains \(x_a \geqslant 0\ \forall\ a\) and the standard form uses \(X \geqslant 0\), so these constraints match already and no work is needed. As for the others… well, they do need some work.
Starting with the first constraint in the Held-Karp relaxation, \(x(\delta^+(U)) \geqslant 1\ \forall\ U \subset V\) and \(U \not= \emptyset\). This constraint specifies that every subset of the vertex set \(V\) must have at least one arc with its tail in \(U\) and its head not in \(U\). For any given \(\delta^+(U)\), which the paper defines as \(\delta^+(U) = \{a = (u, v) \in A : u \in U, v \not\in U\}\), where \(A\) here is the set of all arcs in the graph, the coefficients on arcs not in \(\delta^+(U)\) are zero. Arcs in \(\delta^+(U)\) have a coefficient of \(1\) as their full weight is counted as part of \(x(\delta^+(U))\). We know that there are about \(2^{|V|}\) subsets of the vertex set \(V\), so this constraint adds that many rows to the constraint matrix \(A\).
Moving to the next constraint, \(x(\delta^+(v)) = x(\delta^-(v)) = 1\), we first need to split it in two.
\[ \begin{array}{rCl} x(\delta^+(v)) &=& 1 \\\ x(\delta^-(v)) &=& 1 \end{array} \]
Similar to the last constraint, each of these says that the number of arcs entering and leaving a vertex in the graph must equal one. For each vertex \(v\) we find all the arcs which start at \(v\); those are the members of \(\delta^+(v)\), so they have a coefficient of 1 and all others have a coefficient of zero. The opposite is true for \(\delta^-(v)\): every arc whose head is \(v\) has a coefficient of 1 while the rest have a coefficient of zero. This adds \(2 \times |V|\) rows to the coefficient matrix \(A\), which brings the total to \(2^{|V|} + 2|V|\) rows.
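To make the row construction concrete, here is a small sketch (the helper name `degree_rows` and the fixed arc ordering are my own illustration, not from the post):

```python
def degree_rows(arcs, v):
    """Build the two degree-constraint rows for vertex v, given a fixed
    ordering of the arc variables. Each row has one entry per arc."""
    out_row = [1 if a[0] == v else 0 for a in arcs]  # x(delta^+(v)) = 1
    in_row = [1 if a[1] == v else 0 for a in arcs]   # x(delta^-(v)) = 1
    return out_row, in_row

# Complete bi-directed graph on 3 vertices: 6 arcs
arcs = [(0, 1), (1, 0), (0, 2), (2, 0), (1, 2), (2, 1)]
out_row, in_row = degree_rows(arcs, 1)
```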
We already know that \(A\) will have \(2^{|V|} + 2|V|\) rows. But how many columns will \(A\) have? We know that each arc is a variable, so at least \(|E|\) columns, but in a traditional matrix form of a linear program, we have to introduce slack and surplus variables so that \(AX = b\) and not \(AX \geqslant b\) or any other inequality. The \(2|V|\) rows already comply with this requirement, but the rows created from the subsets of \(V\) do not: those rows only require that \(x(\delta^+(U)) \geqslant 1\), so we introduce a surplus variable for each of these rows, bringing the column count to \(|E| + 2^{|V|}\).
Now, the Held-Karp relaxation performed in the Asadpour algorithm is done on the complete bi-directed graph. For a graph with \(n\) vertices, there will be \(2 \times \binom{n}{2}\) arcs in the graph. The updated size of \(A\) is then
\[ \left(2^n + 2n \right)\times \left(2\binom{n}{2} + 2^n\right) \]
matrix. This is very large. For \(n = 100\) there are \(1.606 \times 10^{60}\) elements in the matrix. Allocating a measly 8 bits per entry still consumes over \(1.28 \times 10^{52}\) gigabits of memory.
This is an impossible amount of memory for any computer that we could run NetworkX on.
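A few lines of Python confirm the scale (a standalone check; `math.comb` computes the binomial coefficient):

```python
from math import comb

n = 100
rows = 2 ** n + 2 * n              # subset constraints + degree constraints
cols = 2 * comb(n, 2) + 2 ** n     # arc variables + surplus variables
elements = rows * cols             # ~1.6e60 matrix entries
gigabytes = elements / 1e9         # at one byte per entry, ~1.6e51 GB
```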
The Held-Karp relaxation must be solved in the Asadpour Asymmetric Traveling Salesman Problem algorithm, but clearly putting it into standard form is not possible. This means that we will not be able to use SciPy’s linprog method, which I was hoping to use. I will instead have to research and write an ellipsoid method solver, which hopefully will be able to solve the Held-Karp relaxation in both polynomial time and a practical amount of memory.
Let’s make up some numbers, put them in a Pandas dataframe and plot them:
import pandas as pd
import matplotlib.pyplot as plt
df = pd.DataFrame({'A': [1, 3, 9, 5, 2, 1, 1],
'B': [4, 5, 5, 7, 9, 8, 6]})
df.plot(marker='o')
plt.show()
Not bad, but somewhat ordinary. Let’s customize it by using Seaborn’s dark style, as well as changing background and font colors:
plt.style.use("seaborn-dark")
for param in ['figure.facecolor', 'axes.facecolor', 'savefig.facecolor']:
    plt.rcParams[param] = '#212946'  # bluish dark grey
for param in ['text.color', 'axes.labelcolor', 'xtick.color', 'ytick.color']:
    plt.rcParams[param] = '0.9'  # very light grey
ax.grid(color='#2A3459') # bluish dark grey, but slightly lighter than background
It looks more interesting now, but we need our colors to shine more against the dark background:
fig, ax = plt.subplots()
colors = [
'#08F7FE', # teal/cyan
'#FE53BB', # pink
'#F5D300', # yellow
'#00ff41', # matrix green
]
df.plot(marker='o', ax=ax, color=colors)
Now, how to get that neon look? To make it shine, we redraw the lines multiple times, with a low alpha value and slightly increasing linewidth. The overlap creates the glow effect.
n_lines = 10
diff_linewidth = 1.05
alpha_value = 0.03
for n in range(1, n_lines+1):
    df.plot(marker='o',
            linewidth=2+(diff_linewidth*n),
            alpha=alpha_value,
            legend=False,
            ax=ax,
            color=colors)
For some more fine tuning, we color the area below the line (via ax.fill_between
) and adjust the axis limits.
Here’s the full code:
import pandas as pd
import matplotlib.pyplot as plt
plt.style.use("dark_background")
for param in ['text.color', 'axes.labelcolor', 'xtick.color', 'ytick.color']:
    plt.rcParams[param] = '0.9'  # very light grey
for param in ['figure.facecolor', 'axes.facecolor', 'savefig.facecolor']:
    plt.rcParams[param] = '#212946'  # bluish dark grey
colors = [
'#08F7FE', # teal/cyan
'#FE53BB', # pink
'#F5D300', # yellow
'#00ff41', # matrix green
]
df = pd.DataFrame({'A': [1, 3, 9, 5, 2, 1, 1],
'B': [4, 5, 5, 7, 9, 8, 6]})
fig, ax = plt.subplots()
df.plot(marker='o', color=colors, ax=ax)
# Redraw the data with low alpha and slightly increased linewidth:
n_shades = 10
diff_linewidth = 1.05
alpha_value = 0.3 / n_shades
for n in range(1, n_shades+1):
    df.plot(marker='o',
            linewidth=2+(diff_linewidth*n),
            alpha=alpha_value,
            legend=False,
            ax=ax,
            color=colors)
# Color the areas below the lines:
for column, color in zip(df, colors):
    ax.fill_between(x=df.index,
                    y1=df[column].values,
                    y2=[0] * len(df),
                    color=color,
                    alpha=0.1)
ax.grid(color='#2A3459')
ax.set_xlim([ax.get_xlim()[0] - 0.2, ax.get_xlim()[1] + 0.2]) # to not have the markers cut off
ax.set_ylim(0)
plt.show()
If this helps you or if you have constructive criticism, I’d be happy to hear about it! Please contact me via here or here. Thanks!
As has been discussed in detail in Nadia Eghbal’s Roads and Bridges, the CZI EOSS program announcement, and in the NumFocus sustainability program goals, much of the critical software that science and industry are built on is maintained by a primarily volunteer community. While this has worked, it is not sustainable in the long term for the health of many projects or their contributors.
We are happy to announce that we have hired Elliott Sales de Andrade (QuLogic) as the Matplotlib Software Research Engineering Fellow supported by the Chan Zuckerberg Initiative Essential Open Source Software for Science effective March 1, 2020!
Elliott has been contributing to a broad variety of Free and Open Source projects for several years. He is an active Matplotlib contributor and has had commit rights since October 2015. In addition to working on Matplotlib, Elliott has contributed to a wide range of projects in the Scientific Python software stack, both downstream and upstream of Matplotlib, including Cartopy, ObsPy, and NumPy. Outside of Python, Elliott is a developer on the Pidgin project and a packager for Fedora Linux. In his work on Matplotlib, he is interested in advancing science through reproducible workflows and more accessible libraries.
We are already seeing a reduction in the backlog of open issues and pull requests, which we hope will make the library easier to contribute to and maintain long term. We also benefit from Elliott having the bandwidth to maintain a library wide view of all the on-going work and open bugs. Hiring Elliott as an RSEF is the start of ensuring that Matplotlib is sustainable in the long term.
Looking forward to all the good work we are going to do this year!
This is my first post for the Matplotlib blog so I wanted to lead with an example of what I most love about it: How much control Matplotlib gives you. I like to use it as a programmable drawing tool that happens to be good at plotting data.
The default layout for Matplotlib works great for a lot of things, but sometimes you want to exert more control. Sometimes you want to treat your figure window as a blank canvas and create diagrams to communicate your ideas. Here, we will walk through the process for setting this up. Most of these tricks are detailed in this cheat sheet for laying out plots.
import matplotlib.pyplot as plt
import numpy as np
The first step is to choose the size of your canvas.
(Just a heads up, I love the metaphor of the canvas, so that’s how I am using the term here. The Canvas object is a very specific thing in the Matplotlib code base. That’s not what I’m referring to.)
I’m planning to make a diagram that is 16 centimeters wide and 9 centimeters high. This will fit comfortably on a piece of A4 or US Letter paper and will be almost twice as wide as it is high. It also scales up nicely to fit on a wide-format slide presentation.
The plt.figure()
function accepts a figsize
argument,
a tuple of (width, height)
in inches.
To convert from centimeters, we’ll divide by 2.54.
fig_width = 16 # cm
fig_height = 9 # cm
fig = plt.figure(figsize=(fig_width / 2.54, fig_height / 2.54))
The next step is to add an Axes object that we can draw on. By default, Matplotlib will size and place the Axes to leave a little border and room for x- and y-axis labels. However, we don’t want that this time around. We want our Axes to extend right up to the edge of the Figure.
The add_axes()
function lets us specify exactly where to place
our new Axes and how big to make it. It accepts a tuple of the format
(left, bottom, width, height)
. The coordinate frame of the Figure
is always (0, 0) at the bottom left corner and (1, 1) at the upper right,
no matter what size of Figure you are working with. Positions, widths,
and heights all become fractions of the total width and height of the Figure.
To fill the Figure with our Axes entirely, we specify a left position of 0, a bottom position of 0, a width of 1, and a height of 1.
ax = fig.add_axes((0, 0, 1, 1))
To make our diagram creation easier, we can set the axis limits so that one unit in the figure equals one centimeter. This grants us an intuitive way to control the size of objects in the diagram. A circle with a radius of 2 will be drawn as a circle (not an ellipse) in the final image and have a radius of 2 cm.
ax.set_xlim(0, fig_width)
ax.set_ylim(0, fig_height)
We can also do away with the automatically generated ticks and tick labels with this pair of calls.
ax.tick_params(bottom=False, top=False, left=False, right=False)
ax.tick_params(labelbottom=False, labeltop=False, labelleft=False, labelright=False)
At this point we have a big blank space of exactly the right size and shape. Now we can begin building our diagram. The foundation of the image will be the background color. White is fine, but sometimes it’s fun to mix it up. Here are some ideas to get you started.
ax.set_facecolor("antiquewhite")
We can also add a border to the diagram to visually set it apart.
ax.spines["top"].set_color("midnightblue")
ax.spines["bottom"].set_color("midnightblue")
ax.spines["left"].set_color("midnightblue")
ax.spines["right"].set_color("midnightblue")
ax.spines["top"].set_linewidth(4)
ax.spines["bottom"].set_linewidth(4)
ax.spines["left"].set_linewidth(4)
ax.spines["right"].set_linewidth(4)
Now we have a foundation and background in place and we’re finally ready to start drawing. You have complete freedom to draw curves and shapes, place points, and add text of any variety within our 16 x 9 garden walls.
Then when you’re done, the last step is to save the figure out as a
.png
file. In this format it can be imported to and added to whatever
document or presentation you’re working on.
fig.savefig("blank_diagram.png", dpi=300)
If you’re making a collection of diagrams, you can make a convenient template for your blank canvas.
def blank_diagram(
    fig_width=16, fig_height=9, bg_color="antiquewhite", color="midnightblue"
):
    fig = plt.figure(figsize=(fig_width / 2.54, fig_height / 2.54))
    ax = fig.add_axes((0, 0, 1, 1))
    ax.set_xlim(0, fig_width)
    ax.set_ylim(0, fig_height)
    ax.set_facecolor(bg_color)
    ax.tick_params(bottom=False, top=False, left=False, right=False)
    ax.tick_params(labelbottom=False, labeltop=False, labelleft=False, labelright=False)
    ax.spines["top"].set_color(color)
    ax.spines["bottom"].set_color(color)
    ax.spines["left"].set_color(color)
    ax.spines["right"].set_color(color)
    ax.spines["top"].set_linewidth(4)
    ax.spines["bottom"].set_linewidth(4)
    ax.spines["left"].set_linewidth(4)
    ax.spines["right"].set_linewidth(4)
    return fig, ax
Then you can take that canvas and add arbitrary text, shapes, and lines.
fig, ax = blank_diagram()
for x0 in np.arange(-3, 16, 0.5):
    ax.plot([x0, x0 + 3], [0, 9], color="black")
fig.savefig("stripes.png", dpi=300)
Or more intricately:
fig, ax = blank_diagram()
centers = [(3.5, 6.5), (8, 6.5), (12.5, 6.5), (8, 2.5)]
radii = 1.5
texts = [
    "\n".join(["My roommate", "is a Philistine", "and a boor"]),
    "\n".join(["My roommate", "ate the last", "of the", "cold cereal"]),
    "\n".join(["I am really", "really hungry"]),
    "\n".join(["I'm annoyed", "at my roommate"]),
]
# Draw circles with text in the center
for i, center in enumerate(centers):
    x, y = center
    theta = np.linspace(0, 2 * np.pi, 100)
    ax.plot(
        x + radii * np.cos(theta),
        y + radii * np.sin(theta),
        color="midnightblue",
    )
    ax.text(
        x,
        y,
        texts[i],
        horizontalalignment="center",
        verticalalignment="center",
        color="midnightblue",
    )
# Draw arrows connecting them
# https://e2eml.school/matplotlib_text.html#annotate
ax.annotate(
    "",
    (centers[1][0] - radii, centers[1][1]),
    (centers[0][0] + radii, centers[0][1]),
    arrowprops=dict(arrowstyle="-|>"),
)
ax.annotate(
    "",
    (centers[2][0] - radii, centers[2][1]),
    (centers[1][0] + radii, centers[1][1]),
    arrowprops=dict(arrowstyle="-|>"),
)
ax.annotate(
    "",
    (centers[3][0] - 0.7 * radii, centers[3][1] + 0.7 * radii),
    (centers[0][0] + 0.7 * radii, centers[0][1] - 0.7 * radii),
    arrowprops=dict(arrowstyle="-|>"),
)
ax.annotate(
    "",
    (centers[3][0] + 0.7 * radii, centers[3][1] + 0.7 * radii),
    (centers[2][0] - 0.7 * radii, centers[2][1] - 0.7 * radii),
    arrowprops=dict(arrowstyle="-|>"),
)
fig.savefig("causal.png", dpi=300)
Once you get started on this path, you can start making extravagantly annotated plots. It can elevate your data presentations to true storytelling.
Happy diagram building!
This post will outline how we can leverage gridspec to create ridgeplots in Matplotlib. While this is a relatively straightforward tutorial, some experience working with sklearn would be beneficial. Since sklearn is naturally a vast undertaking, this will not be an sklearn tutorial; those interested can read through the docs here. However, I will use its KernelDensity class from sklearn.neighbors.
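Since KernelDensity will do the heavy lifting later, it helps to see what a kernel density estimate actually computes. Below is a pure-NumPy sketch of a Gaussian KDE (the function name `gaussian_kde`, the sample values, and the bandwidth are my own choices for illustration; sklearn's KernelDensity implements the same idea with more kernels and options):

```python
import numpy as np

def gaussian_kde(samples, xs, bandwidth=0.1):
    """Density at each x: the average of Gaussian bumps centred on each sample."""
    z = (xs[:, None] - samples[None, :]) / bandwidth
    return np.exp(-0.5 * z ** 2).sum(axis=1) / (
        len(samples) * bandwidth * np.sqrt(2 * np.pi)
    )

scores = np.array([0.77, 0.87, 0.39, 0.70, 0.25])  # a handful of test scores
xs = np.linspace(0, 1, 201)
density = gaussian_kde(scores, xs)
```

The bandwidth plays the same smoothing role as the Scott or Silverman rules mentioned below: smaller values give spikier curves, larger values smoother ones.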
import pandas as pd
import numpy as np
from sklearn.neighbors import KernelDensity
import matplotlib as mpl
import matplotlib.pyplot as plt
import matplotlib.gridspec as grid_spec
I’ll be using some mock data I created. You can grab the dataset from GitHub here if you want to play along. The data looks at aptitude test scores broken down by country, age, and sex.
data = pd.read_csv("mock-european-test-results.csv")
| country | age | sex | score |
| --- | --- | --- | --- |
| Italy | 21 | female | 0.77 |
| Spain | 20 | female | 0.87 |
| Italy | 24 | female | 0.39 |
| United Kingdom | 20 | female | 0.70 |
| Germany | 20 | male | 0.25 |
| … | | | |
GridSpec is a Matplotlib module that allows us easy creation of subplots. We can control the number of subplots, the positions, the height, width, and spacing between each. As a basic example, let’s create a quick template. The key parameters we’ll be focusing on are nrows
, ncols
, and width_ratios
.
nrows
and ncols
divide our figure into areas we can add axes to. width_ratios
controls the width of each of our columns. If we create something like GridSpec(2,2,width_ratios=[2,1])
, we are subsetting our figure into 2 rows and 2 columns, and setting the column width ratio to 2:1, i.e., the first column will be twice as wide as the second.
What’s great about GridSpec is that now we have created those subsets, we are not bound to them, as we will see below.
Note: I am using my own theme, so plots will look different. Creating custom themes is outside the scope of this tutorial (but I may write one in the future).
gs = (grid_spec.GridSpec(2,2,width_ratios=[2,1]))
fig = plt.figure(figsize=(8,6))
ax = fig.add_subplot(gs[0:1,0])
ax1 = fig.add_subplot(gs[1:,0])
ax2 = fig.add_subplot(gs[0:,1:])
ax_objs = [ax,ax1,ax2]
n = ["",1,2]
i = 0
for ax_obj in ax_objs:
    ax_obj.text(0.5, 0.5, "ax{}".format(n[i]),
                ha="center", color="red",
                fontweight="bold", size=20)
    i += 1
plt.show()
I won’t get into more detail about what everything does here. If you are interested in learning more about figures, axes, and gridspec, Akash Palrecha has written a very nice guide here.
We have a couple of options here. The easiest by far is to stick with the plotting methods built into pandas. All that's needed is to select the column and call plot.kde. This defaults to Scott's bandwidth method, but you can choose Silverman's method, or supply your own bandwidth. Let's use GridSpec again to plot the distribution for each country. First we'll grab the unique country names and create a list of colors.
countries = [x for x in np.unique(data.country)]
colors = ['#0000ff', '#3300cc', '#660099', '#990066', '#cc0033', '#ff0000']
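As a quick aside, the bandwidth options mentioned above can be seen in a minimal sketch (my own, using synthetic scores rather than the tutorial's dataset):

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so this runs anywhere
import numpy as np
import pandas as pd

# synthetic scores, for illustration only
scores = pd.Series(np.random.default_rng(0).normal(0.6, 0.15, 500))

ax = scores.plot.kde()                         # Scott's rule (the default)
scores.plot.kde(bw_method="silverman", ax=ax)  # Silverman's rule
scores.plot.kde(bw_method=0.1, ax=ax)          # a fixed bandwidth of your own
```

Smaller bw_method values follow the data more closely; larger ones smooth more aggressively.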
Next we'll loop through each country and color to plot our data. Unlike above, we will not explicitly declare how many rows we want to plot. The reason is to make our code more dynamic: if we set a specific number of rows and a specific number of axes objects, we're creating inefficient code. This is a bit of an aside, but when creating visualizations you should always aim to reduce and reuse. By reduce, I mean lessening the number of variables we declare and the unnecessary code associated with them. We are plotting data for six countries; what happens if we get data for 20? That's a lot of additional code. Relatedly, by not explicitly declaring those variables we make our code adaptable and ready to be scripted to automatically create new plots when new data of the same kind becomes available.
gs = (grid_spec.GridSpec(len(countries),1))
fig = plt.figure(figsize=(8,6))
i = 0
#creating empty list
ax_objs = []
for country in countries:
    # creating new axes object and appending to ax_objs
    ax_objs.append(fig.add_subplot(gs[i:i+1, 0:]))
    # plotting the distribution
    plot = (data[data.country == country]
            .score.plot.kde(ax=ax_objs[-1], color="#f0f0f0", lw=0.5))
    # grabbing x and y data from the kde plot
    x = plot.get_children()[0]._x
    y = plot.get_children()[0]._y
    # filling the space beneath the distribution
    ax_objs[-1].fill_between(x, y, color=colors[i])
    # setting uniform x and y lims
    ax_objs[-1].set_xlim(0, 1)
    ax_objs[-1].set_ylim(0, 2.2)
    i += 1
plt.tight_layout()
plt.show()
We're not quite at ridge plots yet, but let's look at what's going on here. You'll notice that instead of setting an explicit number of rows, we've set it to the length of our countries list: gs = grid_spec.GridSpec(len(countries), 1). This gives us flexibility for future plotting, with the ability to plot more or fewer countries without needing to adjust the code.
At the top of the for loop we create each axes object: ax_objs.append(fig.add_subplot(gs[i:i+1, 0:])). Before the loop we declared i = 0, so on the first pass we create an axes object spanning row 0 to 1; the next time the loop runs it creates one from row 1 to 2, then 2 to 3, 3 to 4, and so on.
Following this we can use ax_objs[-1] to access the most recently created axes object to use as our plotting area.
Next, we create the kde plot. We assign it to a variable so we can retrieve the x and y values to use in the fill_between that follows.
Once again using GridSpec, we can adjust the spacing between each of the subplots by adding one line outside of the loop, before plt.tight_layout(). The best value will depend on your distributions, so feel free to play around with it:
gs.update(hspace=-0.5)
Now our axes objects are overlapping! Great-ish. Each axes object is hiding the one layered below it. We could just add ax_objs[-1].axis("off") to our for loop, but then we would lose our xticklabels. Instead we will create a variable to access the background patch of each axes object, and we will loop through each line of the border (spine) to turn it off. As we only need the xticklabels for the final plot, we will add an if statement to handle that. We will also add our country labels here. In our for loop we add:
# make background transparent
rect = ax_objs[-1].patch
rect.set_alpha(0)
# remove borders, axis ticks, and labels
ax_objs[-1].set_yticklabels([])
ax_objs[-1].set_ylabel('')
if i == len(countries)-1:
    pass
else:
    ax_objs[-1].set_xticklabels([])
spines = ["top", "right", "left", "bottom"]
for s in spines:
    ax_objs[-1].spines[s].set_visible(False)
country = country.replace(" ", "\n")
ax_objs[-1].text(-0.02, 0, country, fontweight="bold", fontsize=14, ha="center")
As an alternative to the above, we can use the KernelDensity module from sklearn.neighbors to create our distributions. This gives us a bit more control over the bandwidth. The method here is taken from Jake VanderPlas's fantastic Python Data Science Handbook; you can read his full excerpt here. We can reuse most of the above code but need to make a couple of changes. Rather than repeat myself, I'll add the full snippet here so you can see the changes and minor additions (added title and x-axis label).
countries = [x for x in np.unique(data.country)]
colors = ['#0000ff', '#3300cc', '#660099', '#990066', '#cc0033', '#ff0000']
gs = grid_spec.GridSpec(len(countries),1)
fig = plt.figure(figsize=(16,9))
i = 0
ax_objs = []
for country in countries:
    country = countries[i]
    x = np.array(data[data.country == country].score)
    x_d = np.linspace(0, 1, 1000)
    kde = KernelDensity(bandwidth=0.03, kernel='gaussian')
    kde.fit(x[:, None])
    logprob = kde.score_samples(x_d[:, None])
    # creating new axes object
    ax_objs.append(fig.add_subplot(gs[i:i+1, 0:]))
    # plotting the distribution
    ax_objs[-1].plot(x_d, np.exp(logprob), color="#f0f0f0", lw=1)
    ax_objs[-1].fill_between(x_d, np.exp(logprob), alpha=1, color=colors[i])
    # setting uniform x and y lims
    ax_objs[-1].set_xlim(0, 1)
    ax_objs[-1].set_ylim(0, 2.5)
    # make background transparent
    rect = ax_objs[-1].patch
    rect.set_alpha(0)
    # remove borders, axis ticks, and labels
    ax_objs[-1].set_yticklabels([])
    if i == len(countries)-1:
        ax_objs[-1].set_xlabel("Test Score", fontsize=16, fontweight="bold")
    else:
        ax_objs[-1].set_xticklabels([])
    spines = ["top", "right", "left", "bottom"]
    for s in spines:
        ax_objs[-1].spines[s].set_visible(False)
    adj_country = country.replace(" ", "\n")
    ax_objs[-1].text(-0.02, 0, adj_country, fontweight="bold", fontsize=14, ha="right")
    i += 1
gs.update(hspace=-0.7)
fig.text(0.07,0.85,"Distribution of Aptitude Test Results from 18 – 24 year-olds",fontsize=20)
plt.tight_layout()
plt.show()
I’ll finish this off with a little project to put the above code into practice. The data provided also contains information on whether the test taker was male or female. Using the above code as a template, see how you get on creating something like this:
For those more ambitious, this could be turned into a split violin plot with males on one side and females on the other. Is there a way to combine the ridge and violin plot?
I'd love to see what people come back with, so if you do create something, send it to me on Twitter here!
]]>My name is Ted Petrou, founder of Dunder Data, and in this tutorial you will learn how to create the new Tesla Cybertruck using Matplotlib. I was inspired by the image below which was originally created by Lynn Fisher (without Matplotlib).
Before going into detail, let’s jump to the results. Here is the completed recreation of the Tesla Cybertruck that drives off the screen.
A tutorial now follows containing all the steps that create a Tesla Cybertruck that drives. It covers the following topics:
Understanding these topics should give you enough to start animating your own figures in Matplotlib. This tutorial is not suited for those with no Matplotlib experience. You need to understand the relationship between the Figure and Axes and how to use the object-oriented interface of Matplotlib.
We first create a Matplotlib Figure without any Axes (the plotting surface). The function create_axes adds an Axes to the Figure, sets the x-limits to be twice the y-limits (to match the ratio of the figure dimensions, 16 x 8), fills in the background with two different dark colors using fill_between, and adds grid lines to make it easier to plot objects in the exact place you desire. Set the draft parameter to False when you want to remove the grid lines, tick marks, and tick labels.
import numpy as np
import matplotlib.pyplot as plt
fig = plt.Figure(figsize=(16, 8))
def create_axes(draft=True):
    ax = fig.add_subplot()
    ax.grid(True)
    ax.set_ylim(0, 1)
    ax.set_xlim(0, 2)
    ax.fill_between(x=[0, 2], y1=0.36, y2=1, color="black")
    ax.fill_between(x=[0, 2], y1=0, y2=0.36, color="#101115")
    if not draft:
        ax.grid(False)
        ax.axis("off")
create_axes()
fig
Most of the Cybertruck is composed of shapes (patches in Matplotlib terminology): circles, rectangles, and polygons. These shapes are available in the patches Matplotlib module. After importing, we instantiate single instances of these patches and then call the add_patch method to add each patch to the Axes.
For the Cybertruck, I used three kinds of patches: Polygon, Rectangle, and Circle. They each have different parameters available in their constructors. I first constructed the body of the car as four polygons. Two other polygons were used for the rims. Each polygon is provided a list of x, y coordinates where the corner points are located. Matplotlib connects all the points in the order given and fills the shape in with the provided color.
Notice how the Axes is retrieved as the first line of the function. This is used throughout the tutorial.
from matplotlib.patches import Polygon, Rectangle, Circle
def create_body():
    ax = fig.axes[0]
    top = Polygon([[0.62, 0.51], [1, 0.66], [1.6, 0.56]], color="#DCDCDC")
    windows = Polygon(
        [[0.74, 0.54], [1, 0.64], [1.26, 0.6], [1.262, 0.57]], color="black"
    )
    windows_bottom = Polygon(
        [[0.8, 0.56], [1, 0.635], [1.255, 0.597], [1.255, 0.585]], color="#474747"
    )
    base = Polygon(
        [
            [0.62, 0.51],
            [0.62, 0.445],
            [0.67, 0.5],
            [0.78, 0.5],
            [0.84, 0.42],
            [1.3, 0.423],
            [1.36, 0.51],
            [1.44, 0.51],
            [1.52, 0.43],
            [1.58, 0.44],
            [1.6, 0.56],
        ],
        color="#1E2329",
    )
    left_rim = Polygon(
        [
            [0.62, 0.445],
            [0.67, 0.5],
            [0.78, 0.5],
            [0.84, 0.42],
            [0.824, 0.42],
            [0.77, 0.49],
            [0.674, 0.49],
            [0.633, 0.445],
        ],
        color="#373E48",
    )
    right_rim = Polygon(
        [
            [1.3, 0.423],
            [1.36, 0.51],
            [1.44, 0.51],
            [1.52, 0.43],
            [1.504, 0.43],
            [1.436, 0.498],
            [1.364, 0.498],
            [1.312, 0.423],
        ],
        color="#4D586A",
    )
    ax.add_patch(top)
    ax.add_patch(windows)
    ax.add_patch(windows_bottom)
    ax.add_patch(base)
    ax.add_patch(left_rim)
    ax.add_patch(right_rim)
create_body()
fig
I used four Circle patches for each of the tires. You must provide the center and radius. For the innermost circles (the "spokes"), I've set the zorder to 99. The zorder determines how plotted objects are layered on top of each other: the higher the number, the higher up the stack of layers the object is drawn. During the next step, we will draw some rectangles through the tires, and they need to be plotted underneath these spokes.
def create_tires():
    ax = fig.axes[0]
    left_tire = Circle((0.724, 0.39), radius=0.075, color="#202328")
    right_tire = Circle((1.404, 0.39), radius=0.075, color="#202328")
    left_inner_tire = Circle((0.724, 0.39), radius=0.052, color="#15191C")
    right_inner_tire = Circle((1.404, 0.39), radius=0.052, color="#15191C")
    left_spoke = Circle((0.724, 0.39), radius=0.019, color="#202328", zorder=99)
    right_spoke = Circle((1.404, 0.39), radius=0.019, color="#202328", zorder=99)
    left_inner_spoke = Circle((0.724, 0.39), radius=0.011, color="#131418", zorder=99)
    right_inner_spoke = Circle((1.404, 0.39), radius=0.011, color="#131418", zorder=99)
    ax.add_patch(left_tire)
    ax.add_patch(right_tire)
    ax.add_patch(left_inner_tire)
    ax.add_patch(right_inner_tire)
    ax.add_patch(left_spoke)
    ax.add_patch(right_spoke)
    ax.add_patch(left_inner_spoke)
    ax.add_patch(right_inner_spoke)
create_tires()
fig
I used the Rectangle patch to represent the two 'axles' (this isn't the correct term, but you'll see what I mean) going through each tire. You must provide the coordinate of the lower-left corner, a width, and a height. You can also provide an angle (in degrees) to control the orientation. Notice that they go under the spokes plotted above; this is due to their lower zorder.
def create_axles():
    ax = fig.axes[0]
    left_left_axle = Rectangle(
        (0.687, 0.427), width=0.104, height=0.005, angle=315, color="#202328"
    )
    left_right_axle = Rectangle(
        (0.761, 0.427), width=0.104, height=0.005, angle=225, color="#202328"
    )
    right_left_axle = Rectangle(
        (1.367, 0.427), width=0.104, height=0.005, angle=315, color="#202328"
    )
    right_right_axle = Rectangle(
        (1.441, 0.427), width=0.104, height=0.005, angle=225, color="#202328"
    )
    ax.add_patch(left_left_axle)
    ax.add_patch(left_right_axle)
    ax.add_patch(right_left_axle)
    ax.add_patch(right_right_axle)
create_axles()
fig
The front bumper, head light, tail light, and door and window lines are added below. I used regular Matplotlib lines for some of these. Those lines are not patches; they are added directly to the Axes with ax.plot, no add_patch call needed.
def create_other_details():
    ax = fig.axes[0]
    # other details
    front = Polygon(
        [[0.62, 0.51], [0.597, 0.51], [0.589, 0.5], [0.589, 0.445], [0.62, 0.445]],
        color="#26272d",
    )
    front_bottom = Polygon(
        [[0.62, 0.438], [0.58, 0.438], [0.58, 0.423], [0.62, 0.423]], color="#26272d"
    )
    head_light = Polygon(
        [[0.62, 0.51], [0.597, 0.51], [0.589, 0.5], [0.589, 0.5], [0.62, 0.5]],
        color="aqua",
    )
    step = Polygon(
        [[0.84, 0.39], [0.84, 0.394], [1.3, 0.397], [1.3, 0.393]], color="#1E2329"
    )
    # doors
    ax.plot([0.84, 0.84], [0.42, 0.523], color="black", lw=0.5)
    ax.plot([1.02, 1.04], [0.42, 0.53], color="black", lw=0.5)
    ax.plot([1.26, 1.26], [0.42, 0.54], color="black", lw=0.5)
    ax.plot([0.84, 0.85], [0.523, 0.547], color="black", lw=0.5)
    ax.plot([1.04, 1.04], [0.53, 0.557], color="black", lw=0.5)
    ax.plot([1.26, 1.26], [0.54, 0.57], color="black", lw=0.5)
    # window lines
    ax.plot([0.87, 0.88], [0.56, 0.59], color="black", lw=1)
    ax.plot([1.03, 1.04], [0.56, 0.63], color="black", lw=0.5)
    # tail light
    tail_light = Circle((1.6, 0.56), radius=0.007, color="red", alpha=0.6)
    tail_light_center = Circle((1.6, 0.56), radius=0.003, color="yellow", alpha=0.6)
    tail_light_up = Polygon(
        [[1.597, 0.56], [1.6, 0.6], [1.603, 0.56]], color="red", alpha=0.4
    )
    tail_light_right = Polygon(
        [[1.6, 0.563], [1.64, 0.56], [1.6, 0.557]], color="red", alpha=0.4
    )
    tail_light_down = Polygon(
        [[1.597, 0.56], [1.6, 0.52], [1.603, 0.56]], color="red", alpha=0.4
    )
    ax.add_patch(front)
    ax.add_patch(front_bottom)
    ax.add_patch(head_light)
    ax.add_patch(step)
    ax.add_patch(tail_light)
    ax.add_patch(tail_light_center)
    ax.add_patch(tail_light_up)
    ax.add_patch(tail_light_right)
    ax.add_patch(tail_light_down)
create_other_details()
fig
The head light beam has a distinct color gradient that dissipates into the night sky. This is challenging to accomplish. I found an excellent answer on Stack Overflow from user Joe Kington on how to do it. We begin by using the imshow function, which creates images from 3-dimensional arrays. Our image will simply be a rectangle of colors.
We create a 1 x 100 x 4 array that represents 1 row by 100 columns of points of RGBA (red, green, blue, alpha) values. Every point is given the same red, green, and blue values of (0, 1, 1), which represents the color 'aqua'. The alpha value represents opacity and ranges between 0 and 1, with 0 being completely transparent (invisible) and 1 being opaque. We would like the opacity to decrease as the light extends further from the head light (that is, further to the left). The NumPy linspace function is used to create an array of 100 numbers increasing linearly from 0 to 1. This array will be set as the alpha values.
The extent parameter defines the rectangular region where the image will be shown. The four values correspond to xmin, xmax, ymin, and ymax. The 100 alpha values will be mapped onto this region beginning from the left. The array of alphas begins at 0, which means that the very left of this rectangular region will be transparent. The opacity increases moving toward the right side of the rectangle, where it eventually reaches 1.
import matplotlib.colors as mcolors
def create_headlight_beam():
    ax = fig.axes[0]
    z = np.empty((1, 100, 4), dtype=float)
    rgb = mcolors.colorConverter.to_rgb("aqua")
    alphas = np.linspace(0, 1, 100)
    z[:, :, :3] = rgb
    z[:, :, -1] = alphas
    im = ax.imshow(z, extent=[0.3, 0.589, 0.501, 0.505], zorder=1)
create_headlight_beam()
fig
The cloud of points surrounding the headlight beam is even more challenging. This time, a 100 x 100 grid of points is used to control the opacity. The opacity is directly proportional to the vertical distance from the center beam. Additionally, if a point falls outside the diagonal of the rectangle defined by extent, its opacity is set to 0.
def create_headlight_cloud():
    ax = fig.axes[0]
    z2 = np.empty((100, 100, 4), dtype=float)
    rgb = mcolors.colorConverter.to_rgb("aqua")
    z2[:, :, :3] = rgb
    for j, x in enumerate(np.linspace(0, 1, 100)):
        for i, y in enumerate(np.abs(np.linspace(-0.2, 0.2, 100))):
            if x * 0.2 > y:
                z2[i, j, -1] = 1 - (y + 0.8) ** 2
            else:
                z2[i, j, -1] = 0
    im2 = ax.imshow(z2, extent=[0.3, 0.65, 0.45, 0.55], zorder=1)
create_headlight_cloud()
fig
All of our work from above can be placed in a single function that draws the car. This will be used when initializing our animation. Notice that the first line of the function clears the Figure, which removes our Axes. If we didn't clear the Figure, we would keep adding more and more Axes each time this function is called. Since this is our final product, we set draft to False.
def draw_car():
    fig.clear()
    create_axes(draft=False)
    create_body()
    create_tires()
    create_axles()
    create_other_details()
    create_headlight_beam()
    create_headlight_cloud()
draw_car()
fig
Animation in Matplotlib is fairly straightforward. You must create a function that updates the position of the objects in your figure for each frame. This function is called repeatedly for each frame.
In the update function below, we loop through each patch, line, and image in our Axes and reduce the x-value of each plotted object by 0.015. This has the effect of moving the truck to the left. The trickiest part was changing the x and y values of the rectangular tire 'axles' so that the tires appear to rotate. Some basic trigonometry helps calculate this.
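To see why the trigonometry works, here is a quick numeric check (my own sketch, using the tutorial's radius of 0.052): the corner offset (xd, yd) always sits on a circle of that radius, so shifting the corner by the change in offsets rotates the rectangle about the tire's center.

```python
import numpy as np

radius = 0.052  # half the axle length used in the animation
for angle in (315, 319, 323):  # the rectangle's angle over successive frames
    xd = -np.cos(np.pi * angle / 180) * radius
    yd = -np.sin(np.pi * angle / 180) * radius
    # the offset's length is constant, so the corner traces a circle
    assert np.isclose(np.hypot(xd, yd), radius)
```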
Implicitly, Matplotlib passes the update function the frame number as an integer as the first argument. We accept this input as the parameter frame_number. We use it in only one place: to do nothing during the first frame.
Finally, the FuncAnimation class from the animation module is used to construct the animation. We provide it our original Figure, the function to update the Figure (update), a function to initialize the Figure (draw_car), the total number of frames, and any extra arguments used during the update (fargs).
from matplotlib.animation import FuncAnimation
def update(frame_number, x_delta, radius, angle):
    if frame_number == 0:
        return
    ax = fig.axes[0]
    for patch in ax.patches:
        if isinstance(patch, Polygon):
            arr = patch.get_xy()
            arr[:, 0] -= x_delta
        elif isinstance(patch, Circle):
            x, y = patch.get_center()
            patch.set_center((x - x_delta, y))
        elif isinstance(patch, Rectangle):
            xd_old = -np.cos(np.pi * patch.angle / 180) * radius
            yd_old = -np.sin(np.pi * patch.angle / 180) * radius
            patch.angle += angle
            xd = -np.cos(np.pi * patch.angle / 180) * radius
            yd = -np.sin(np.pi * patch.angle / 180) * radius
            x = patch.get_x()
            y = patch.get_y()
            x_new = x - x_delta + xd - xd_old
            y_new = y + yd - yd_old
            patch.set_x(x_new)
            patch.set_y(y_new)
    for line in ax.lines:
        xdata = line.get_xdata()
        line.set_xdata(xdata - x_delta)
    for image in ax.images:
        extent = image.get_extent()
        extent[0] -= x_delta
        extent[1] -= x_delta
animation = FuncAnimation(
    fig, update, init_func=draw_car, frames=110, repeat=False, fargs=(0.015, 0.052, 4)
)
Finally, we can save the animation as an mp4 file (you must have ffmpeg installed for this to work). We set the frames per second (fps) to 30. From above, the total number of frames is 110 (enough to move the truck off the screen), so the video will last nearly four seconds (110 / 30 ≈ 3.7).
animation.save("tesla_animate.mp4", fps=30, bitrate=3000)
I encourage you to add more components to your Cybertruck animation to personalize the creation. I suggest encapsulating each addition with a function as done in this tutorial.
]]>import numpy as np
import matplotlib.pyplot as plt
import matplotlib as mpl
A Top-Down runnable Jupyter Notebook with the exact contents of this blog can be found here
An interactive version of this guide can be accessed on Google Colab
Although a beginner can follow along with this guide, it is primarily meant for people who have at least a basic knowledge of how Matplotlib’s plotting functionality works.
Essentially, if you know how to take 2 NumPy arrays and plot them (using an appropriate type of graph) on 2 different axes in a single figure and give it basic styling, you’re good to go for the purposes of this guide.
If you feel you need some introduction to basic Matplotlib plotting, here’s a great guide that can help you get a feel for introductory plotting using Matplotlib
From here on, I will be assuming that you have gained sufficient knowledge to follow along this guide.
Also, in order to save everyone’s time, I will keep my explanations short, terse and very much to the point, and sometimes leave it for the reader to interpret things (because that’s what I’ve done throughout this guide for myself anyway).
The primary driver in this whole exercise will be code and not text, and I encourage you to spin up a Jupyter notebook and type in and try out everything yourself to make the best use of this resource.
This is not a guide about how to beautifully plot different kinds of data using Matplotlib, the internet is more than full of such tutorials by people who can explain it way better than I can.
This article attempts to explain the workings of some of the foundations of any plot you create using Matplotlib. We will mostly refrain from focusing on what data we are plotting and instead focus on the anatomy of our plots.
Matplotlib has many styles available. We can see the available options using:
plt.style.available
['seaborn-dark',
'seaborn-darkgrid',
'seaborn-ticks',
'fivethirtyeight',
'seaborn-whitegrid',
'classic',
'_classic_test',
'fast',
'seaborn-talk',
'seaborn-dark-palette',
'seaborn-bright',
'seaborn-pastel',
'grayscale',
'seaborn-notebook',
'ggplot',
'seaborn-colorblind',
'seaborn-muted',
'seaborn',
'Solarize_Light2',
'seaborn-paper',
'bmh',
'tableau-colorblind10',
'seaborn-white',
'dark_background',
'seaborn-poster',
'seaborn-deep']
We shall use the seaborn style. This is done like so:
plt.style.use("seaborn")
Let’s get started!
# Creating some fake data for plotting
xs = np.linspace(0, 2 * np.pi, 400)
ys = np.sin(xs**2)
xc = np.linspace(0, 2 * np.pi, 600)
yc = np.cos(xc**2)
The usual way to create a plot using Matplotlib goes somewhat like this:
fig, ax = plt.subplots(2, 2, figsize=(16, 8))
# `fig` is short for Figure. `ax` is short for Axes.
ax[0, 0].plot(xs, ys)
ax[1, 1].plot(xs, ys)
ax[0, 1].plot(xc, yc)
ax[1, 0].plot(xc, yc)
fig.suptitle("Basic plotting using Matplotlib")
plt.show()
Our goal today is to take apart the previous snippet of code and understand all of the underlying building blocks well enough so that we can use them separately and in a much more powerful way.
If you’re a beginner like I was before writing this guide, let me assure you: this is all very simple stuff.
Going into the plt.subplots documentation (hit Shift+Tab+Tab in a Jupyter notebook) reveals some of the other Matplotlib internals it uses in order to give us the Figure and its Axes.
These include :
plt.subplot
plt.figure
mpl.figure.Figure
mpl.figure.Figure.add_subplot
mpl.gridspec.GridSpec
mpl.axes.Axes
Let’s try and figure out what these functions / classes do.
What is a Figure? And what are Axes?
A Figure in Matplotlib is simply your main (imaginary) canvas. This is where you will be doing all your plotting, drawing, placing of images, and whatnot. This is the central object with which you will always be interacting. A figure has a size defined for it at the time of creation.
You can define a figure like so (both statements are equivalent):
fig = mpl.figure.Figure(figsize=(10, 10))
# OR
fig = plt.figure(figsize=(10, 10))
Notice the word imaginary above. What this means is that a Figure by itself does not have any place for you to plot. You need to attach/add an Axes to it to do any kind of plotting. You can put as many Axes objects as you want inside of any Figure you have created.
An Axes is the actual plotting area inside a Figure; it always belongs to a parent Figure, and a single Figure can hold many Axes.
You can create an Axes like so (both statements are equivalent):
ax1 = mpl.axes.Axes(fig=fig, rect=[0, 0, 0.8, 0.8], facecolor="red")
# OR
ax1 = plt.Axes(fig=fig, rect=[0, 0, 0.8, 0.8], facecolor="red")
The first parameter, fig, is simply a pointer to the parent Figure to which the Axes will belong.
The second parameter, rect, takes four numbers, [left, bottom, width, height], defining the position of the Axes inside the Figure and its width and height with respect to the Figure. All four numbers are fractions of the Figure's dimensions, ranging from 0 to 1.
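A minimal sketch of rect in action (my own, not from the guide), using fig.add_axes, which creates the Axes and attaches it to the Figure in one step:

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so this runs anywhere
import matplotlib.pyplot as plt

fig = plt.figure(figsize=(6, 6))
# left=10%, bottom=10%, width=50%, height=30% of the Figure
ax = fig.add_axes([0.1, 0.1, 0.5, 0.3], facecolor="red")
```

The same rect list you would pass to plt.Axes positions the Axes here.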
A Figure simply holds a given number of Axes at any point in time. We will go into some of these design decisions in a few moments.
Recreating plt.subplots with basic Matplotlib functionality
We will try and recreate the plot below using Matplotlib primitives as a way to understand them better. We'll try to be slightly creative by deviating a bit, though.
fig, ax = plt.subplots(2, 2)
fig.suptitle("2x2 Grid")
Text(0.5, 0.98, '2x2 Grid')
# We first need a figure, an imaginary canvas to put things on
fig = plt.Figure(figsize=(6, 6))
# Let's start with two Axes with an arbitrary position and size
ax1 = plt.Axes(fig=fig, rect=[0.3, 0.3, 0.4, 0.4], facecolor="red")
ax2 = plt.Axes(fig=fig, rect=[0, 0, 1, 1], facecolor="blue")