Eighteen years since the release of NumPy 1.0, we are thrilled to announce the launch of NumPy 2.0! This major release marks a significant milestone in the evolution of NumPy, bringing a wealth of enhancements and improvements to users, and setting the stage for future feature development.
NumPy has improved and evolved over the past 18 years, with many releases along the way bringing significant performance, usability, and consistency improvements. That said, our approach for a long time has been to make only incremental changes while carefully managing backward compatibility. This approach minimizes user breakage, but also limits the scope of improvements that can be made, both to the API and its underlying implementation. Therefore, for this one-off major release, we are breaking backward compatibility to implement significant improvements in NumPy’s type system. The type system is fundamental to NumPy, and major behavioral changes could not be made incrementally without mixing two different type systems, which would be a recipe for disaster.
The journey to an actual 2.0 release has been long, and it was difficult to build the necessary momentum. In part, this may be because, for a time, the NumPy developers associated a NumPy 2.0 release with nothing less than a revolutionary rewrite of significant key pieces of the code base. Many of these rewrites and changes happened over the years, but because of backward compatibility concerns they remained largely invisible to the users. NumPy 2.0 is the culmination of these efforts, allowing us to discard some legacy ABI (Application Binary Interface) that prevented future improvements.
Some major changes to NumPy internals—required for key features in 2.0—have been in the works since at least 2019. We started concrete plans for the 2.0 release more than a year ago, at a four-hour-long public planning meeting in April 2023, where many of the key changes were proposed and discussed. The key goals we decided on there were perhaps even larger and more ambitious in scope than some of us expected. This also unlocked some extra energy, which has been great to see. After the meeting and over the course of the last year, NumPy enhancement proposals (NEPs) were written, reviewed, and implemented for each major change.
Some key highlights are:
Cleaned-up and streamlined Python API (NEP 52): The Python API has undergone a thorough cleanup, making it easier to learn and use NumPy. The main namespace has been reduced by approximately 10%, and the more niche `numpy.lib` namespace has been reduced by about 80%, providing a clearer distinction between public and private API elements.
Improved scalar promotion rules: The scalar promotion rules have been updated, as proposed in NEP 50, addressing surprising behaviors in type promotion, e.g. with zero-dimensional arrays.
Powerful new DType API and a new string dtype: NumPy 2.0 introduces a new API for implementing user-defined custom data types, as proposed in NEP 41. We used this new API to implement `StringDType`, offering efficient and painless support for variable-length strings, which was proposed in NEP 55. It is our hope that this will enable future data types with interesting new capabilities, both in the PyData ecosystem and in NumPy itself.
Windows compatibility enhancements: The default 32-bit integer representation on Windows has been updated to 64-bit on 64-bit architectures, addressing one of the most common problems with having NumPy work portably across operating systems.
Support for the Python array API standard: This is the first release to include full support for the array API standard (v2022.12), made possible by the new promotion rules, APIs, and API cleanup mentioned above. We also aligned existing APIs and behavior with the standard, as proposed in NEP 56.
These are just some of the more impactful changes in behavior and usability. In addition, NumPy 2.0 contains significant performance and documentation improvements, and much more - for an extensive list of changes, see the NumPy 2 release notes.
To adopt this major release, users will likely need to adjust existing code, but we worked hard to strike a balance between improvements and ensuring that the transition to NumPy 2.0 is as seamless as possible. We wrote a comprehensive migration guide, and a ruff plugin that helps to update Python code so it will work with both NumPy 1.x and NumPy 2.x.
While we do require C API users to recompile their projects to support NumPy 2.0, we prepared for this in NumPy 1.25 already. The build process was simplified so that you can now compile against the latest NumPy version and remain backward compatible. This means that projects built with NumPy 2.x are “magically” compatible with 1.x. It also means that projects no longer need to build their binaries using the oldest supported version of NumPy.
We knew throughout development that rolling out NumPy 2.0 would be (temporarily) disruptive, because of the backwards-incompatible API and ABI changes. We spent an extraordinary amount of effort communicating these changes, helping downstream projects adapt, tracking compatibility of popular open source projects (see, e.g., numpy#26191), and completing the release process at a measured pace to provide time for adoption. No doubt the next few weeks will bring to light some new challenges; however, we fully expect these to be manageable and well worth it in the long run.
The NumPy 2.0 release is the result of a collaborative, largely volunteer, effort spanning many years and involving contributions from a diverse community of developers. In addition, many of the changes above would not have been possible without funders and institutional sponsors allowing several team members to work on NumPy as part of their day jobs. We’d like to acknowledge in particular: the Gordon and Betty Moore Foundation, the Alfred P. Sloan Foundation, NASA, NVIDIA, Quansight Labs, the Chan Zuckerberg Initiative, and Tidelift.
We are excited about future improvements to NumPy, many of which will be possible due to changes in NumPy 2.0. See the NumPy roadmap for some features in the pipeline or on the wishlist. Let’s continue working together to improve NumPy and the scientific Python and PyData ecosystem!
Given the practical challenges of achieving true randomness, deterministic algorithms known as pseudo random number generators (RNGs) are employed in science to create sequences that mimic randomness. These generators are used for simulations, experiments, and analysis where it is essential to have numbers that appear unpredictable. I want to share here what I have learned about best practices with pseudo RNGs, and especially the ones available in NumPy.
A pseudo RNG works by updating an internal state through a deterministic algorithm. This internal state is initialized with a value known as a seed and each update produces a number that appears randomly generated. The key here is that the process is deterministic, meaning that if you start with the same seed and apply the same algorithm, you will get the same sequence of internal states (and numbers). Despite this determinism, the resulting numbers exhibit properties of randomness, appearing unpredictable and evenly distributed. Users can either specify the seed manually, providing a degree of control over the generated sequence, or they can opt to let the RNG object automatically derive the seed from system entropy. The latter approach enhances unpredictability by incorporating external factors into the seed.
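A tiny illustration of this determinism, using NumPy's `default_rng` (introduced in NumPy 1.17 and discussed throughout this post):

```python
import numpy as np

# The same seed and the same algorithm produce the same sequence of numbers.
rng1 = np.random.default_rng(12345)
rng2 = np.random.default_rng(12345)
same = [rng1.random() for _ in range(3)] == [rng2.random() for _ in range(3)]

# Omitting the seed derives it from system entropy, so each run of the
# program gets a different (unpredictable) sequence.
rng3 = np.random.default_rng()
```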
I assume a certain knowledge of NumPy and that NumPy 1.17 or greater is used. The reason for this is that great new features were introduced in the `random` module of version 1.17. As `numpy` is usually imported as `np`, I will sometimes use `np` instead of `numpy`. Finally, RNG will always mean pseudo RNG in the rest of this blog post.
There are two ways of generating random numbers with NumPy:

- The legacy way: use `np.random.seed` and `np.random.*` functions, such as `np.random.random`, to generate random values.
- The new way: create a new RNG with the `np.random.default_rng` function.

Note that, with older versions of NumPy (<1.17), the way to create a new RNG is to use `np.random.RandomState`, which is based on the popular Mersenne Twister 19937 algorithm. This is also how the global NumPy RNG is created. This function is still available in newer versions of NumPy, but it is now recommended to use `default_rng` instead, which returns an instance of the statistically better PCG64 RNG. You might still see `np.random.RandomState` being used in tests, as it has strong stability guarantees between different NumPy versions.
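The two generator types can be compared side by side; a small sketch:

```python
import numpy as np

# Legacy generator: Mersenne Twister 19937. Still available, with strong
# stream-stability guarantees across NumPy versions (hence its use in tests).
legacy = np.random.RandomState(12345)
legacy_sample = legacy.random_sample(3)

# Recommended generator: returned by default_rng, backed by PCG64.
rng = np.random.default_rng(12345)
new_sample = rng.random(3)

# Same seed, different algorithms: the two streams are unrelated.
```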
When you import `numpy` in your Python script, an RNG is created behind the scenes. This RNG is the one used when you generate a new random value using a function such as `np.random.random`. I will here refer to this RNG as the global NumPy RNG.
Although not recommended, it is a common practice to reset the seed of this global RNG at the beginning of a script using the `np.random.seed` function. Fixing the seed at the beginning ensures that the script is reproducible: the same values and results will be produced each time you run it. However, although sometimes convenient, using the global NumPy RNG is bad practice. A simple reason is that using global variables can lead to undesired side effects: for instance, one might use `np.random.random` without knowing that the seed of the global RNG was set somewhere else in the codebase. Quoting NumPy Enhancement Proposal (NEP) 19 by Robert Kern:
The implicit global RandomState behind the `np.random.*` convenience functions can cause problems, especially when threads or other forms of concurrency are involved. Global state is always problematic. We categorically recommend avoiding using the convenience functions when reproducibility is involved. […] The preferred best practice for getting reproducible pseudorandom numbers is to instantiate a generator object with a seed and pass it around.
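The hidden-global-state problem is easy to reproduce. In this sketch, `library_code` is a hypothetical function standing in for any code that reseeds the global RNG behind your back:

```python
import numpy as np

def library_code():
    # Hidden side effect: reseeds the global NumPy RNG.
    np.random.seed(0)

np.random.seed(123)
before = np.random.random()

np.random.seed(123)
library_code()          # silently discards your seed
after = np.random.random()

# The two draws differ even though we seeded identically both times:
# the global state was mutated elsewhere.
```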
In short: instead of using `np.random.seed`, which reseeds the already created global NumPy RNG, and then using `np.random.*` functions, you should create a new RNG. To create a new RNG you can use the `default_rng` function, as illustrated in the introduction of the `random` module documentation:
```python
import numpy as np

rng = np.random.default_rng()
rng.random()  # generate a floating point number between 0 and 1
```
If you want to use a seed for reproducibility, the NumPy documentation recommends using a large random number, where large means at least 128 bits. The first reason for using a large random number is that this increases the probability of having a different seed than anyone else and thus independent results. The second reason is that relying only on small numbers for your seeds can lead to biases as they do not fully explore the state space of the RNG. This limitation implies that the first number generated by your RNG may not seem as random as expected due to inaccessible first internal states. For example, some numbers will never be produced as the first output. One possibility would be to pick the seed at random in the state space of the RNG but according to Robert Kern a 128-bit random number is large enough^{1}. To generate a 128-bit random number for your seed you can rely on the secrets module:
```python
import secrets

secrets.randbits(128)
```
When running this code I got `65647437836358831880808032086803839626` as the number to use as my seed. This number is randomly generated, so you need to copy-paste the value returned by `secrets.randbits(128)`; otherwise you will have a different seed each time you run your code and thus break reproducibility:
```python
import numpy as np

seed = 65647437836358831880808032086803839626
rng = np.random.default_rng(seed)
rng.random()
```
The reason for seeding your RNG only once (and passing that RNG around) is that with a good RNG such as the one returned by `default_rng` you are ensured good randomness and independence of the generated numbers. However, if not done properly, using several RNGs (each one created with its own seed) might lead to streams of random numbers that are less independent than the ones created from the same seed^{2}. That being said, as explained by Robert Kern, with the RNGs and seeding strategies introduced in NumPy 1.17, it is considered fairly safe to create RNGs from system entropy, i.e. by calling `default_rng(None)` multiple times. However, as explained later, be careful when running jobs in parallel and relying on `default_rng(None)`. Another reason for seeding your RNG only once is that obtaining a good seed can be time consuming. Once you have a good seed to instantiate your generator, you might as well use it.
As you write functions that you will use on their own as well as in more complex scripts, it is convenient to be able to pass either a seed or an already created RNG. The `default_rng` function allows you to do this very easily. As written above, it can be used to create a new RNG from your chosen seed (if you pass a seed to it) or from system entropy (when passing `None`), but you can also pass an already created RNG: in this case the returned RNG is the one that you passed.
```python
import numpy as np

def stochastic_function(high=10, rng=None):
    rng = np.random.default_rng(rng)
    return rng.integers(high, size=5)
```
You can either pass an `int` seed or your already created RNG to `stochastic_function`. To be perfectly exact, the `default_rng` function returns the exact same RNG that was passed to it for certain kinds of RNGs, such as the ones created with `default_rng` itself. You can refer to the `default_rng` documentation for more details on the arguments that you can pass to this function^{3}.
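This pass-through behavior can be checked directly:

```python
import numpy as np

rng = np.random.default_rng(0)

# An existing Generator is returned unaltered...
assert np.random.default_rng(rng) is rng

# ...while a seed (or None) creates a fresh Generator.
assert np.random.default_rng(123) is not rng
assert isinstance(np.random.default_rng(None), np.random.Generator)
```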
You must be careful when using RNGs in conjunction with parallel processing. Let’s consider the context of Monte Carlo simulation: you have a random function returning random outputs and you want to generate these random outputs many times, for instance to compute an empirical mean. If the function is expensive to compute, an easy solution to speed up the computation is to resort to parallel processing. Depending on the parallel processing library or backend that you use, different behaviors can be observed. For instance, if you do not set the seed yourself, it can be the case that forked Python processes use the same random seed, generated for instance from system entropy, and thus produce the exact same outputs, which is a waste of computational resources. A very nice example illustrating this with the Joblib parallel processing library is available here.
If you fix the seed at the beginning of your main script for reproducibility and then pass your seeded RNG to each process to be run in parallel, most of the time this will not give you what you want, as this RNG will be deep-copied and the same results will thus be produced by each process. One solution is to create as many RNGs as parallel processes, with a different seed for each of these RNGs. The issue now is that you cannot choose the seeds as easily as you might think: when you choose two different seeds to instantiate two different RNGs, how do you know that the numbers produced by these RNGs will appear statistically independent?^{2} The design of independent RNGs for parallel processes has been an important research question. See, for example, Random numbers for parallel computers: Requirements and methods, with emphasis on GPUs by L’Ecuyer et al. (2017) for a good summary of different methods.
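The deep-copy pitfall can be demonstrated without any parallel library: copying a seeded generator (which is essentially what process-based parallelism does when it pickles arguments) yields workers that all draw the same numbers. A simplified sketch of the problem:

```python
import numpy as np
from copy import deepcopy

rng = np.random.default_rng(0)

# Each "worker" receives a copy of the same generator state...
worker_rngs = [deepcopy(rng) for _ in range(3)]
draws = [r.random() for r in worker_rngs]

# ...so they all produce identical values: in a Monte Carlo setting,
# this is pure wasted computation.
```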
Starting with NumPy 1.17, it is now very easy to instantiate independent RNGs. Depending on the type of RNG you use, different strategies are available, as documented in the Parallel random number generation section of the NumPy documentation. One of the strategies is to use `SeedSequence`, an algorithm that makes sure that poor input seeds are transformed into good initial RNG states. More precisely, this ensures that you will not have degenerate behavior from your RNG and that the subsequent numbers will appear random and independent. Additionally, it ensures that close seeds are mapped to very different initial states, resulting in RNGs that are, with very high probability, independent of each other. You can refer to the documentation of SeedSequence spawning for examples of how to generate independent RNGs from a `SeedSequence` or an existing RNG. Here I show how to apply this to the joblib example mentioned above.
```python
import numpy as np
from joblib import Parallel, delayed

def stochastic_function(high=10, rng=None):
    rng = np.random.default_rng(rng)
    return rng.integers(high, size=5)

seed = 319929794527176038403653493598663843656
# create the RNG that is passed around
rng = np.random.default_rng(seed)
# create 5 independent RNGs
child_rngs = rng.spawn(5)
# use 2 processes to run stochastic_function 5 times with joblib
random_vector = Parallel(n_jobs=2)(
    delayed(stochastic_function)(rng=child_rng) for child_rng in child_rngs
)
print(random_vector)
```
By using a fixed seed you always get the same results each time you run this code, and by using `rng.spawn` you have an independent RNG for each call to `stochastic_function`. Note that here you could also spawn from a `SeedSequence` that you would create with the seed, instead of creating an RNG. However, since in general you pass around an RNG, I only assume to have access to an RNG. Also note that spawning from an RNG is only possible from version 1.25 of NumPy^{4}.
I hope this blog post helped you understand the best ways to use NumPy RNGs. The new NumPy API gives you all the tools you need for that. The resources below are available for further reading. Finally, I would like to thank Pamphile Roy, Stefan van der Walt and Jarrod Millman for their great feedback and comments, which greatly improved the original version of this blog post.
- A discussion about the `check_random_state` function and RNG good practices, especially this comment by Robert Kern.
- A discussion about what `SeedSequence` can and cannot do. This also explains why it is recommended to use very large random numbers for seeds.

If you only need a seed for reproducibility and do not need independence with respect to others, say for a unit test, a small seed is perfectly fine. ↩︎
A good RNG is expected to produce independent numbers for a given seed. However, the independence of sequences generated from two different seeds is not always guaranteed. For instance, it is possible that the sequence started with the second seed might quickly converge to an internal state also obtained by the first seed. This can result in both RNGs producing the same subsequent numbers, which would compromise the randomness expected from distinct seeds. ↩︎ ↩︎
Before knowing about `default_rng`, and before NumPy 1.17, I was using the scikit-learn function `check_random_state`, which is of course heavily used in the scikit-learn codebase. While writing this post I discovered that this function is now also available in SciPy. A look at the docstring and/or the source code of this function will give you a good idea of what it does. The differences with `default_rng` are that `check_random_state` currently relies on `np.random.RandomState`, and that when `None` is passed to `check_random_state` the function returns the already existing global NumPy RNG. The latter can be convenient because if you fix the seed of the global RNG earlier in your script using `np.random.seed`, `check_random_state` returns the generator that you seeded. However, as explained above, this is not the recommended practice and you should be aware of the risks and side effects. ↩︎
Before 1.25 you need to get the `SeedSequence` from the RNG using the `_seed_seq` private attribute of the underlying bit generator: `rng.bit_generator._seed_seq`. You can then spawn from this `SeedSequence` to get child seeds that will result in independent RNGs. ↩︎
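A sketch of this pre-1.25 workaround; note that `_seed_seq` is a private attribute, so this may break in other NumPy versions:

```python
import numpy as np

rng = np.random.default_rng(12345)

# Grab the SeedSequence backing the generator (private attribute!).
seed_seq = rng.bit_generator._seed_seq

# Spawn child seed sequences and build independent generators from them,
# equivalent to rng.spawn(5) in NumPy >= 1.25.
child_rngs = [np.random.default_rng(s) for s in seed_seq.spawn(5)]
```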
One outcome of the 2023 Scientific Python Developer Summit was the Scientific Python Development Guide, a comprehensive guide to modern Python package development, complete with a new project template supporting 10+ build backends and a WebAssembly-powered checker with checks linked to the guide. The guide covers topics like modern, compiled, and classic packaging, style checks, type checking, docs, task runners, CI, tests, and much more! There are also sections with tutorials, principles, and some common patterns.
This guide (along with cookie & repo-review) started in Scikit-HEP in 2020. During the summit, it was merged with the NSLS-II guidelines, which provided the basis for the principles section. I’d like to thank and acknowledge Dan Allan and Gregory Lee for working tirelessly during the summit to rework, rewrite, merge, and fix the guide, including writing most of the tutorials pages and first patterns page, and rewriting the environment page as a tutorial.
The core of the project is the guide, which is composed of four sections:
From the original Scikit-HEP dev pages, a lot was added:
The infrastructure was updated too:
We also did something I’ve wanted to do for a long time: the guide, the cookiecutter template, and the checks are all in a single repo! The repo is scientific-python/cookie, which is the moved scikit-hep/cookie (the old URL for cookiecutter still works!).
Cookie is a new project template supporting multiple backends (including compiled ones), kept in sync with the dev guide. We recommend starting with the dev guide and setting up your first package by hand, so that you understand what each part is for, but once you’ve done that, cookie allows you to get started on a new project in seconds.
A lot of work went into cookie, too!
- Org-specific integration when targeting `scikit-hep`; the same integration can be offered to other orgs.
Along with this was probably the biggest change, one requested by several people at the summit: scientific-python/repo-review (formerly `scikit-hep/repo-review`) is now a completely general framework for implementing checks in Python 3.10+. The checks have been moved to `sp-repo-review`, which is now part of scientific-python/cookie. There are too many changes to list here, so here are just the key ones from 0.6, 0.7, 0.8, 0.9, and 0.10:
- Configuration via `pyproject.toml` or the command line.
- You can pass a `pyproject.toml` path instead, to make running on mixed repos easier.
- Validation of `[tool.repo-review]` with validate-pyproject.
with validate-pyproject.The full changelog has more - you can even see the 10 beta releases in-between 0.6.x and 0.7.0 where a lot of this refactoring work was happening. If you have configuration you’d like to write check for, feel free to write a plugin!
validate-pyproject 0.14 has added support for being used as a repo-review plugin, so you can validate `pyproject.toml` files with repo-review! This lints the `[project]` and `[build-system]` tables, `[tool.setuptools]`, and other tools via plugins. Scikit-build-core 0.5 can be used as a validate-pyproject plugin to lint `[tool.scikit-build]`. Repo-review has a plugin for `[tool.repo-review]`.
Finally, sp-repo-review contains the previous repo-review plugins with checks:
If you have a guide, we’d like for you to compare it with the Scientific Python
Development Guide, and see if we are missing anything - bring it to our
attention, and maybe we can add it. And then you can link to the centrally
maintained guide instead of manually maintaining a complete custom guide. See
scikit-hep/developer for an example; many pages now point at this guide.
We can also provide org integrations for cookie, providing some
customizations when a user targets your org (targeting scikit-hep
will add a
badge).
In mid-2018 I started learning Python by reading textbooks and watching online tutorials. I had absolutely zero background in computer science, but it seemed interesting so I continued to try. At some point, I decided I wanted to do a master’s degree in statistics, so I began to work on more statistics-based programming. That’s when I found SciPy. I became (and still am) fascinated by the idea of open-source software that is completely free to use and supported by a community of diligent programmers. With plenty of extra time on my hands during the pandemic, I made it my goal to contribute to a Python library. My first contribution was actually to a project called first contributions which walks you through a very basic commit and push to GitHub. That built up my confidence a bit, so I decided to tackle a SciPy issue. It was not easy. I watched several videos and guides on how to contribute to an open-source library but got stuck many times along the way! I have to admit I felt incompetent trying to make changes to this huge library, but the maintainers and community could not have been nicer or more supportive. That’s really the magic of open source. I was confused and lost, but the (largely volunteer) community was amazing.
Eventually, I managed to get a very small commit merged into the main branch of SciPy (which you can see here). Despite being, at most, a few lines of code, this was a huge landmark for me as a programmer. To my surprise though, in early 2021, a little badge popped up on my GitHub profile that said “Mars 2020 Helicopter Contributor”. I was confused. I didn’t recall working on helicopters, much less helicopters that flew on Mars. I still remember getting chills when I read that I had contributed to a library that was used on NASA’s Mars 2020 mission, which involved the robotic helicopter Ingenuity. GitHub posted an article explaining how about 12,000 people received a badge indicating that they had contributed to an open-source library that was used on the mission! Keep in mind that I made an absolutely tiny contribution, but I was extremely proud to be recognized in that way.
In an even stranger twist, this summer, I’m interning at NASA’s Jet Propulsion Laboratory (JPL) which built and still flies the Ingenuity helicopter along with the other robotic space exploration missions. It’s truly surreal to have come full circle like this, from learning Python in my living room during the pandemic to using SciPy in my daily work at JPL. Here, my work involves writing statistical simulations to estimate the probability of system failures during a mission. If you’re reading this and interested in contributing, please know that your contributions to an open-source library, no matter how small, can have an impact larger than you could ever imagine. Collaboration like this is essential to pushing forward the boundaries of science. If you want to contribute, please feel free to reach out to me (@WillTirone) or anyone else in the SciPy community, and we can get you started on the right path. Last, I want to thank the maintainers of SciPy for their endless support and assistance while I was learning the basics of contribution to the library.
The first Scientific Python Developer Summit (May 22-26, 2023) brought together 34 developers at the eScience Institute at the University of Washington to develop shared infrastructure, documentation, tools, and recommendations for libraries in the Scientific Python ecosystem.
Prior to the summit we held several hour-long planning meetings:
At the summit, we had a brief check-in and then split into several groups based on each developer’s time and interests. Raw work progress and logs have been collected in a document; we highlight just a few of the things we accomplished below:
Almost a quarter of the group worked on sparse arrays for the entire week. This work is part of a larger, multi-year effort to improve and expand SciPy’s sparse array API, which will eventually involve removing the sparse matrix API and, later, `np.matrix`. More details can be found in the Developer Summit 1: Sparse blog post.
We made significant progress on several SPECs, which had been drafted during previous sprints.
SPEC 0—Minimum Supported Versions, an updated and expanded recommendation similar to NEP 29, was discussed and endorsed by several core projects.
SPEC 1—Lazy Loading of Submodules and Functions was discussed and endorsed by two core projects.
SPEC 2—API Dispatch was discussed (in a follow-up video meeting just after the summit) and is in the process of being marked as withdrawn or something similar.
SPEC 3—Accessibility was discussed and updated. We hope to see it endorsed by several core projects in the near future.
SPEC 4—Using and creating nightly wheels was rewritten, a helper GitHub action upload-nightly-action was created, and PRs to update the various projects to use the new nightly wheels location were made. The updates are now complete and the SPEC was endorsed by two core projects.
We anticipate several more core projects to endorse the existing SPECs over the coming months and we are now holding regular SPEC steering committee meetings to continue developing and expanding the SPECs.
We created a comprehensive community guide to empower projects in fostering their communities. This guide includes essential information on the role of community managers, along with practical strategies for community meetings, outreach, onboarding, and project management.
We created a development guide, a new project template, and a review tool for existing projects.
One of the fun things that happens at summits like these is the chance encounter of people from different projects. For example, a couple of attendees worked on creating a co-collaboration network across the broader scientific Python ecosystem. This gave us the opportunity to look at how contributors collaborate across projects. We could see how the bigger projects were all clustered together, as there are multiple contributors who share maintenance duties for multiple projects. We could also, for example, see how the Scikit-HEP cluster was a bit further away from the usual scientific Python cluster. An action item for us :) We need more collaboration!!
Several attendees worked on pytest plugins and Sphinx extensions:
pytest-regex was created to support selecting tests with regular expressions.
pytest-doctestplus was moved upstream into the Scientific Python organization. The summit provided new momentum to develop new features (e.g. producing updated docstrings), and to use it for the NumPy documentation testing.
sphinx-scientific-python, a new extension serving as a home for various features from the ecosystem; e.g., we agreed on bringing existing extensions from MNE tools into this extension.
pydata-sphinx-theme updates
The first release candidate of SciPy 1.11.0 was published on PyPI on May 31, 2023, five days after the conclusion of the summit. The summit facilitated high-bandwidth decision making on several proposed SciPy code changes by allowing the current SciPy release manager (Tyler Reddy, Los Alamos National Laboratory) to consult with other SciPy core developers in person. Specific code changes were discussed with the following SciPy maintainers: Stefan van der Walt (`scipy.ndimage`), CJ Carey (`scipy.sparse`), Matt Haberland (`scipy.stats`), and Pamphile Roy (`scipy.stats`). When SciPy releases are performed out of band from the summit, the release manager often has to delay incorporation of useful code changes to the next release six months later, due to lack of availability of the pertinent domain experts.
We factored out a general developer statistics package from our prototype developer statistics website.
(May 22-26, 2023, Seattle WA) – The first Scientific Python Developer Summit provided an opportunity for core developers from the scientific Python ecosystem to come together to:
Related notes/sites:
One of the focuses of the summit was Sparse Arrays, and specifically their implementation in SciPy. This post attempts to recap what happened with “sparse” at the summit and a glimpse of plans for our continuing work. The Sparse Array working group holds open follow-up meetings, currently scheduled every two weeks, to continue the momentum and move this project forward.
At the Summit, we focused on improving the newly added Sparse Array API in SciPy, that lets users manipulate sparse data with NumPy semantics (before, SciPy used NumPy’s 2D-only Matrix API, but that is slated for deprecation). Our goal at the summit was to give focused energy to the effort, bring new people on board, and connect downstream users with the development effort. We also worked to create a working group for this project that would last beyond the summit itself.
The specific PRs and Issues involved in scipy.sparse are detailed in the Summit 2023 scipy.sparse Report, with more detailed description appearing in the Summit Worklog.
Some big picture take-aways are:
- A format attribute describing which format of sparse storage is used, changes to the functions issparse/isspmatrix, and a shift in the class hierarchy to allow easy isinstance checking. The interface going forward includes:
  - issparse(A): True when A is a sparse array or matrix.
  - isinstance(A, sparray): True when A is a sparse array.
  - isspmatrix(A): True when A is a sparse matrix.
  To check the format of a sparse array or matrix, use A.format == "csr" or similar.
- Construction functions with an _array suffix, which construct sparse arrays. The old names will continue to create sparse matrices until post-deprecation removal.
Some specific changes made include:
- diags_array(A) (and planned for eye_array, random_array and others).
- A sparse.linalg.matrix_power function for positive integer matrix power of a sparse array.
- coo_array allowed exploration of possible n-d arrays, though that is not a short-term goal.
- __array_ufunc__ and other __array_*__ protocols for sparse arrays.

Our goal is to have a working set of sparse array construction functions and a 1d sparse array class (focusing on coo_array first) in plenty of time for intensive testing before SciPy v1.12. This will then allow us to focus on creating migration documents and tools, as well as helping downstream libraries make the shift to sparse arrays. We hope to enable the removal of deprecated sparse matrix interfaces in favor of the array interface. For this to happen we will need most downstream users to shift to the sparse array API. We intend to help them do that.
Our work continues with a community call every two weeks on Fridays. Near term work is to:
This is the second part of a blog series where I talk about my experience during my Outreachy internship at NetworkX. If you haven’t read the first part you can find it here.
As you advance through the contribution phase you may wonder what your internship is going to be like if you get selected. Here is my experience as a NetworkX intern, along with some tips that could help you through the internship.
I started my internship in December. I was almost done with my assignments at school and was heading into finals season. It’s a wild time to start an internship but the beginning is usually not intense. The first week is about meeting the NetworkX team and deciding what you want to do during your internship. As part of your internship, you’re encouraged to research and contribute in any way you want. That means that you don’t necessarily need to work on the proposed project. I started writing notebooks because I felt confident doing that but you can explore other tasks too. As part of writing notebooks, I spent a lot of time reading papers and doing research. That was fun and let me develop some interesting skills. Also, this gave me a better view of what I want to do in the future as a computer scientist. As my notebooks were about graph isomorphism I researched new isomorphism algorithms and evaluated the possibility of implementing them in NetworkX. While writing the notebooks I read the documentation a lot so I fixed and added some things there too. Definitely contributing in any way you can is the key. For me working at NetworkX was not about fulfilling specific tasks but getting a broad vision of the project and thinking about ways I can make it grow. This approach gave me a good insight into how projects like this are managed and maintained, which I think is the most important thing I learned during the internship.
When you start a new notebook do an initial draft with the general structure of your notebook. That will help you to aim your research and organize your ideas better.
Always do some research first even if you think you know all the material. There’s always some idea, intuition, or interesting application that you don’t know about.
Take your time to learn things that can be helpful for your internship. Outreachy internships aim to help you gain skills so you can continue your tech career. Sometimes it can feel like you spend more time learning than doing and that’s ok! This is above all a learning experience!
Out of ideas for notebooks? Reading what’s already on the nx-guides can be a source of inspiration. Also, you can look for cool graph real-world applications in books and on the internet.
The repository is an incredible source of information about the project. If you are struggling with something, you can look at all related issues and PRs. There, you will be able to find discussions and explanations that can give you a better sense of why things are a certain way.
Learn about the project structure. A Python package is not just a lot of Python code put together; there are many other packages used to make documentation and testing work. Learning how everything works underneath will usually make your work easier, and it is also a great skill to gain. For me, understanding how a project like this comes to life was extremely interesting because it was something I had never paid attention to before.
You will understand things as you go. So don’t overstress if you don’t understand everything. With time, some details will click. But it’s also important not to immediately give up when you don’t get something. The key is to keep your confidence even when you are feeling a bit lost.
Organize your work and learn how to work remotely. If this is your first time working remotely it’s important that you find your own way to organize your time. There are many strategies that can help you figure out how to organize your work throughout the day. Try different techniques until you find whatever suits you best. If you are a college student you may want to use the same system that works for you at school, but working is different, so you may need to explore other options. For me, it was useful to have two lists: a to-do list, because it was motivating to track my progress, and an ideas list with things I wanted to do, usually smaller contributions that I could pick up when I was tired of the bigger tasks. I also tried the Pomodoro technique, but for me it was more effective to work on a task until I finished it and then take a break if I wanted to.
As part of your Outreachy internship, you will need to write blogs, turn in feedback and attend informal chats. Be aware of that and organize all the deadlines so you and your mentor don’t miss any of them.
Make a cheat sheet with all the useful commands and links. That way you don’t have to go through the process of finding that information again every time you need it. If there is a series of commands that you use a lot try writing a bash script. Here is a repository with my cheat sheet: https://github.com/paulitapb/Outreachy2023
Overall my experience as a NetworkX intern was amazing! Not only did I gain many different skills but also now I am more confident in my abilities to work in tech. I discovered Open-Source communities and I realized I am able to contribute in valuable ways. Furthermore, I now have a better sense of what I want my future in tech to look like and what my options are.
Outreachy is a paid remote internship program for underrepresented groups in tech. All internships are in Open Source and Open Science. To be selected as an intern, first you need to:
Fill out an initial application: You’ll need to answer some questions about how you are affected by the systemic bias, and how being underrepresented in your local tech industry impacted your development. Maybe you don’t know how to answer some of these questions, especially if you are still not looking for a job, but it’s important to do some research first. If you can’t find any “official” information, tech communities often do surveys and publish the results. Reaching out to local tech communities that work with underrepresented groups is a great way to find mentors and like-minded people that can support you through your tech career. Take your time to reflect on these questions before writing your answers. Also if you are a college student you need to submit your school calendar. Read carefully all the time requirements and reach out to Outreachy coordinators if you think there are any details about your school calendar that you need to discuss.
Take part in the contribution phase: Once your initial application is approved you will be able to see all the projects. Finding the right project for you is important and also very challenging. You may feel tempted to contribute to multiple projects, but unless you have a lot of free time I don’t think that’s the best option. Smaller, more constant contributions are the way to go. The contribution phase is not about making huge contributions but rather an opportunity to interact with the community, learn about the project, and gain new skills. Finding the right project for you is key, and it depends a lot on how much time you are willing to put into it and on your current skills.
For more information about Outreachy go to: https://www.outreachy.org/
If this is your first time contributing to an open-source project you may feel overwhelmed. Understanding an almost 20-year-old project like NetworkX can feel like it’s going to take forever, but don’t worry: I have some tips that may come in handy during the contribution phase.
Learn about the project: Understanding the project is a process that may take some time. Don’t rush it! You don’t need to understand the entire codebase in a day. The most important things that you need to know only will take you a few hours to go through: Learn about the project mission and values, community rules, and contribution process. In NetworkX all you need to know is here: https://networkx.org/documentation/stable/developer/index.html
Start contributing right away: You don’t need to understand every part of the project to make valuable contributions. Start small and use that experience to level up your contributions. At the beginning of the contribution phase, some good first issues are added. Work on them first and then start opening your own issues (Don’t forget to link your PR with the issues so they can be automagically closed). Also, record your contributions on the Outreachy website as you submit them. I only recorded all contributions at the end and that took me a lot of time. If you struggle to find issues or ideas for contributions here are my contributions at NetworkX: https://github.com/networkx/networkx/pulls?q=is%3Apr+paulitapb
It’s not just about writing code. What’s great about big projects is that you can explore many different things. Making contributions to different parts of the project shows that you understand the project on a general level and can be a valuable member of the community.
Don’t be afraid of the community! As a beginner, you may worry about the technical side of the project but understanding the community review process is key. Usually, communities want to grow and that means teaching new contributors about the project. It’s fine if your contributions are not perfect or if you need to ask questions. That’s the beauty of Open-Source communities! Also, don’t be discouraged if a contribution is not merged into the project. Maybe that was already suggested, tested, or deprecated. Take that as a learning experience and even that can give you some ideas for future contributions.
I hope this information helps you to start your Open-Source journey! The NetworkX team is waiting for your great contributions!
If you are interested in my experience during the internship you can find the second part of this blog here.
The NumPy team is excited to announce the launch of the NumPy Fellowship Program and the appointment of Sayed Adel (@seiko2plus) as the first NumPy Developer in Residence. This is a significant milestone in the history of the project: for the first time, NumPy is in a position to use its project funds to pay for a full year of maintainer time. We believe that this will be an impactful program that will contribute to NumPy’s long-term sustainability as a community-driven open source project.
Sayed has been making major contributions to NumPy since the start of 2020, in particular around computational performance. He is the main author of the NumPy SIMD architecture (NEP 38, docs), generously shared his knowledge of SIMD instructions with the core developer team, and helped integrate the work of various volunteer and industry contributors in this area. As a result, we’ve been able to expand support to multiple CPU architectures, integrating contributions from IBM, Intel, Apple, and others, none of which would have been possible without Sayed. Furthermore, when NumPy tentatively started using C++ in 2021, Sayed was one of the proponents of the move and helped with its implementation.
The NumPy Steering Council sees Sayed’s appointment to this role as both recognition of his past outstanding contributions as well as an opportunity to continue improving NumPy’s computational performance. In the next 12 months, we’d like to see Sayed focus on the following:
“I’m both happy and nervous: this is a great opportunity, but also a great responsibility,” said Sayed in response to his appointment.
The funds for the NumPy Fellowship Program come from a partnership with Tidelift and from individual donations. We sincerely thank both Tidelift and everyone who donated to the project—without you, this program would not be possible! We also acknowledge the CPython Developer-in-Residence and the Django Fellowship programs, which served as inspiration for this program.
Sayed officially starts as the NumPy Developer in Residence today, 1 December 2022. Already, we are thinking about opportunities beyond this first year: we imagine “in residence” roles that focus on developing, improving, and maintaining other parts of the NumPy project (e.g., documentation, website, translations, contributor experience, etc.). We look forward to this exciting new chapter of the NumPy contributor community and will keep you posted on our progress.
We are delighted to announce a two-year grant from the Chan Zuckerberg Initiative (CZI) in support of the Scientific Python project. This grant will support work on common web themes, joint infrastructure and practices, accessibility, and interactivity of core library documentation. We are particularly excited that, through this work, we may expand global participation of scientific communities in using and contributing to Python tools. It is, to the best of our knowledge, the first time that a scientific open source community has received significant support for accessibility and internationalization efforts.
CZI continues to support many impactful and innovative projects in the scientific Python community through its Essential Open Source Software for Science (EOSS) program. Today, they announced the 5th funding cycle of that program. This grant to Scientific Python, while outside the EOSS program, complements it well. Among other things, the Scientific Python project aims to support, document, and make accessible common practices & infrastructure. Such infrastructure will benefit not only the projects at the core of the ecosystem, but also those well beyond it.
“We are thrilled to partner with the Scientific Python project, an effort to harmonize a critical set of open source research software projects widely used across all the areas of biomedical research that CZI supports. The distributed nature of the scientific open source ecosystem will greatly benefit from their efforts to standardize best practices and focus on ecosystem-level initiatives,” said Dario Taraborelli, Science Program Officer at the Chan Zuckerberg Initiative.
This grant will support core scientific Python projects by doing release management, writing documentation, building and supporting joint infrastructure, and by measuring and publishing metrics on community involvement and project health. In addition, here are some specific deliverables:
There are two web themes commonly deployed on community sites: the Scientific Python Hugo Theme—for project websites, and the pydata-sphinx-theme—for documentation. We will improve these themes, effectively upgrading several project websites simultaneously. By fostering theme adoption, we will help the ecosystem present a more unified front to users, while reducing the web maintenance burden on developers. Other theme work includes better responsive layouts (important for use on mobile and tablets), blogging facilities, increased usability, and accessibility compliance.
Better accessibility of online resources increases usability for everyone, while fostering community participation and inclusion. The Scientific Python Hugo Theme and pydata-sphinx-theme are natural conduits for introducing accessibility standards and best practices to the broader ecosystem. We will develop access-centered best practices and contribution guidelines, organize online workshops, and work with other maintainers to improve their projects’ documentation and homepage accessibility. A set of access-centered practices will be written up as a Scientific Python Ecosystem Coordination document (or SPEC, for short), to provide guidance to those projects we cannot support directly.
A key aim of this work is to have web and documentation themes, as well as core scientific Python project websites, meet the applicable Web Content Accessibility Guidelines.
Documentation is key to a project’s success, and good documentation is approachable to end users with a wide range of backgrounds and skills. While most scientific Python projects value documentation and work hard at it, there is still much room for improvement.
One such improvement is translation and localization. Development takes place in English, as reflected by project websites and documentation. While many contributors are comfortable with English as a first, second, or even third language, the language barrier especially excludes users who are very young, are new to the community, have learning disabilities, or are from the Global South—all potential future contributors and leaders in the scientific Python community! We will therefore translate key pages of core project websites, and provide translation infrastructure for the web themes.
A second area of improvement is interactivity. Interactive project documentation has the potential to engage less experienced users, making it easier to experiment with and teach ecosystem libraries. We will work on documentation interactivity by providing seamless, in-browser execution of code via JupyterLite, a WebAssembly Jupyter distribution.
The four PI’s for this grant are Stéfan van der Walt (UC Berkeley; NumPy, scikit-image, SciPy; Scientific Python Hugo Theme), Tania Allard (Quansight Labs; JupyterHub, NumFOCUS DISC, Jupyter accessibility), Jarrod Millman (Scientific Python; NetworkX; scikit-image; Scientific Python Hugo Theme, pydata-sphinx-theme), and Ralf Gommers (Quansight Labs; SciPy, NumPy, data-apis.org). Melissa Weber Mendonça (Quansight Labs; NumPy, SciPy) and Chris Holdgraf (2i2c; Project Jupyter, MyST, pydata-sphinx-theme) will participate as key personnel, providing expertise in documentation and Sphinx themes in particular. Jarrod and Stéfan are co-creators of the Scientific Python project, and everyone on the grant has been involved in the larger scientific Python ecosystem and community for many years.
Today was announcement day, but the real work starts in December. Some topics we’ll be able to dive straight into; others will require hiring—and we’re excited to involve new web designers, accessibility experts, and engineers in this journey. Stay tuned—there’s a lot more to come!
To connect with the team, and to follow job posts, please join us at https://discuss.scientific-python.org.
The Scientific Python project is an initiative to better coordinate and support the scientific Python ecosystem of libraries and to grow the surrounding community. It aims to improve communication between ecosystem projects, to better plan for their joint future, and to make that future a reality.
Initially, the Scientific Python developer community was small, so that it was easy to discuss important ecosystem-wide decisions at events like the annual SciPy conference. But with the rapid growth of the community, number of libraries, as well as geographical diversification, this was no longer possible. Scientific Python is a loose federation of somewhat independent community projects, and while this configuration is robust, it also tends to favor reinvention of the wheel and decisions that focus on project needs, instead of being strategically aligned with the entire ecosystem. Ultimately, the different projects depend on one another, so that it makes sense to have close coordination between them.
The SPECs, or Scientific Python Ecosystem Coordination documents, provide a mechanism through which the community can establish cross-project policies. They function similarly to PEPs, NEPs, SKIPs, or any of the other enhancement proposals—except that they are relevant to multiple projects in the ecosystem.
These documents will be recommendations written up by the community, and their authority will derive from endorsement by popular libraries. Some of them are already in progress and many are on the way!
SPECs are short and concise, and are endorsed by core projects in the ecosystem once they are adopted.
We provide common engineering infrastructure to help maintainers. Some tools we currently work on include a Hugo web theme for project websites, a self-hosted privacy-friendly web analytics platform, a shared discussion forum, the devpy developer CLI, this blog, and a project development statistics dashboard.
We organize virtual “domain summits” where developers can meet to discuss relevant cross-project topics. These will be recorded and shared on our YouTube channel. Thus far, we’ve organized four such events on: API dispatching, alt-text for improved accessibility, domain stacks, and sparse arrays.
We also organize an annual in-person developer summit: a week of intense collaboration, with work scheduled ahead of time, during which we address as many cross-project concerns as we can.
We work on documentation for new contributors and maintainers. Our YouTube channel hosts onboarding videos that show how to get started contributing to a scientific Python project, as well as developer interviews. Over the next year, we also plan to unify several disparate community resources into a maintainer guide.
We love to reach out to and connect with our growing community of users and developers! Platforms we are present on include Twitter, Facebook, Instagram, and TikTok.
The short answer: anyone who wants to be. The long answer: we are a community of volunteers from different scientific Python ecosystem packages. There are several teams working on the different aspects of the project, such as our community managers & leaders, the SPEC steering committee, and blog content reviewers and editors. The project is led by Jarrod Millman and Stéfan van der Walt, both long-term community members who care deeply about the success of the ecosystem and its developers.
Currently there are eight projects that endorse the SPECs: IPython, Matplotlib, NetworkX, NumPy, pandas, scikit-image, scikit-learn, and SciPy. However, contributors from many more projects participate on our discussion forum, write blogs, and contribute to the community in other ways. We welcome everyone to become part of the community and to contribute however they can!
For the past couple of months I have been a community manager for the project. This includes recording documentation videos for the website, recording developer interviews for our YouTube channel, presenting talks at conferences, hosting developer events, creating content for our Instagram, Facebook, TikTok, and Twitter channels, and many other things that I never thought I would do.
Why? Because I believe in this. Jarrod and Stéfan reached out to me last year, inviting me to be part of this amazing idea and I was honored and very grateful. I wasn’t sure that I could do it, but now I find myself here and I know that this is the right place for me. Not because I have a lot of experience in these things (I had actually never even used TikTok before joining the project), but because I care. I have learned the importance of building community and while the Scientific Python tools are amazing, what makes the difference is the community around them and I’m grateful to be able to help make this community great.
I have learned a lot from the Scientific Python ecosystem by being a community manager, I have met a lot of wonderful people and I have seen what people can do with the tools that the ecosystem offers. So, my take: The Scientific Python project is a great bet. Open source Scientific Python is about much more than coding, it is about collaborating, teaching, and communicating. So unifying the community and promoting the integration of the projects sounds like the perfect path to follow in order to get the most out of the ecosystem.
The final post discussing the VF2++ helpers can be found here. Now that we’ve figured out how to solve all the sub-problems that VF2++ consists of, we are ready to combine our implemented functionalities to create the final solver for the Graph Isomorphism problem.
We should quickly review the individual functionalities used in the VF2++ algorithm:
We are going to use all these functionalities to form our Isomorphism solver.
First of all, let’s describe the algorithm in simple terms, before presenting the pseudocode. The algorithm will look something like this:
The official code for the VF2++ is presented below.
# Check if there's a graph with no nodes in it
if G1.number_of_nodes() == 0 or G2.number_of_nodes() == 0:
return False
# Check that both graphs have the same number of nodes and degree sequence
if not nx.faster_could_be_isomorphic(G1, G2):
return False
# Initialize parameters (Ti/Ti_tilde, i=1,2) and cache necessary information about degree and labels
graph_params, state_params = _initialize_parameters(G1, G2, node_labels, default_label)
# Check if G1 and G2 have the same labels, and that number of nodes per label is equal between the two graphs
if not _precheck_label_properties(graph_params):
return False
# Calculate the optimal node ordering
node_order = _matching_order(graph_params)
# Initialize the stack to contain node-candidates pairs
stack = []
candidates = iter(_find_candidates(node_order[0], graph_params, state_params))
stack.append((node_order[0], candidates))
mapping = state_params.mapping
reverse_mapping = state_params.reverse_mapping
# Index of the node from the order, currently being examined
matching_node = 1
while stack:
current_node, candidate_nodes = stack[-1]
try:
candidate = next(candidate_nodes)
except StopIteration:
# If no remaining candidates, return to a previous state, and follow another branch
stack.pop()
matching_node -= 1
if stack:
# Pop the previously added u-v pair, and look for a different candidate _v for u
popped_node1, _ = stack[-1]
popped_node2 = mapping[popped_node1]
mapping.pop(popped_node1)
reverse_mapping.pop(popped_node2)
_restore_Tinout(popped_node1, popped_node2, graph_params, state_params)
continue
if _feasibility(current_node, candidate, graph_params, state_params):
# Terminate if mapping is extended to its full
if len(mapping) == G2.number_of_nodes() - 1:
cp_mapping = mapping.copy()
cp_mapping[current_node] = candidate
yield cp_mapping
continue
# Feasibility rules pass, so extend the mapping and update the parameters
mapping[current_node] = candidate
reverse_mapping[candidate] = current_node
_update_Tinout(current_node, candidate, graph_params, state_params)
# Append the next node and its candidates to the stack
candidates = iter(
_find_candidates(node_order[matching_node], graph_params, state_params)
)
stack.append((node_order[matching_node], candidates))
matching_node += 1
This section is dedicated to the performance comparison between VF2 and VF2++. The comparison was performed on random graphs without labels, with the number of nodes ranging from 100 to 2000. The results are depicted in the two following diagrams.
We notice that the maximum speedup achieved is 14x, and it continues to increase as the number of nodes increases. It is also notable that increasing the number of nodes doesn’t seem to affect the performance of VF2++ to a significant extent, in contrast to its drastic impact on the performance of VF2. Our results are almost identical to those presented in the original VF2++ paper, verifying the theoretical analysis and premises of the literature.
The achieved boost is due to some key improvements and optimizations. For example, instead of doing:
res = []
for node in G2.nodes():
if G1.degree[u] == G2.degree[node]:
res.append(node)
# do stuff with res ...
to get the nodes with the same degree as u (which happens many times in the implementation), we just do:
res = G2_nodes_of_degree[G1.degree[u]]
# do stuff with res ...
where G2_nodes_of_degree stores the set of nodes with a given degree. The same is done with node labels.
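As a toy sketch of this caching idea (made-up degree data, not the actual NetworkX code), the lookup table is built once in a single pass:

```python
from collections import defaultdict

# Toy degree data standing in for G2; in the real implementation the
# degrees come from the graph itself.
G2_degree = {"a": 2, "b": 2, "c": 3, "d": 1}

# One O(n) pass builds the lookup table...
G2_nodes_of_degree = defaultdict(set)
for node, degree in G2_degree.items():
    G2_nodes_of_degree[degree].add(node)

# ...after which every query is a dictionary lookup instead of a scan.
res = G2_nodes_of_degree[2]
# res == {"a", "b"}
```

The same one-pass bucketing is applied to node labels (G2_nodes_of_label) in the implementation.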
candidates = set(G2.nodes())
for candidate in candidates:
if feasibility(u, candidate):
do_stuff()
we take a huge set of candidates, which results in poor performance: feasibility is called a maximal number of times, performing the checks over a very large set. Now compare that to the following alternative:
candidates = G2_nodes_of_degree[G1.degree[u]].intersection(
    G2_nodes_of_label[G1_labels[u]]
)
for candidate in candidates:
if feasibility(u, candidate):
do_stuff()
Immediately we have drastically reduced the number of checks performed and calls to the function, as now we only apply them to nodes of the same degree and label as $u$. This is a simplification for demonstration purposes. In the actual implementation there are more checks and extra shrinking of the candidate set.
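On a toy example (hypothetical buckets, not taken from a real graph), the pruning effect of the intersection is easy to see:

```python
# Precomputed buckets for a toy G2.
G2_nodes_of_degree = {2: {"x", "y", "z"}, 3: {"w"}}
G2_nodes_of_label = {"red": {"x", "w"}, "blue": {"y", "z"}}

# Suppose u has degree 2 and label "red" in G1: only nodes matching
# *both* properties survive as candidates.
candidates = G2_nodes_of_degree[2] & G2_nodes_of_label["red"]
# candidates == {"x"}  (one feasibility call instead of four)
```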
Let’s demonstrate our VF2++ solver on a real graph. We are going to use the graph from the Graph Isomorphism Wikipedia page.
Let’s start by constructing the graphs from the image above. We’ll call the graph on the left G and the graph on the right H:
import networkx as nx
G = nx.Graph(
[
("a", "g"),
("a", "h"),
("a", "i"),
("g", "b"),
("g", "c"),
("b", "h"),
("b", "j"),
("h", "d"),
("c", "i"),
("c", "j"),
("i", "d"),
("d", "j"),
]
)
H = nx.Graph(
[
(1, 2),
(1, 5),
(1, 4),
(2, 6),
(2, 3),
(3, 7),
(3, 4),
(4, 8),
(5, 6),
(5, 8),
(6, 7),
(7, 8),
]
)
res = nx.vf2pp_is_isomorphic(G, H, node_label=None)
# res: True
res = nx.vf2pp_isomorphism(G, H, node_label=None)
# res: {1: "a", 2: "h", 3: "d", 4: "i", 5: "g", 6: "b", 7: "j", 8: "c"}
res = list(nx.vf2pp_all_isomorphisms(G, H, node_label=None))
# res: all isomorphic mappings (there might be more than one). This function is a generator.
# Assign some label to each node
G_node_attributes = {
"a": "blue",
"g": "green",
"b": "pink",
"h": "red",
"c": "yellow",
"i": "orange",
"d": "cyan",
"j": "purple",
}
nx.set_node_attributes(G, G_node_attributes, name="color")
H_node_attributes = {
1: "blue",
2: "red",
3: "cyan",
4: "orange",
5: "green",
6: "pink",
7: "purple",
8: "yellow",
}
nx.set_node_attributes(H, H_node_attributes, name="color")
res = nx.vf2pp_is_isomorphic(G, H, node_label="color")
# res: True
res = nx.vf2pp_isomorphism(G, H, node_label="color")
# res: {1: "a", 2: "h", 3: "d", 4: "i", 5: "g", 6: "b", 7: "j", 8: "c"}
res = list(nx.vf2pp_all_isomorphisms(G, H, node_label="color"))
# res: [{1: "a", 2: "h", 3: "d", 4: "i", 5: "g", 6: "b", 7: "j", 8: "c"}]
Notice how in the first case, our solver may return a different mapping every time, since the absence of labels results in nodes that can map to more than one other node. For example, node 1 can map to both a and h, since the graph is symmetric.
In the second case though, the existence of a single, unique label per node means that there is only one match for each node, so the mapping returned is deterministic. This is easily observed from the output of list(nx.vf2pp_all_isomorphisms), which, in the first case, returns all possible mappings, while in the latter it returns a single, unique isomorphic mapping.
The previous post can be found here; be sure to check it out so you can follow the process step by step. Since then, two more very significant features of the algorithm have been implemented and tested: node pair candidate selection and feasibility checks.
As previously described, in the ISO problem we are basically trying to create a mapping such that every node from the first graph is matched to a node from the second graph. This search for “feasible pairs” can be visualized as a tree, where each node is a candidate pair to examine. This becomes much clearer if we look at the figure below.
In order to check whether the graphs $G_1$ and $G_2$ are isomorphic, we examine every candidate pair of nodes; if the pair is feasible, we extend the mapping and go deeper into the tree of pairs. If it’s not feasible, we climb back up and follow a different branch, until every node in $G_1$ is mapped to a node in $G_2$. In our example, we start by examining node 0 from $G_1$ together with node 0 of $G_2$. After some checks (details below), we decide that nodes 0 and 0 match, so we go deeper to map the remaining nodes. The next pair is 1-3, which fails the feasibility check, so we have to examine a different branch as shown. The new branch is 1-2, which is feasible, so we continue with the same logic until all the nodes are mapped.
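The tree search just described can be sketched as a short recursive routine. This is an illustrative skeleton on plain adjacency dicts, not the actual NetworkX implementation; the `candidates` and `feasible` helpers here are deliberately naive stand-ins for the routines discussed in the following sections.

```python
def candidates(u, mapping, G1, G2):
    # naive candidate set: unused nodes of G2 with the same degree as u
    used = set(mapping.values())
    return [v for v in G2 if v not in used and len(G2[v]) == len(G1[u])]

def feasible(u, v, mapping, G1, G2):
    # consistency only: mapped neighbors of u must map to neighbors of v
    return all(mapping[n] in G2[v] for n in G1[u] if n in mapping)

def search(mapping, order, G1, G2):
    if len(mapping) == len(G1):          # every node of G1 is mapped
        return dict(mapping)
    u = order[len(mapping)]              # next node of G1 to examine
    for v in candidates(u, mapping, G1, G2):
        if feasible(u, v, mapping, G1, G2):
            mapping[u] = v               # extend the mapping, go deeper
            result = search(mapping, order, G1, G2)
            if result is not None:
                return result
            del mapping[u]               # infeasible branch: backtrack
    return None

# Two isomorphic triangles, as plain adjacency dicts
G1 = {"a": {"b", "c"}, "b": {"a", "c"}, "c": {"a", "b"}}
G2 = {1: {2, 3}, 2: {1, 3}, 3: {1, 2}}
iso = search({}, list(G1), G1, G2)
```

On failure the search simply falls through to the next candidate, which is exactly the “climb up and follow a different branch” behavior in the figure.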
Although in our example we use a random candidate pair of nodes, in the actual implementation we are able to target specific pairs that are more likely to match, thereby boosting the performance of the algorithm. The idea is that, at every step of the algorithm, given a candidate $u\in V_1$, we compute its candidates $v\in V_2$, where $V_1$ and $V_2$ are the node sets of $G_1$ and $G_2$ respectively. Now this is a puzzle that does not require a lot of specific knowledge about graphs or the algorithm itself. Bear with me, and you will work it out yourself. First, let $M$ be the mapping so far, which includes all the “covered” nodes up to this point. There are three different types of node $u$ that we might encounter.
Node $u$ has no neighbors (the degree of $u$ equals zero). It would be redundant to test, as candidates for $u$, nodes from $G_2$ that have any neighbors. So we eliminate most of the possible candidates and keep only those with the same degree as $u$ (in this case, zero). Pretty easy, right?
Node $u$ has neighbors, but none of them belong to the mapping. This situation is illustrated in the following figure.
The grey lines indicate that the nodes of $G_1$ (left 1, 2) are mapped to the nodes of $G_2$ (right 1, 2); they constitute the mapping. Again, given $u$, we observe that candidates $v$ for $u$ should also have no neighbors in the mapping, and should have the same degree as $u$ (as in the figure). Notice that if we add a neighbor to $v$, or place one of its neighbors inside the mapping, there is no point examining the pair $u$-$v$ for matching.
Node $u$ has neighbors and some of them belong to the mapping. This scenario is also depicted in the below figure.
In this case, to obtain the candidates for $u$, we must look into the neighborhoods of those nodes of $G_2$ that the covered neighbors of $u$ map to. In our example, $u$ has one covered neighbor (1), and node 1 of $G_1$ maps to node 1 of $G_2$, which has $v$ as a neighbor. In addition, for $v$ to be considered a candidate, it must have the same degree as $u$. Notice that any node outside the neighborhood of 1 (in $G_2$) cannot be matched to $u$ without breaking the isomorphism.
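The three cases can be combined into a single candidate routine. The following is a hedged sketch on plain adjacency dicts, not the actual implementation (which, as noted above, also filters by label and shrinks the candidate set further):

```python
def candidate_set(u, G1, G2, mapping, reverse_mapping):
    degree = len(G1[u])
    covered_nbrs = [n for n in G1[u] if n in mapping]
    if not covered_nbrs:
        # Cases 1 & 2: u has no covered neighbors (possibly no neighbors
        # at all), so candidates are uncovered nodes of G2 with the same
        # degree and no covered neighbors of their own.
        return {
            v
            for v in G2
            if v not in reverse_mapping
            and len(G2[v]) == degree
            and not any(n in reverse_mapping for n in G2[v])
        }
    # Case 3: candidates must be common neighbors of the images of u's
    # covered neighbors, again with matching degree.
    common = set.intersection(*(set(G2[mapping[n]]) for n in covered_nbrs))
    return {v for v in common if v not in reverse_mapping and len(G2[v]) == degree}

# Path graphs a-b-c and 1-2-3, with "a" already mapped to 1
G1 = {"a": {"b"}, "b": {"a", "c"}, "c": {"b"}}
G2 = {1: {2}, 2: {1, 3}, 3: {2}}
cands = candidate_set("b", G1, G2, {"a": 1}, {1: "a"})
```

In the example, node b has the covered neighbor a, which maps to 1, so b’s only candidate is 1’s neighbor 2.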
Let’s assume that given a node $u$, we obtained its candidate $v$ following the process described in the previous section. At this point, the Feasibility Rules are going to determine whether the mapping should be extended by the pair $u-v$ or if we should try another candidate. The feasibility of a pair $u-v$ is examined by consistency and cutting checks.
At first, I am going to present the mathematical expression of the consistency check. It may seem complicated, but it will be made simple with a visual illustration. Using the notation $nbh_i(u)$ for the neighborhood of $u$ in graph $G_i$, the consistency rule is:
$$\forall\tilde{v}\in nbh_2(v)\cap M: (u, M^{-1}(\tilde{v}))\in E_1 \quad\wedge\quad \forall\tilde{u}\in nbh_1(u)\cap M: (v, M(\tilde{u}))\in E_2$$
We are going to use the following simple figure to demystify the above equation.
The mapping is depicted as grey lines between the nodes that are already mapped, meaning that 1 maps to A and 2 to B. What is implied by the equation is that, for two nodes $u$ and $v$ to pass the consistency check, the neighbors of $u$ that belong in the mapping, should map to neighbors of $v$ (and backwards). This could be checked by code as simple as:
for neighbor in G1[u]:
    if neighbor in mapping:
        if mapping[neighbor] not in G2[v]:
            return False
        elif G1.number_of_edges(u, neighbor) != G2.number_of_edges(
            v, mapping[neighbor]
        ):
            return False
where the final two lines also check that the number of edges between node $u$ and its neighbor $\tilde{u}$ is the same as the number of edges between $v$ and the node that $\tilde{u}$ maps to. At a very high level, we could describe this as a 1-look-ahead check.
We have previously discussed what $T_i$ and $\tilde{T_i}$ represent (see previous post). These sets are used in the cutting checks as follows: the number of neighbors of $u$ that belong to $T_1$, should be equal to the number of neighbors of $v$ that belong to $T_2$. Take a moment to observe the below figure.
Once again, node 1 maps to A and 2 to B. The red nodes (4, 5, 6) form $T_1$ and the yellow ones (C, D, E) form $T_2$. Notice that for $u$-$v$ to be feasible, $u$ must have the same number of neighbors inside $T_1$ as $v$ has in $T_2$. In any other case, the two graphs are not isomorphic, which can be verified visually. In this example, both nodes have two of their neighbors (4, 6 and C, E) in $T_1$ and $T_2$ respectively. Careful! If we delete the $V$-$E$ edge and connect $V$ to $D$, the cutting condition is still satisfied; however, the feasibility will fail, due to the consistency checks of the previous section. A simple piece of code applying the cutting check would be:
if len(T1.intersection(G1[u])) != len(T2.intersection(G2[v])) or len(
    T1out.intersection(G1[u])
) != len(T2out.intersection(G2[v])):
    return False
where T1out and T2out correspond to $\tilde{T_1}$ and $\tilde{T_2}$ respectively. And yes, we have to check those as well; we skipped them in the explanation above for simplicity.
At this point, we have successfully implemented and tested all the major components of the VF2++ algorithm. This means that in the next post, hopefully, we are going to discuss our first full, functional implementation of VF2++.
This post includes all the major updates since the last post about VF2++. Each section is dedicated to a different sub-problem and presents the progress on it so far. General progress, milestones and related issues can be found here.
The node ordering is one major modification that VF2++ proposes. Basically, the nodes are examined in an order that makes the matching faster, by first examining nodes that are more likely to match. This part of the algorithm has been implemented; however, there is an issue: the existence of detached nodes (nodes not connected to the rest of the graph) causes the code to crash. Fixing this bug will be a top priority in the next steps. The ordering implementation is described by the following pseudocode.
Matching Order
- Set $M = \varnothing$.
- Set $\bar{V_1}$: nodes not yet in the order
- while $\bar{V_1}$ not empty do
- $rareNodes=[$nodes from $\bar{V_1}$ with the rarest labels$]$
- $maxNode=argmax_{degree}(rareNodes)$
- $T=$ BFS tree with $maxNode$ as root
- for every level $d$ in $T$ do
- $V_d=[$nodes of the $d^{th}$ level$]$
- $\bar{V_1} = \bar{V_1} \setminus V_d$
- $ProcessLevel(V_d)$
- Output $M$: the matching order of the nodes.
Process Level
- while $V_d$ not empty do
- $S=[$nodes from $V_d$ with the most neighbors in $M]$
- $maxNodes=argmax_{degree}(S)$
- $m=[$node from $maxNodes$ with the rarest label$]$
- $V_d = V_d \setminus \{m\}$
- Append $m$ to $M$
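The two pseudocode blocks above can be rendered as a much-simplified Python sketch. This is a hypothetical illustration, not the actual implementation: labels are ignored (every node is treated as equally “rare”), so only degree and connectivity drive the order, and $ProcessLevel$ is folded into the main loop as a sort.

```python
def matching_order(G1):
    order = []
    remaining = set(G1)
    while remaining:                      # loop handles disconnected graphs too
        root = max(remaining, key=lambda n: len(G1[n]))  # argmax by degree
        seen = {root}
        level = [root]
        while level:
            # within a level, prefer nodes with many already-ordered
            # neighbors, then higher degree (rarest-label tie-break omitted)
            for node in sorted(
                level,
                key=lambda n: (sum(m in order for m in G1[n]), len(G1[n])),
                reverse=True,
            ):
                order.append(node)
            remaining -= set(level)
            nxt = {m for n in level for m in G1[n]} - seen  # next BFS level
            seen |= nxt
            level = list(nxt)
    return order

# A star with center "c" plus a detached node "x"
G = {"c": {"a", "b", "d"}, "a": {"c"}, "b": {"c"}, "d": {"c"}, "x": set()}
order = matching_order(G)
```

Note that the outer `while remaining` loop restarts a BFS from a new root for each component, so detached nodes like `x` above are ordered last instead of crashing the procedure.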
According to the VF2++ paper notation:
$$T_1=\{u\in V_1 \setminus m: \exists \tilde{u} \in m: (u,\tilde{u})\in E_1\}$$
where $V_1$ and $E_1$ contain all the nodes and edges of the first graph respectively, and $m$ is a dictionary, mapping every node of the first graph to a node of the second graph. Now if we interpret the above equation, we conclude that $T_1$ contains uncovered neighbors of covered nodes. In simple terms, it includes all the nodes that do not belong in the mapping $m$ yet, but are neighbors of nodes that are in the mapping. In addition,
$$\tilde{T_1}=V_1 \setminus m \setminus T_1$$
The following figure is meant to provide some visual explanation of what exactly $T_i$ is.
The blue nodes 1,2,3 are nodes from graph G1 and the green nodes A,B,C belong to the graph G2. The grey lines connecting those two indicate that in this current state, node 1 is mapped to node A, node 2 is mapped to node B, etc. The yellow edges are just the neighbors of the covered (mapped) nodes. Here, $T_1$ contains the red nodes (4,5,6) which are neighbors of the covered nodes 1,2,3, and $T_2$ contains the grey ones (D,E,F). None of the nodes depicted would be included in $\tilde{T_1}$ or $\tilde{T_2}$. The latter sets would contain all the remaining nodes from the two graphs.
Regarding the computation of these sets, it’s not practical to use the brute force method and iterate over all nodes in every step of the algorithm to find the desired nodes and compute $T_i$ and $\tilde{T_i}$. We use the following observations to implement an incremental computation of $T_i$ and $\tilde{T_i}$ and make VF2++ more efficient.
We can conclude that in every step, $T_i$ and $\tilde{T_i}$ can be incrementally updated. This method avoids a ton of redundant operations and results in significant performance improvement.
The above graph shows the difference in performance between the exhaustive brute-force computation and the incremental updating of $T_i$ and $\tilde{T_i}$. The graph used to obtain these measurements was a $G(n, p)$ random graph with edge probability $0.7$. The execution time of the brute-force method clearly increases much more rapidly with the number of nodes/edges than that of the incremental update method, as expected. The brute-force method looks like this:
def compute_Ti(G1, G2, mapping, reverse_mapping):
    T1 = {nbr for node in mapping for nbr in G1[node] if nbr not in mapping}
    T2 = {
        nbr
        for node in reverse_mapping
        for nbr in G2[node]
        if nbr not in reverse_mapping
    }
    T1_out = {n1 for n1 in G1.nodes() if n1 not in mapping and n1 not in T1}
    T2_out = {n2 for n2 in G2.nodes() if n2 not in reverse_mapping and n2 not in T2}
    return T1, T2, T1_out, T2_out
If we assume that G1 and G2 have the same number of nodes (N), the average number of nodes in the mapping is $N_m$, and the average node degree of the graphs is $D$, then the time complexity of this function is:
$$O(2N_mD + 2N) = O(N_mD + N)$$
in which we have excluded the lookup times in $T_i$, $mapping$ and $reverse\_mapping$ as they are all $O(1)$. Our incremental method works like this:
def update_Tinout(
    G1, G2, T1, T2, T1_out, T2_out, new_node1, new_node2, mapping, reverse_mapping
):
    # This function should be called right after feasibility is established
    # and new_node1 is mapped to new_node2.
    uncovered_neighbors_G1 = {nbr for nbr in G1[new_node1] if nbr not in mapping}
    uncovered_neighbors_G2 = {
        nbr for nbr in G2[new_node2] if nbr not in reverse_mapping
    }
    # Add the uncovered neighbors of new_node1 and new_node2 to T1 and T2
    T1.discard(new_node1)
    T2.discard(new_node2)
    T1 = T1.union(uncovered_neighbors_G1)
    T2 = T2.union(uncovered_neighbors_G2)
    # Remove the newly mapped and newly frontier nodes from T1_out and T2_out
    T1_out.discard(new_node1)
    T2_out.discard(new_node2)
    T1_out = T1_out - uncovered_neighbors_G1
    T2_out = T2_out - uncovered_neighbors_G2
    return T1, T2, T1_out, T2_out
which based on the previous notation, is:
$$O(2D + 2(D + M_{T_1}) + 2D) = O(D + M_{T_1})$$
where $M_{T_1}$ is the expected (average) number of elements in $T_1$.
Certainly, the complexity is much better in this case, as $D$ and $M_{T_1}$ are significantly smaller than $N_mD$ and $N$.
In this post we investigated how node ordering works at a high level, and also how we are able to calculate some important parameters so that the space and time complexity are reduced. The next post will continue with examining two more significant components of the VF2++ algorithm: the candidate node pair selection and the cutting/consistency rules that decide when the mapping should or shouldn’t be extended. Stay tuned!
I got accepted as a GSoC contributor, and I am so excited to spend the summer working on such an incredibly interesting project. The mentors are very welcoming, communicative, fun to be around, and I really look forward to collaborating with them. My application for GSoC 2022 can be found here.
My name is Konstantinos Petridis, and I am an Electrical Engineering student at the Aristotle University of Thessaloniki. I am currently in my 5th year of studies, majoring in Electronics & Computer Science. Although a wide range of scientific fields fascinates me, I have a strong passion for Computer Science, Physics and Space. I love to study and learn new things, and I don’t hesitate to express my curiosity by asking a bunch of questions, to the point of being annoying. You can find me on GitHub @kpetridis24.
The project I’ll be working on is the implementation of VF2++, a state-of-the-art algorithm for the graph isomorphism problem, which lies in the complexity class NP. The algorithm works like a more complex form of DFS, performed on the possible solutions rather than on the graph nodes. To verify or reject the isomorphism between two graphs, we examine every possible candidate pair of nodes (one from each graph) and check whether going deeper into the DFS tree is feasible using specific rules. If feasibility is established, the DFS tree is expanded and deeper pairs are investigated. When a pair is not feasible, we go back up the tree and follow a different branch, just like in a regular DFS. More details about the algorithm can be found here.
The major reasons I chose this project stem from both my love of graph theory and the fascinating nature of the project itself. The algorithm is so recent that NetworkX will possibly hold one of the first implementations of it, which may become a reference that helps other organisations further develop and optimize future implementations. As for my personal gain, I will become more familiar with open source communities and their philosophy, collaborate with highly skilled individuals, and gain significant experience in researching, working in a team, getting feedback and help when needed, and contributing to an actual scientific library.
I was selected as an intern to work on the SciPy build system. In this blog post, I will describe my journey through this 10-month-long internship at SciPy. I worked on a variety of topics, from migrating the SciPy build system to Meson, to cleaning up the public API namespaces, to adding uarray support to SciPy submodules.
The main reasons for switching to Meson include (in addition to distutils being deprecated):
For more details on the initial proposal to switch to Meson, see scipy-13615
I was initially selected to work on migrating the SciPy build system to Meson. I started by adding Meson build support for scipy.misc and scipy.signal. While working on this, we came across many build warnings which we wanted to fix, since they unnecessarily inflated the build log and might point to hidden bugs. I fixed these warnings, the majority of which came from deprecated NumPy C API calls.
Tests can now be run via runtests.py, but using Meson for building SciPy. Meson build support, including all the above work, was merged into SciPy’s main branch around Christmas 2021. Meson will become the default build system in the upcoming 1.9.0 release.
“A basic API design principle is: a public object should only be available from one namespace. Having any function in two or more places is just extra technical debt, and with things like dispatching on an API or another library implementing a mirror API, the cost goes up.”
>>> from scipy import ndimage
>>> ndimage.filters.gaussian_filter is ndimage.gaussian_filter # :(
True
The API reference docs of SciPy define the public API. However, SciPy still had some submodules that were accidentally somewhat public by missing an underscore at the start of their name.
I worked on cleaning up the public namespaces for about two months, carefully adding underscores to the .py files that were not meant to be public and adding deprecation warnings for anyone who tries to access them.
>>> from scipy import ndimage
>>> ndimage.filters.gaussian_filter is ndimage.gaussian_filter
<stdin>:1: DeprecationWarning: Please use `gaussian_filter` from the `scipy.ndimage` namespace, the `scipy.ndimage.filters` namespace is deprecated.
True
“SciPy adopted uarray to support a multi-dispatch mechanism with the goal being: allow writing backends for public APIs that execute in parallel, distributed or on GPU.”
For about the last four months, I worked on adding uarray support to SciPy submodules. I do recommend reading this blog post by Anirudh Dagar covering the motivation and actual usage of uarray. I picked up the following submodules for adding uarray compatibility:
At the same time, in order to show a working prototype, I also added uarray backends in CuPy for the following submodules:
The pull requests contain links to Colab notebooks which show these features in action.
import scipy
import cupy as cp
import numpy as np
from scipy.linalg import inv, set_backend
import cupyx.scipy.linalg as _cupy_backend

x_cu, x_nu = cp.array([[1.0, 2.0], [3.0, 4.0]]), np.array([[1.0, 2.0], [3.0, 4.0]])
y_scipy = inv(x_nu)
with set_backend(_cupy_backend):
    y_cupy = inv(x_cu)
The pull requests for the meson-python backend and uarray support are still under heavy discussion, and the main aim will be to get them merged as soon as possible once we have reached a concrete decision.

I am very grateful to Ralf Gommers for providing me with this opportunity and believing in me. His guidance, support and patience played a major role during the entire course of the internship. I am also thankful to the whole SciPy community for helping me with the PR reviews and providing essential feedback. Also, huge thanks to Gagandeep Singh for always being a part of this wonderful journey.
In a nutshell, I will remember this experience as: Ralf Gommers has boosted my career by millions!
The Scientific Python blog has just gotten a little more accessible! If you didn’t catch our invite on Twitter or run into the problem firsthand, there’s a good chance you might not have noticed the new descriptions for a number of blog post images.
Since it’s not a flashy improvement, we wanted to make a point to highlight last week’s community effort to make a more accessible blog (and internet as a whole).
In the spirit of Scientific Python’s mission to build community-developed and inclusive spaces, the project had its first image description workshop on May 13, 2022. We gathered to learn about and practice writing about images from the ground up. Image descriptions, often called alt text, are one of many accessibility considerations that people making content are responsible for.
Missing alt text is among the most common culprits of inaccessible content on the internet.
Fortunately, missing alt text has a clear fix, even for those new to accessibility: describe the image based on its context.
During this event, thirteen people wrote 23 image descriptions to improve 11 blog posts in one hour (plus the time for post-event feedback 😉). Wow! The images covered range from illustrative additions, to creative tutorials, to charts that carry the message of the blog post, so they provide a great set of examples of how to approach writing alt text in different situations.
That’s not even counting the many questions, discussions, and epiphanies that happened during our work time. Learning can’t be captured as easily by a number, but a taste of our less quantifiable wins can be found in the workshop recording.
If you want to further explore the experience and relive the joy, you can find resources on the event agenda, discussions on the working pull request, and the final steps needed to make change on the contributing pull request.
But wait, there’s more! While the community made great progress on improving the blog, the work to add or improve alt text on Scientific Python’s blog, website, and documentation is ongoing. You can continue these efforts by
This image description event has been run across multiple open-source projects in the scientific computing ecosystem. Reach out via issue if you are interested in running a similar event on a project you are part of. ❤️
Our first Contributor Spotlight interview is with Mukulika Pahari, our “go-to” person for NumPy documentation. Mukulika is a Computer Science student at Mumbai University. Her passions outside of computing involve things with paper, including reading books (fiction!), folding origami, and journaling. During our interview she discussed why she joined NumPy, what keeps her motivated, and how likely she would recommend becoming a NumPy contributor.
Hi, I am Mukulika. I live in Mumbai, India, and I’m completing my Computer Science degree at Mumbai University. I joined NumPy last summer during Google Season of Docs. The idea behind this initiative is to raise awareness of open source, the role of documentation, and the importance of technical writing. It also gives technical writers an opportunity to gain experience working on open source projects.
Apart from that, I like to read fiction – literally everything that I can put my hands on – and I find it relaxing to learn origami from YouTube tutorials.
I write technical documentation for NumPy, and I help new contributors with their questions.
The best part for me, honestly, is the people. It is inspiring to meet people from diverse backgrounds all over the world and do something together. However, I do find it quite scary to put your code out there for “the whole world to see and evaluate.” It can challenge my confidence. But meeting all the contributors, seeing their work, and getting their valuable feedback is absolutely worth it.
Since I already used NumPy in my data analysis courses in school, and now I am using it at my internship, I thought that I could also contribute to it. It is always more fun to do side projects in a group. Once you get to know the people in the NumPy community, you want to stay. They are really open and supportive!
Well, I do not really give out books to people – being a broke college student is quite a barrier. But I think that everyone should read “The Hitchhiker’s Guide to the Galaxy” by Douglas Adams. It is absolutely hilarious! It is both entertaining and spiked with wisdom.
I recently bought a nice journal and started to write in it. I find it very cleansing to put thoughts on paper and give them structure. I appreciate pretty paper products–this one has pastel pages.
I can’t think of a specific situation, but, in general, all my experiences so far seem to follow a general theme: it is absolutely okay not to be great at everything. You fail, and then you learn for the future.
My definition of success is being happy without causing harm to anyone.
Since I am at the beginning of my career, I can’t say much. But I think it is nice to listen to everyone and get feedback, with the mindset that you do not necessarily have to act on their advice. Having multiple perspectives is good.
I’d say a solid nine! It is overall a great experience.
Yes. What I like the most about the NumPy community is that it does not require huge commitments time-wise. Every little thing is appreciated, so that is certainly motivating.
At the end of this article, my goal is to convince you that if you need to use random numbers, you should consider using scipy.stats.qmc instead of np.random.
In the following, we assume that SciPy, NumPy and Matplotlib are installed and imported:
import numpy as np
from scipy.stats import qmc
import matplotlib.pyplot as plt
Note that no seeding is used in these examples. This will be the topic of another article: seeding should only be used for testing purposes.
So what are Monte Carlo (MC) and Quasi-Monte Carlo (QMC)?
MC methods are a broad class of computational algorithms that rely on repeated random sampling to obtain numerical results. The underlying concept is to use randomness to solve problems that might be deterministic in principle. They are often used in physical and mathematical problems and are most useful when it is difficult or impossible to use other approaches. MC methods are mainly used in three classes of problem: optimization, numerical integration, and generating draws from a probability distribution.
Put simply, this is how you would usually generate a sample of points using MC:
rng = np.random.default_rng()
sample = rng.random(size=(256, 2))
In this case, sample
is a 2-dimensional array with 256 points which can be
visualized using a 2D scatter plot.
In the plot above, points are generated randomly without any knowledge of previously drawn points. It is clear that some regions of the space are left unexplored while other regions have clusters. In an optimization problem, this could mean that you would need to generate more samples to find the optimum. In a regression problem, you could also overfit a model due to a cluster of points.
Generating random numbers is a more complex problem than it sounds. Simple MC methods are designed to sample points to be independent and identically distributed (IID).
One could think that the solution is just to use a grid! But look at what happens if we use a distance of 0.1 between points in the unit hypercube (with all bounds ranging from 0 to 1).
disc = 10
x1 = np.linspace(0, 1, disc)
x2 = np.linspace(0, 1, disc)
x3 = np.linspace(0, 1, disc)
x1, x2, x3 = np.meshgrid(x1, x2, x3)
The number of points required to fill the unit interval would be 10. In a 2-dimensional hypercube the same spacing would require 100 points, and in 3 dimensions 1,000 points. As the number of dimensions grows, the number of samples required to fill the space rises exponentially. This exponential growth is called the curse of dimensionality.
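The growth can be checked with one line of arithmetic: a spacing of 0.1 means 10 points per axis, so $10^d$ points in $d$ dimensions.

```python
# Points needed for a 0.1 grid spacing: 10 per axis, so 10**d in total
points_needed = {d: 10 ** d for d in (1, 2, 3, 6)}
# 1,000 points in 3 dimensions, but already 1,000,000 in 6
```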
To mitigate the curse of dimensionality, you could decide to randomly remove points from the grid, or to sample randomly in n dimensions. In both cases, this will lead to empty regions and clusters of points elsewhere.
Quasi-Monte Carlo (QMC) methods have been created specifically to answer this problem. As opposed to MC methods, QMC methods are deterministic. Which means that the points are not IID, but each new point knows about previous points. The result is that we can construct samples with good coverage of the space.
Deterministic does not mean that samples are always the same: the sequences can be scrambled.
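For instance (assuming SciPy ≥ 1.7), two scrambled Sobol’ engines produce different point sets while retaining the same balance properties:

```python
from scipy.stats import qmc

# Two scrambled Sobol' engines: different draws, same balance properties
qrng_a = qmc.Sobol(d=2, scramble=True)
qrng_b = qmc.Sobol(d=2, scramble=True)
sample_a = qrng_a.random(n=8)   # 8 = 2**3, respecting the base-2 structure
sample_b = qrng_b.random(n=8)
# sample_a and sample_b differ, yet both cover the unit square evenly
```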
Starting with version 1.7, SciPy provides QMC methods in scipy.stats.qmc.
Let’s generate 2 samples with MC and a QMC method named Sobol’.
n, d = 256, 2
rng = np.random.default_rng()
sample_mc = rng.random(size=(n, d))
qrng = qmc.Sobol(d=d)
sample_qmc = qrng.random(n=n)
A very similar interface, but as seen below, with radically different results.
The 2D space clearly exhibits fewer empty areas and fewer clusters with the QMC sample.
Beyond the visual improvement in quality, there are metrics to assess the quality of a sample. Geometrical criteria are commonly used: one can compute the distances (L1, L2, etc.) between all pairs of points. But there are also statistical criteria, such as the discrepancy.
qmc.discrepancy(sample_mc)
# 0.0009
qmc.discrepancy(sample_qmc)
# 1.1e-05
The lower the value, the better the quality.
If this still does not convince you, let’s look at a concrete example: integrating a function. Let’s look at the mean of the squared sum in 5 dimensions:
$$f(\mathbf{x}) = \left( \sum_{j=1}^{5}x_j \right)^2,$$
with $x_j \sim \mathcal{U}(0,1)$. It has a known mean value, $\mu = 5/3+5(5-1)/4$. By sampling points, we can compute that mean numerically.
The samplings are done 99 times and averaged. The variance is not reported for simplicity, just know that it’s guaranteed to be lower with QMC than with MC.
dim = 5
ref = 5 / 3 + 5 * (5 - 1) / 4
n_conv = 99
ns_gen = 2 ** np.arange(4, 13)
def func(sample):
    # dim 5, true value 5/3 + 5*(5 - 1)/4
    return np.sum(sample, axis=1) ** 2
def conv_method(sampler, func, n_samples, n_conv, ref):
    samples = [sampler(n_samples) for _ in range(n_conv)]
    samples = np.array(samples)
    evals = [np.sum(func(sample)) / n_samples for sample in samples]
    squared_errors = (ref - np.array(evals)) ** 2
    rmse = (np.sum(squared_errors) / n_conv) ** 0.5
    return rmse
# Analysis
sample_mc_rmse = []
sample_sobol_rmse = []
rng = np.random.default_rng()
for ns in ns_gen:
    # Monte Carlo
    sampler_mc = lambda x: rng.random((x, dim))
    conv_res = conv_method(sampler_mc, func, ns, n_conv, ref)
    sample_mc_rmse.append(conv_res)
    # Sobol'
    engine = qmc.Sobol(d=dim)
    conv_res = conv_method(engine.random, func, ns, 1, ref)
    sample_sobol_rmse.append(conv_res)
sample_mc_rmse = np.array(sample_mc_rmse)
sample_sobol_rmse = np.array(sample_sobol_rmse)
# Plot
fig, ax = plt.subplots(figsize=(4, 4))
ax.set_aspect("equal")
# MC
ratio = sample_mc_rmse[0] / ns_gen[0] ** (-1 / 2)
ax.plot(ns_gen, ns_gen ** (-1 / 2) * ratio, ls="-", c="k")
ax.scatter(ns_gen, sample_mc_rmse, label="MC: np.random")
# Sobol'
ratio = sample_sobol_rmse[0] / ns_gen[0] ** (-2 / 2)
ax.plot(ns_gen, ns_gen ** (-2 / 2) * ratio, ls="-", c="k")
ax.scatter(ns_gen, sample_sobol_rmse, label="QMC: qmc.Sobol")
ax.set_xlabel(r"$N_s$")
ax.set_xscale("log")
ax.set_xticks(ns_gen)
ax.set_xticklabels([rf"$2^{{{ns}}}$" for ns in np.arange(4, 13)])
ax.set_ylabel(r"$\log (\epsilon)$")
ax.set_yscale("log")
ax.legend(loc="upper right")
fig.tight_layout()
plt.show()
With MC the approximation error follows a theoretical rate of $O(n^{-1/2})$. But, QMC methods have better rates of convergence and achieve $O(n^{-1})$ for this function–and even better rates on very smooth functions.
This means that using $2^8=256$ points from Sobol’ leads to a lower error than using $2^{12}=4096$ points from MC! When the function evaluation is costly, it can bring huge computational savings.
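Plugging the two theoretical rates into this claim (with constants omitted, so only the scaling is illustrated):

```python
# Theoretical error rates, constants omitted
mc_error_4096 = 4096 ** -0.5   # O(n**-0.5) at n = 2**12 -> 1/64
qmc_error_256 = 256 ** -1.0    # O(n**-1)   at n = 2**8  -> 1/256
# QMC with 16x fewer points still has a 4x smaller error
```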
But there is more! Another great use of QMC is to sample arbitrary distributions. In SciPy 1.8, there are new classes of samplers that allow you to sample from any custom distribution, and some of these methods can use QMC through a qrvs method.
Here is an example with a distribution from SciPy: fisk. We generate an MC sample from the distribution (either directly with fisk.rvs or using NumericalInverseHermite.rvs) and another sample with QMC using NumericalInverseHermite.qrvs.
import scipy.stats as stats
from scipy.stats import sampling
# Any distribution
c = 3.9
dist = stats.fisk(c)
# MC
rng = np.random.default_rng()
sample_mc = dist.rvs(128, random_state=rng)
# QMC
rng_dist = sampling.NumericalInverseHermite(dist)
# sample_mc = rng_dist.rvs(128, random_state=rng) # MC alternative same as above
qrng = qmc.Sobol(d=1)
sample_qmc = rng_dist.qrvs(128, qmc_engine=qrng)
Let’s visualize the difference between MC and QMC by calculating the empirical Probability Density Function (PDF). The QMC results are clearly superior to MC.
# Visualization
fig, axs = plt.subplots(1, 2, sharey=True, sharex=True, figsize=(8, 4))
x = np.linspace(dist.ppf(0.01), dist.ppf(0.99), 100)
pdf = dist.pdf(x)
delta = np.max(pdf) * 5e-2
samples = {"MC: np.random": sample_mc, "QMC: qmc.Sobol": sample_qmc}
for ax, sample in zip(axs, samples):
ax.set_title(sample)
ax.plot(x, pdf, "-", lw=3, label="fisk PDF")
ax.plot(samples[sample], -delta - delta * np.random.random(128), "+k")
kde = stats.gaussian_kde(samples[sample])
ax.plot(x, kde(x), "-.", lw=3, label="empirical PDF")
# or use a histogram
# ax.hist(sample, density=True, histtype='stepfilled', alpha=0.2)
ax.set_xlim([0, 3])
axs[0].legend(loc="best")
fig.supylabel("Density")
fig.supxlabel("Sample value")
fig.tight_layout()
plt.show()
Careful readers will note that there is no seeding. This is intentional, as noted at the beginning of this article. You might run this code again and get better results with MC. But only sometimes. And that’s exactly my point: on average, you are guaranteed more consistent, higher-quality results with QMC. I invite you to try it and see for yourself!
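As a sketch of that claim, the experiment below compares the average absolute error of MC and scrambled-Sobol’ estimates over repeated runs. The integrand $f(x)=x^2$ is a toy example of mine, not the function from this article, and the seeds are there only so the comparison is reproducible:

```python
import numpy as np
from scipy.stats import qmc

# Toy integrand for this sketch: E[X^2] = 1/3 for X ~ U(0, 1)
f = lambda x: x**2
true_value = 1 / 3
n, reps = 2**8, 50

# MC: average absolute error over 50 independent runs
rng = np.random.default_rng(0)
mc_err = np.mean([abs(f(rng.random(n)).mean() - true_value) for _ in range(reps)])

# QMC: same, with a freshly scrambled Sobol' sequence per run
qmc_err = np.mean(
    [
        abs(f(qmc.Sobol(d=1, scramble=True, seed=s).random_base2(m=8).ravel()).mean()
            - true_value)
        for s in range(reps)
    ]
)
print(f"average |error|  MC: {mc_err:.5f}  QMC: {qmc_err:.5f}")
```

On average the QMC error is an order of magnitude smaller at the same sample size.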
I hope that I convinced you to use QMC the next time you need random numbers. QMC is superior to MC, period.
There is an extensive body of literature and rigorous proofs. One reason MC is still more popular is that QMC is harder to implement and, depending on the method, there are rules to follow.
Take the Sobol’ method we used: you must use exactly $2^n$ samples. If you don’t, you will break some of its properties and end up with the same performance as MC. This is why some people argue that QMC is not better: they simply don’t use the methods properly, hence fail to see any benefits and conclude that MC is “enough”.
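SciPy’s Sobol’ engine even exposes random_base2 so that you draw exactly $2^m$ points; a minimal sketch:

```python
from scipy.stats import qmc

engine = qmc.Sobol(d=2, scramble=False)
sample = engine.random_base2(m=8)  # exactly 2^8 = 256 points
print(sample.shape)  # (256, 2)
# engine.random(100) would also run, but SciPy warns that the balance
# properties of Sobol' points require the sample size to be a power of 2.
```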
In scipy.stats.qmc, we went to great lengths to explain how to use the methods, and we added some explicit warnings to make them accessible and useful to everyone.
With an extensive and high-quality ecosystem of libraries, scientific Python has emerged as the leading platform for data analysis. This ecosystem is sustained largely by volunteers working on independent projects with separate mailing lists, websites, roadmaps, documentation, engineering and packaging solutions, and governance structures.
The Scientific Python project aims to better coordinate the ecosystem and prepare the software projects in this ecosystem for the next decade of data science.
There is no shortage of blog posts around the web about how to use and explore different packages in the scientific Python ecosystem. However, some of them are outdated or incomplete, and many don’t follow the best practices advocated by the maintainers of these packages.
In addition, we would like to create a central, community-driven location where Scientific Python projects can make announcements and share information.
Our project aims to be the definitive community blog—for people looking to make use of these libraries in education, research and industry, contribute to them, or maintain them—written, reviewed, and approved by the community of developers and users.
While our core projects (NumPy, SciPy, Matplotlib, scikit-image, NetworkX, etc.) will be regularly contributing content, we also would like to increase the number of contributors by providing support to newer members to generate high-quality, peer-reviewed blog posts.
Our goal is to populate the https://blog.scientific-python.org/ website with high-quality content, reviewed and approved by the maintainers of the libraries in the ecosystem. The main goal of these documents is to centralize information relevant to all (or most) projects in the ecosystem, reducing maintenance cost by keeping it in one place.
This project aims to:
To ensure this project is successful, it is recommended that the technical writer has some familiarity with at least a few of Scientific Python’s core projects.
We would consider the project successful if:
We anticipate the project will be developed over six months, including onboarding five technical writers, reviewing existing material, developing blog post ideas with the project mentors and blog editorial board, writing and revising the blog posts, and providing feedback on the submission and review process.
| Dates | Action Items |
|---|---|
| May | Onboarding |
| June | Review existing documentation |
| July | Update contributor guide |
| August–October | Create and edit content |
| November | Project completion |
| Budget item | Amount | Running Total | Notes/justifications |
|---|---|---|---|
| Technical writers (5) | $15,000.00 | $15,000.00 | $3,000 / writer |
| TOTAL | | $15,000.00 | |
The Scientific Python project is a new initiative, and this is our first time participating in Google Season of Docs. However, both Jarrod Millman and Ross Barnowski are established members of the Python community, with a vast collective experience in mentoring, managing and maintaining large open source projects.
Jarrod cofounded the Neuroimaging in Python project. He was the NumPy and SciPy release manager from 2007 to 2009. He cofounded NumFOCUS and served on its board from 2011 to 2015. Currently, he is the release manager of NetworkX and cofounder of the Scientific Python project.
Both mentors, Jarrod and Ross, have mentored many new contributors on multiple projects, including NumPy, SciPy, and NetworkX. Ross has served as a co-mentor for three former GSoD students on the NumPy project, largely related to generating new content for tutorials, as well as refactoring existing user documentation.
Links:
This tutorial will teach you how to create custom tables in Matplotlib, which are extremely flexible in terms of design and layout. You’ll hopefully see that the code is very straightforward! In fact, the main methods we will be using are ax.text() and ax.plot().
I want to give a lot of credit to Todd Whitehead who has created these types of tables for various Basketball teams and players. His approach to tables is nothing short of fantastic due to the simplicity in design and how he manages to effectively communicate data to his audience. I was very much inspired by his approach and wanted to be able to achieve something similar in Matplotlib.
Before I begin with the tutorial, I wanted to go through the logic behind my approach as I think it’s valuable and transferable to other visualizations (and tools!).
With that, I would like you to think of tables as highly structured and organized scatterplots. Let me explain why: for me, scatterplots are the most fundamental chart type (regardless of tool).
For example, ax.plot() automatically “connects the dots” to form a line chart, and ax.bar() automatically “draws rectangles” across a set of coordinates. Very often (again, regardless of tool) we may not see this process happening. The point is, it is useful to think of any chart as a scatterplot, or simply as a collection of shapes based on xy coordinates. This logic / thought process can unlock a ton of custom charts, as the only thing you need are the coordinates (which can be mathematically computed).
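To make that concrete, here is a tiny sketch (not part of the table we will build, with made-up data) that draws a “bar chart” by hand, as nothing more than rectangles placed at xy coordinates:

```python
import matplotlib.pyplot as plt
import matplotlib.patches as patches

values = [3, 5, 2]  # toy data, purely for illustration
fig, ax = plt.subplots()
for x, height in enumerate(values):
    # each "bar" is just a rectangle anchored at an (x, y) coordinate
    ax.add_patch(patches.Rectangle((x - 0.4, 0), 0.8, height))
ax.set_xlim(-1, len(values))
ax.set_ylim(0, max(values) + 1)
```

Once you see a bar chart as three rectangles at known coordinates, a table is just text at known coordinates.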
With that in mind, we can move on to tables! So rather than plotting rectangles or circles we want to plot text and gridlines in a highly organized manner.
We will aim to create a table like this, which I have posted on Twitter here. Note, the only elements added outside of Matplotlib are the fancy arrows and their descriptions.
Importing required libraries.
import matplotlib as mpl
import matplotlib.patches as patches
from matplotlib import pyplot as plt
First, we will need to set up a coordinate space - I like two approaches:
I want to create a coordinate space for a table containing 6 columns and 10 rows. This means (similar to pandas row/column indices) each row will have an index between 0-9 and each column will have an index between 0-6 (technically one more column than we defined, but one of the columns contains a lot of text and will span two column “indices”).
# first, we'll create a new figure and axis object
fig, ax = plt.subplots(figsize=(8, 6))
# set the number of rows and cols for our table
rows = 10
cols = 6
# create a coordinate system based on the number of rows/columns
# adding a bit of padding on bottom (-1), top (1), right (0.5)
ax.set_ylim(-1, rows + 1)
ax.set_xlim(0, cols + 0.5)
Now, the data we want to plot is sports (football) data. We have information about 10 players and some values against a number of different metrics (which will form our columns) such as goals, shots, passes etc.
# sample data
data = [
{"id": "player10", "shots": 1, "passes": 79, "goals": 0, "assists": 1},
{"id": "player9", "shots": 2, "passes": 72, "goals": 0, "assists": 1},
{"id": "player8", "shots": 3, "passes": 47, "goals": 0, "assists": 0},
{"id": "player7", "shots": 4, "passes": 99, "goals": 0, "assists": 5},
{"id": "player6", "shots": 5, "passes": 84, "goals": 1, "assists": 4},
{"id": "player5", "shots": 6, "passes": 56, "goals": 2, "assists": 0},
{"id": "player4", "shots": 7, "passes": 67, "goals": 0, "assists": 3},
{"id": "player3", "shots": 8, "passes": 91, "goals": 1, "assists": 1},
{"id": "player2", "shots": 9, "passes": 75, "goals": 3, "assists": 2},
{"id": "player1", "shots": 10, "passes": 70, "goals": 4, "assists": 0},
]
Next, we will start plotting the table (as a structured scatterplot). I did promise that the code will be very simple, less than 10 lines really, here it is:
# from the sample data, each dict in the list represents one row
# each key in the dict represents a column
for row in range(rows):
# extract the row data from the list
d = data[row]
# the y (row) coordinate is based on the row index (loop)
# the x (column) coordinate is defined based on the order I want to display the data in
# player name column
ax.text(x=0.5, y=row, s=d["id"], va="center", ha="left")
# shots column - this is my "main" column, hence bold text
ax.text(x=2, y=row, s=d["shots"], va="center", ha="right", weight="bold")
# passes column
ax.text(x=3, y=row, s=d["passes"], va="center", ha="right")
# goals column
ax.text(x=4, y=row, s=d["goals"], va="center", ha="right")
# assists column
ax.text(x=5, y=row, s=d["assists"], va="center", ha="right")
As you can see, we are starting to get a basic wireframe of our table. Let’s add column headers to further make this scatterplot look like a table.
# Add column headers
# plot them at height y=9.75 to decrease the space to the
# first data row (you'll see why later)
ax.text(0.5, 9.75, "Player", weight="bold", ha="left")
ax.text(2, 9.75, "Shots", weight="bold", ha="right")
ax.text(3, 9.75, "Passes", weight="bold", ha="right")
ax.text(4, 9.75, "Goals", weight="bold", ha="right")
ax.text(5, 9.75, "Assists", weight="bold", ha="right")
ax.text(6, 9.75, "Special\nColumn", weight="bold", ha="right", va="bottom")
The rows and columns of our table are now done. The only thing that is left to do is formatting - much of this is personal choice. The following elements I think are generally useful when it comes to good table design (more research here):
Gridlines: Some level of gridlines are useful (less is more). Generally some guidance to help the audience trace their eyes or fingers across the screen can be helpful (this way we can group items too by drawing gridlines around them).
for row in range(rows):
ax.plot([0, cols + 1], [row - 0.5, row - 0.5], ls=":", lw=0.5, c="grey")
# add a main header divider
# remember that we plotted the header row slightly closer to the first data row
# this helps to visually separate the header row from the data rows
# each data row is 1 unit in height, thus bringing the header closer to our
# gridline gives it a distinctive difference.
ax.plot([0, cols + 1], [9.5, 9.5], lw=0.5, c="black")
Another important element for tables in my opinion is highlighting the key data points. We already bolded the values that are in the “Shots” column but we can further shade this column to give it further importance to our readers.
# highlight the column we are sorting by
# using a rectangle patch
rect = patches.Rectangle(
(1.5, -0.5), # bottom left starting position (x,y)
0.65, # width
10, # height
ec="none",
fc="grey",
alpha=0.2,
zorder=-1,
)
ax.add_patch(rect)
We’re almost there. The magic piece is ax.axis("off"). This hides the axis, axis ticks, labels and everything “attached” to the axes, which means our table now looks like a clean table!
ax.axis("off")
Adding a title is also straightforward.
ax.set_title("A title for our table!", loc="left", fontsize=18, weight="bold")
Finally, if you wish to add images, sparklines, or other custom shapes and patterns then we can do this too.
To achieve this we will use fig.add_axes() to create a new set of floating axes based on the figure coordinates (these are different from our axes coordinate system!).
Remember that figure coordinates by default are between 0 and 1. [0,0] is the bottom left corner of the entire figure. If you’re unfamiliar with the differences between a figure and axes then check out Matplotlib’s Anatomy of a Figure for further details.
newaxes = []
for row in range(rows):
# offset each new axes by a set amount depending on the row
# this is probably the most fiddly aspect (TODO: some neater way to automate this)
newaxes.append(fig.add_axes([0.75, 0.725 - (row * 0.063), 0.12, 0.06]))
You can see below what these floating axes will look like (I say floating because they’re on top of our main axis object). The only tricky thing is figuring out the xy (figure) coordinates for these.
These floating axes behave like any other Matplotlib axes. Therefore, we have access to the same methods such as ax.bar(), ax.plot(), patches, etc. Importantly, each axis has its own independent coordinate system. We can format them as we wish.
# plot dummy data as a sparkline for illustration purposes
# you can plot _anything_ here, images, patches, etc.
newaxes[0].plot([0, 1, 2, 3], [1, 2, 0, 2], c="black")
newaxes[0].set_ylim(-1, 3)
# once again, the key is to hide the axis!
newaxes[0].axis("off")
That’s it, custom tables in Matplotlib. I did promise very simple code and an ultra-flexible design in terms of what you want / need. You can adjust sizes, colors and pretty much anything with this approach and all you need is simply a loop that plots text in a structured and organized manner. I hope you found it useful. Link to a Google Colab notebook with the code is here
As part of the University of North Carolina BIOL222 class, Dr. Catherine Kehl asked her students to “use matplotlib.pyplot to make art.” BIOL222 is Introduction to Programming, aimed at students with no programming background. The emphasis is on practical, hands-on active learning.
The students completed the assignment with festive enthusiasm around Halloween. Here are some great examples:
Harris Davis showed an affinity for pumpkins, opting to go 3D!
# get libraries for 3d plotting
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
# make a pumpkin :)
rho = np.linspace(0, 3 * np.pi, 32)
theta, phi = np.meshgrid(rho, rho)
r, R = 0.5, 0.5
X = (R + r * np.cos(phi)) * np.cos(theta)
Y = (R + r * np.cos(phi)) * np.sin(theta)
Z = r * np.sin(phi)
# make the stem
theta1 = np.linspace(0, 2 * np.pi, 90)
r1 = np.linspace(0, 3, 50)
T1, R1 = np.meshgrid(theta1, r1)
X1 = R1 * 0.5 * np.sin(T1)
Y1 = R1 * 0.5 * np.cos(T1)
Z1 = -(np.sqrt(X1**2 + Y1**2) - 0.7)
Z1[Z1 < 0.3] = np.nan
Z1[Z1 > 0.7] = np.nan
# Display the pumpkin & stem
fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.set_xlim3d(-1, 1)
ax.set_ylim3d(-1, 1)
ax.set_zlim3d(-1, 1)
ax.plot_surface(X, Y, Z, color="tab:orange", rstride=1, cstride=1)
ax.plot_surface(X1, Y1, Z1, color="tab:green", rstride=1, cstride=1)
plt.show()
Bryce Desantis stuck to the biological theme and demonstrated fractal art.
import numpy as np
import matplotlib.pyplot as plt
# Barnsley's Fern - Fractal; en.wikipedia.org/wiki/Barnsley_…
# functions for each part of fern:
# stem
def stem(x, y):
return (0, 0.16 * y)
# smaller leaflets
def smallLeaf(x, y):
return (0.85 * x + 0.04 * y, -0.04 * x + 0.85 * y + 1.6)
# large left leaflets
def leftLarge(x, y):
return (0.2 * x - 0.26 * y, 0.23 * x + 0.22 * y + 1.6)
# large right leaflets
def rightLarge(x, y):
return (-0.15 * x + 0.28 * y, 0.26 * x + 0.24 * y + 0.44)
componentFunctions = [stem, smallLeaf, leftLarge, rightLarge]
# number of data points and frequencies for parts of fern generated:
# lists with all 75000 datapoints
datapoints = 75000
x, y = 0, 0
datapointsX = []
datapointsY = []
# For 75,000 datapoints
for n in range(datapoints):
FrequencyFunction = np.random.choice(componentFunctions, p=[0.01, 0.85, 0.07, 0.07])
x, y = FrequencyFunction(x, y)
datapointsX.append(x)
datapointsY.append(y)
# Scatter plot & scaled down to 0.1 to show more definition:
plt.scatter(datapointsX, datapointsY, s=0.1, color="g")
# Title of Figure
plt.title("Barnsley's Fern - Assignment 3")
# Changing background color
ax = plt.gca()
ax.set_facecolor("#d8d7bf")
Grace Bell got a little trippy with this rotationally symmetric art. It’s pretty cool how she captured mouse events. It reminds us of a flower. What do you see?
import matplotlib.pyplot as plt
from matplotlib.tri import Triangulation
from matplotlib.patches import Polygon
import numpy as np
# I found this sample code online and manipulated it to make the art piece!
# I was interested in it because it combined what we used for functions as well as what we used for plotting with (x, y)
def update_polygon(tri):
if tri == -1:
points = [0, 0, 0]
else:
points = triang.triangles[tri]
xs = triang.x[points]
ys = triang.y[points]
polygon.set_xy(np.column_stack([xs, ys]))
def on_mouse_move(event):
if event.inaxes is None:
tri = -1
else:
tri = trifinder(event.xdata, event.ydata)
update_polygon(tri)
ax.set_title(f"In triangle {tri}")
event.canvas.draw()
# this is the info that creates the angles
n_angles = 14
n_radii = 7
min_radius = 0.1 # the radius of the middle circle can move with this variable
radii = np.linspace(min_radius, 0.95, n_radii)
angles = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
angles = np.repeat(angles[..., np.newaxis], n_radii, axis=1)
angles[:, 1::2] += np.pi / n_angles
x = (radii * np.cos(angles)).flatten()
y = (radii * np.sin(angles)).flatten()
triang = Triangulation(x, y)
triang.set_mask(
np.hypot(x[triang.triangles].mean(axis=1), y[triang.triangles].mean(axis=1))
< min_radius
)
trifinder = triang.get_trifinder()
fig, ax = plt.subplots(subplot_kw={"aspect": "equal"})
ax.triplot(
triang, "y+-"
) # made the color of the plot yellow and there are "+" for the data points but you can't really see them because of the lines crossing
polygon = Polygon([[0, 0], [0, 0]], facecolor="y")
update_polygon(-1)
ax.add_patch(polygon)
fig.canvas.mpl_connect("motion_notify_event", on_mouse_move)
plt.show()
As a bonus, did you like that fox in the banner? That was created (and well documented) by Emily Foster!
import numpy as np
import matplotlib.pyplot as plt
plt.axis("off")
# head
xhead = np.arange(-50, 50, 0.1)
yhead = -0.007 * (xhead * xhead) + 100
plt.plot(xhead, yhead, "darkorange")
# outer ears
xearL = np.arange(-45.8, -9, 0.1)
yearL = -0.08 * (xearL * xearL) - 4 * xearL + 70
xearR = np.arange(9, 45.8, 0.1)
yearR = -0.08 * (xearR * xearR) + 4 * xearR + 70
plt.plot(xearL, yearL, "black")
plt.plot(xearR, yearR, "black")
# inner ears
xinL = np.arange(-41.1, -13.7, 0.1)
yinL = -0.08 * (xinL * xinL) - 4 * xinL + 59
xinR = np.arange(13.7, 41.1, 0.1)
yinR = -0.08 * (xinR * xinR) + 4 * xinR + 59
plt.plot(xinL, yinL, "salmon")
plt.plot(xinR, yinR, "salmon")
# bottom of face
xfaceL = np.arange(-49.6, -14, 0.1)
xfaceR = np.arange(14, 49.3, 0.1)
xfaceM = np.arange(-14, 14, 0.1)
plt.plot(xfaceL, abs(xfaceL), "darkorange")
plt.plot(xfaceR, abs(xfaceR), "darkorange")
plt.plot(xfaceM, abs(xfaceM), "black")
# nose
xnose = np.arange(-14, 14, 0.1)
ynose = -0.03 * (xnose * xnose) + 20
plt.plot(xnose, ynose, "black")
# whiskers
xwhiskR = [50, 70, 55, 70, 55, 70, 49.3]
xwhiskL = [-50, -70, -55, -70, -55, -70, -49.3]
ywhisk = [82.6, 85, 70, 65, 60, 45, 49.3]
plt.plot(xwhiskR, ywhisk, "darkorange")
plt.plot(xwhiskL, ywhisk, "darkorange")
# eyes
plt.plot(20, 60, color="black", marker="o", markersize=15)
plt.plot(-20, 60, color="black", marker="o", markersize=15)
plt.plot(22, 62, color="white", marker="o", markersize=6)
plt.plot(-18, 62, color="white", marker="o", markersize=6)
We look forward to seeing these students continue in their plotting and scientific adventures!
It’s my great pleasure to announce that I’ve finished my book on matplotlib and it is now freely available at www.labri.fr/perso/nrougier/scientific-visualization.html while sources for the book are hosted at github.com/rougier/scientific-visualization-book.
The Python scientific visualisation landscape is huge. It is composed of a myriad of tools, ranging from the most versatile and widely used down to the more specialised and confidential. Some of these tools are community based while others are developed by companies. Some are made specifically for the web, others are for the desktop only, some deal with 3D and large data, while others target flawless 2D rendering. In this landscape, Matplotlib has a very special place. It is a versatile and powerful library that allows you to design very high quality figures, suitable for scientific publishing. It also offers a simple and intuitive interface as well as an object oriented architecture that allows you to tweak anything within a figure. Finally, it can be used as a regular graphic library in order to design non‐scientific figures.

This book is organized into four parts. The first part considers the fundamental principles of the Matplotlib library. This includes reviewing the different parts that constitute a figure, the different coordinate systems, the available scales and projections, and we’ll also introduce a few concepts related to typography and colors. The second part is dedicated to the actual design of a figure. After introducing some simple rules for generating better figures, we’ll then go on to explain the Matplotlib defaults and styling system before diving on into figure layout organization. We’ll then explore the different types of plot available and see how a figure can be ornamented with different elements. The third part is dedicated to more advanced concepts, namely 3D figures, optimization & animation. The fourth and final part is a collection of showcases.
I have been creating common visualisations like scatter plots, bar charts, beeswarms etc. for a while and thought about doing something different. Since I’m an avid football fan, I thought of ideas to represent players’ usage or involvement over a period (a season, a couple of seasons). I have seen some cool visualisations like donuts which depict usage and I wanted to make something different and simple to understand. I thought about representing batteries as a form of player usage and it made a lot of sense.
For players who have barely been used (played fewer minutes), we show a nearly full battery, since they have plenty of energy left in the tank. For heavily used players, we do the opposite, i.e. show a drained battery.
So, what is the purpose of a battery chart? You can use it to show usage, consumption, involvement, fatigue etc. (anything usage related).
The image below is a sample view of how a battery would look in our figure, although a single battery isn’t exactly what we are going to recreate in this tutorial.
Before jumping into the tutorial, I would like to note that the function can be tweaked depending on the number of subplots or any other size parameter. Coming to the figure we are going to plot, there is a series of steps to consider, which we will follow one by one:
What is our use case?
The first and foremost part is to import the essential libraries so that we can leverage the functions within them. In this case, we will import the libraries we need.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from PIL import Image
from matplotlib.path import Path
from matplotlib.patches import FancyBboxPatch, PathPatch, Wedge
The functions imported from matplotlib.path and matplotlib.patches will be used to draw lines, rectangles, boxes and so on to display the battery.

The next part is to define a function named draw_battery(), which will be used to draw the battery. Later on, we will call this function with certain parameters to build the figure as we require. The code below builds the battery:
def draw_battery(
fig,
ax,
percentage=0,
bat_ec="grey",
tip_fc="none",
tip_ec="grey",
bol_fc="#fdfdfd",
bol_ec="grey",
invert_perc=False,
):
"""
Parameters
----------
fig : figure
The figure object for the plot
ax : axes
The axes/axis variable of the figure.
percentage : int, optional
This is the battery percentage - size of the fill. The default is 0.
bat_ec : str, optional
The edge color of the battery/cell. The default is "grey".
tip_fc : str, optional
The fill/face color of the tip of battery. The default is "none".
tip_ec : str, optional
The edge color of the tip of battery. The default is "grey".
bol_fc : str, optional
The fill/face color of the lightning bolt. The default is "#fdfdfd".
bol_ec : str, optional
The edge color of the lightning bolt. The default is "grey".
invert_perc : bool, optional
A flag to invert the percentage shown inside the battery. The default is False
Returns
-------
None.
"""
try:
fig.set_size_inches((15, 15))
ax.set(xlim=(0, 20), ylim=(0, 5))
ax.axis("off")
if invert_perc:
percentage = 100 - percentage
# color options - #fc3d2e red & #53d069 green & #f5c54e yellow
bat_fc = (
"#fc3d2e"
if percentage <= 20
else "#53d069" if percentage >= 80 else "#f5c54e"
)
"""
Static battery and tip of battery
"""
battery = FancyBboxPatch(
(5, 2.1),
10,
0.8,
"round, pad=0.2, rounding_size=0.5",
fc="none",
ec=bat_ec,
fill=True,
ls="-",
lw=1.5,
)
tip = Wedge(
(15.35, 2.5), 0.2, 270, 90, fc="none", ec=bat_ec, fill=True, ls="-", lw=3
)
ax.add_artist(battery)
ax.add_artist(tip)
"""
Filling the battery cell with the data
"""
filler = FancyBboxPatch(
(5.1, 2.13),
(percentage / 10) - 0.2,
0.74,
"round, pad=0.2, rounding_size=0.5",
fc=bat_fc,
ec=bat_fc,
fill=True,
ls="-",
lw=0,
)
ax.add_artist(filler)
"""
Adding a lightning bolt in the centre of the cell
"""
verts = [
(10.5, 3.1), # top
(8.5, 2.4), # left
(9.5, 2.4), # left mid
(9, 1.9), # bottom
(11, 2.6), # right
(10, 2.6), # right mid
(10.5, 3.1), # top
]
codes = [
Path.MOVETO,
Path.LINETO,
Path.LINETO,
Path.LINETO,
Path.LINETO,
Path.LINETO,
Path.CLOSEPOLY,
]
path = Path(verts, codes)
bolt = PathPatch(path, fc=bol_fc, ec=bol_ec, lw=1.5)
ax.add_artist(bolt)
except Exception as e:
import traceback
print("EXCEPTION FOUND!!! SAFELY EXITING!!! Find the details below:")
traceback.print_exc()
Once we have created the API or function, we can now implement the same. And for that, we need to feed in required data. In our example, we have a dataset that has the list of Liverpool players and the minutes they have played in the past two seasons. The data was collected from Football Reference aka FBRef.
We use the read_excel function from the pandas library to read our dataset, which is stored as an Excel file.
data = pd.read_excel("Liverpool Minutes Played.xlsx")
Now, let us have a look at how the data looks by listing out the first five rows of our dataset -
data.head()
Now that everything is ready, we go ahead and plot the data. We have 25 players in our dataset, so a 5 x 5 figure is the one to go for. We’ll also add some headers and set the colors accordingly.
fig, ax = plt.subplots(5, 5, figsize=(5, 5))
facecolor = "#00001a"
fig.set_facecolor(facecolor)
fig.text(
0.35,
0.95,
"Liverpool: Player Usage/Involvement",
color="white",
size=18,
fontname="Libre Baskerville",
fontweight="bold",
)
fig.text(
0.25,
0.92,
"Data from 19/20 and 20/21 | Battery percentage indicate usage | less battery = played more/ more involved",
color="white",
size=12,
fontname="Libre Baskerville",
)
We have now filled in the appropriate headers, figure size, etc. The next step is to plot all the axes, i.e. the batteries for each player. p is the variable that iterates through the dataframe and fetches each player's data. The draw_battery() function call will obviously plot the battery. We also add the required labels along with that: player name and usage rate/percentage in this case.
p = 0 # The variable that'll iterate through each row of the dataframe (for every player)
for i in range(0, 5):
for j in range(0, 5):
ax[i, j].text(
10,
4,
str(data.iloc[p, 0]),
color="white",
size=14,
fontname="Lora",
va="center",
ha="center",
)
ax[i, j].set_facecolor(facecolor)
draw_battery(fig, ax[i, j], round(data.iloc[p, 8]), invert_perc=True)
"""
Add the battery percentage as text if a label is required
"""
ax[i, j].text(
5,
0.9,
"Usage - " + str(int(100 - round(data.iloc[p, 8]))) + "%",
fontsize=12,
color="white",
)
p += 1
Now that everything is almost done, we do some final touch-up; this part is completely optional. Since the visualisation is focused on Liverpool players, I add Liverpool’s logo and my watermark. Also, crediting the data source/provider is an ethical habit, so we do that as well before displaying the plot.
liv = Image.open("Liverpool.png", "r")
liv = liv.resize((80, 80))
liv = np.array(liv).astype(float) / 255
fig.figimage(liv, 30, 890)
fig.text(
0.11,
0.08,
"viz: Rithwik Rajendran/@rithwikrajendra",
color="lightgrey",
size=14,
fontname="Lora",
)
fig.text(
0.8, 0.08, "data: FBRef/Statsbomb", color="lightgrey", size=14, fontname="Lora"
)
plt.show()
So, we have the plot below. You can customise the design as you want in the draw_battery() function: change sizes, colours, shapes, etc.
Matplotlib: Revisiting Text/Font Handling
To kick things off for the final report, here’s a meme as a nod to the previous blogs.
Matplotlib is a comprehensive library for creating static, animated, and interactive visualizations, which has become a de-facto Python plotting library.
Much of the implementation behind its font manager is inspired by W3C compliant algorithms, allowing users to interact with font properties like font-size
, font-weight
, font-family
, etc.
By “not ideal”, I do not mean that the library has design flaws, but that the design was engineered in the early 2000s, and is now outdated.
(PS: here’s the link to my GSoC proposal, if you’re interested)
Overall, the project was divided into two major subgoals:
But before we take each of them on, we should get an idea of some basic font terminology (there is a lot of it, and it is rightly confusing).
The PR: Clarify/Improve docs on family-names vs generic-families brings about a bit of clarity about some of these terms. The next section has a linked PR which also explains the types of fonts and how that is relevant to Matplotlib.
An easy-to-read guide on Fonts and Matplotlib was created with PR: [Doc] Font Types and Font Subsetting, which is currently live at Matplotlib’s DevDocs.
Taking an excerpt from one of my previous blogs (and the doc):
Fonts can be considered as a collection of these glyphs, so ultimately the goal of subsetting is to find out which glyphs are required for a certain array of characters, and embed only those within the output.
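As a rough sketch of what subsetting does in practice, one can subset a font with fontTools (the library used for Matplotlib’s Type 42 subsetting); the font name and the text "Hello" below are just examples:

```python
from fontTools import subset
from fontTools.ttLib import TTFont
import matplotlib.font_manager as fm

# Load a full font shipped with Matplotlib (thousands of glyphs)
path = fm.findfont("DejaVu Sans")
font = TTFont(path)
n_before = len(font.getGlyphOrder())

# Keep only the glyphs needed to render this exact text
subsetter = subset.Subsetter()
subsetter.populate(text="Hello")
subsetter.subset(font)
n_after = len(font.getGlyphOrder())
print(n_before, "->", n_after)  # only a handful of glyphs survive
```

Embedding only those surviving glyphs is what keeps the output document small.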
PDF, PS/EPS and SVG output document formats are special in that the text within them can be editable, i.e., one can copy/search text in documents (for example, in a PDF file) if the text is editable.
The PDF, PS/EPS and SVG backends used to support font subsetting for only a few font types. What that means is: before Summer ‘21, Matplotlib could generate Type 3 subsets for the PDF and PS/EPS backends, but it could not generate Type 42 / TrueType subsets.
With PR: Type42 subsetting in PS/PDF merged in, users can expect their PDF/PS/EPS documents to contain subsetted glyphs from the original fonts.
This is especially beneficial for people who wish to use commercial (or CJK) fonts, since many font licenses require subsetting so that the fonts can’t be trivially extracted from the output files Matplotlib generates.
Matplotlib was designed to work with a single font at runtime. A user could specify a font.family, which was supposed to correspond to the CSS property, but it was only used to find a single font present on the user’s system. Once that font was found (it almost always was, since Matplotlib ships with a set of default fonts), all user text was rendered with that one font, producing “tofu” for any character it did not contain.
It might seem like an outdated approach for text rendering now that we have concepts like font fallback, but these concepts weren’t widely discussed in the early 2000s. Even getting a single font to work was considered a hard engineering problem.
This was primarily because of the lack of any standardization for the representation of fonts (Adobe had its own font representation, and so did Apple, Microsoft, etc.).
Previous (notice Tofus) VS After (CJK font as fallback)
To migrate from a font-first approach to a text-first approach, there are multiple steps involved:
The very first (and crucial!) step is to get to a point where we have multiple font paths (ideally individual font files for the whole family). That is achieved with either:
Quoting one of my previous blogs:
Don’t break, a lot at stake!
My first approach was to change the existing public findfont API to incorporate multiple filepaths. But since Matplotlib has a very large userbase, there was a high chance it would break a chunk of people’s workflows:
First PR (left), Second PR (right)
Once we get a list of font paths, we need to change the internal representation of a “font”. Matplotlib has a utility called FT2Font, which is written in C++ and wrapped as a Python extension, which in turn is used throughout the backends. For all intents and purposes, it used to mean: FT2Font === SingleFont
(if you’re interested, here’s a meme about how FT2Font was named!)
But that is not the case anymore; here’s a flowchart to explain what happens now:
Font-Fallback Algorithm
With PR: Implement Font-Fallback in Matplotlib, every FT2Font object has a std::vector<FT2Font *> fallback_list, which is used for filling the parent cache, as can be seen in the self-explanatory flowchart.
For simplicity, only one type of cache (character -> FT2Font) is shown; the actual implementation has two caches, the one shown above and another for glyphs (glyph_id -> FT2Font).
Note: Only the parent’s APIs are used in some backends, so for each of the individual public functions like load_glyph, load_char, get_kerning, etc., we find the FT2Font object which has that glyph from the parent FT2Font cache!
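The caching idea can be sketched in plain Python (a toy model with made-up font names; the real FT2Font is C++ and also keeps a second glyph_id -> FT2Font cache):

```python
# Toy sketch of font fallback: ask each font in the fallback list for a
# character, and cache which font supplied it so later lookups skip the
# search entirely.

class Font:
    def __init__(self, name, charset):
        self.name = name
        self.charset = set(charset)

    def has_char(self, char):
        return char in self.charset

class FallbackFont:
    def __init__(self, fonts):
        self.fonts = fonts   # parent first, then fallbacks
        self.cache = {}      # char -> Font (the "parent cache")

    def font_for(self, char):
        if char not in self.cache:
            for font in self.fonts:
                if font.has_char(char):
                    self.cache[char] = font
                    break
            else:
                self.cache[char] = None  # "tofu": no font has this glyph
        return self.cache[char]

latin = Font("DejaVu Sans", "abc")
cjk = Font("Noto Sans CJK", "你好")
ff = FallbackFont([latin, cjk])
print(ff.font_for("a").name)   # → DejaVu Sans
print(ff.font_for("你").name)  # → Noto Sans CJK
```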
Now that we have multiple fonts to render a string, we also need to embed them for those special backends (i.e., PDF/PS, etc.). This was done with some patches to specific backends:
With this, one could create a PDF or a PS/EPS document with multiple fonts which are embedded (and subsetted!).
From small contributions to eventually working on a core module of such a huge library, the road was not what I had imagined, and I learnt a lot while designing solutions to these problems.
…since all plots will work their way through the new codepath!
I think that single statement is worth the whole GSoC project.
For the sake of statistics (and to make GSoC sound a bit less intimidating), here’s a list of contributions I made to Matplotlib before Summer ‘21, most of which are only a few lines of diff:
Created At | PR Title | Diff | Status |
---|---|---|---|
Nov 2, 2020 | Expand ScalarMappable.set_array to accept array-like inputs | (+28 −4) | MERGED |
Nov 8, 2020 | Add overset and underset support for mathtext | (+71 −0) | MERGED |
Nov 14, 2020 | Strictly increasing check with test coverage for streamplot grid | (+54 −2) | MERGED |
Jan 11, 2021 | WIP: Add support to edit subplot configurations via textbox | (+51 −11) | DRAFT |
Jan 18, 2021 | Fix over/under mathtext symbols | (+7,459 −4,169) | MERGED |
Feb 11, 2021 | Add overset/underset whatsnew entry | (+28 −17) | MERGED |
May 15, 2021 | Warn user when mathtext font is used for ticks | (+28 −0) | MERGED |
Here’s a list of PRs I opened during Summer ‘21:
From learning about software engineering fundamentals from Tom to learning about nitty-gritty details about font representations from Jouni;
From learning through Antony’s patches and pointers to receiving amazing feedback on these blogs from Hannah, it has been an adventure! 💯
Special Mentions: Frank, Srijan and Atharva for their helping hands!
And lastly, you, the reader; if you’ve been following my previous blogs, or if you’ve landed at this one directly, I thank you nevertheless. (one last meme, I promise!)
I know I speak for every developer out there, when I say it means a lot when you choose to look at their journey or their work product; it could as well be a tiny website, or it could be as big as designing a complete library!
I’m grateful to Matplotlib (under the parent organisation: NumFOCUS), and of course, Google Summer of Code for this incredible learning opportunity.
Farewell, reader! :’)
Consider contributing to Matplotlib (Open Source in general) ❤️
Welcome! This post is not going to discuss technical implementation details or theoretical work for my Google Summer of Code project, but rather serve as a summary and recap of the work that I did this summer.
I am very happy with the work I was able to accomplish and believe that I successfully completed my project.
My project was titled NetworkX: Implementing the Asadpour Asymmetric Traveling Salesman Problem Algorithm. The updated abstract given on the Summer of Code project page is below.
This project seeks to implement an approximation algorithm for the asymmetric traveling salesman problem developed by Asadpour et al., originally published in 2010 and revised in 2017. The project is broken into multiple methods, each of which has a set timetable within the project. We start by solving the Held-Karp relaxation using the ascent method from the original paper by Held and Karp. Assuming the result is fractional, we continue into the Asadpour algorithm (integral solutions are optimal by definition and immediately returned). We approximate the distribution of spanning trees on the undirected support of the Held-Karp solution using a maximum-entropy rounding method to construct a distribution of trees. Roughly speaking, the probability of sampling any given tree is proportional to the product of all its edge lambda values. We sample 2 log n trees from the distribution using an iterative approach developed by V. G. Kulkarni and choose the tree with the smallest cost after direction is restored to the arcs. Finally, the minimum tree is augmented using a minimum network flow algorithm and shortcut down to an O(log n / log log n) approximation of the minimum Hamiltonian cycle.
My proposal PDF for the 2021 Summer of Code can be found here.
All of my changes and additions to NetworkX are part of this pull request and can also be found on this branch in my fork of the GitHub repository, but I will be discussing the changes and commits in more detail later.
Also note that the commits listed in each section are an incomplete list, covering only commits focused on that function or its tests. For the complete list, please reference the pull request or the bothTSP GitHub branch on my fork of NetworkX.
My contributions to NetworkX this summer consist predominantly of the following functions and classes, each of which I will discuss in their own sections of this blog post. Functions and classes which are front-facing are also linked to the developer documentation for NetworkX in the list below and for their section headers.
SpanningTreeIterator
ArborescenceIterator
held_karp_ascent
spanning_tree_distribution
sample_spanning_tree
asadpour_atsp
These functions have also been unit tested, and those tests will be integrated into NetworkX once the pull request is merged.
The following papers are where all of these algorithms originate from, and they were of course instrumental in the completion of this project.
[1] A. Asadpour, M. X. Goemans, A. Madry, S. O. Gharan, and A. Saberi, An O(log n / log log n)-approximation algorithm for the asymmetric traveling salesman problem, SODA ’10, Society for Industrial and Applied Mathematics, 2010, pp. 379-389. https://dl.acm.org/doi/abs/10.5555/1873601.1873633
[2] J. Edmonds, Optimum branchings, Journal of Research of the National Bureau of Standards, 1967, Vol. 71B, pp. 233-240. https://archive.org/details/jresv71Bn4p233
[3] M. Held, R. M. Karp, The traveling-salesman problem and minimum spanning trees, Operations Research, 1970-11-01, Vol. 18 (6), pp. 1138-1162. https://www.jstor.org/stable/169411
[4] G. K. Janssens, K. Sörensen, An algorithm to generate all spanning trees in order of increasing cost, Pesquisa Operacional, 2005-08, Vol. 25 (2), pp. 219-229. https://www.scielo.br/j/pope/a/XHswBwRwJyrfL88dmMwYNWp/?lang=en
[5] V. G. Kulkarni, Generating random combinatorial objects, Journal of Algorithms, 11 (1990), pp. 185-207.
SpanningTreeIterator
The SpanningTreeIterator was the first contribution I completed as part of my GSoC project.
This class takes a graph and returns every spanning tree in it in order of increasing cost, which makes it a direct implementation of [4].
The interesting thing about this iterator is that it is not used as part of the Asadpour algorithm, but served as an intermediate step so that I could develop the ArborescenceIterator, which is required for the Held-Karp relaxation.
It works by partitioning the edges of the graph as included, excluded or open, and then finding the minimum spanning tree which respects the partition data on the graph edges.
In order to get this to work, I created a new minimum spanning tree function called kruskal_mst_edges_partition which does exactly that.
To prevent redundancy, all Kruskal minimum spanning tree calls now go through this function (the original kruskal_mst_edges function is now just a wrapper around the partitioned version).
Once a spanning tree is returned from the iterator, its partition is split so that the union of the newly generated partitions covers every spanning tree in the original partition except the returned minimum spanning tree.
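The partition-respecting Kruskal idea can be sketched like this (a simplified toy, not the NetworkX kruskal_mst_edges_partition code; it assumes the forced edges are feasible):

```python
# Toy sketch: Kruskal's algorithm over a partition of the edges into
# "in" (forced), "ex" (forbidden) and open (considered normally).

def find(parent, x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path compression
        x = parent[x]
    return x

def kruskal_partition(nodes, edges, partition):
    """edges: list of (weight, u, v); partition: dict (u, v) -> 'in'/'ex'."""
    parent = {n: n for n in nodes}

    def union(u, v):
        ru, rv = find(parent, u), find(parent, v)
        if ru == rv:
            return False
        parent[ru] = rv
        return True

    tree = []
    # Forced edges go in first, regardless of weight.
    for w, u, v in edges:
        if partition.get((u, v)) == "in":
            union(u, v)
            tree.append((u, v, w))
    # Open edges are then taken greedily by weight, skipping excluded ones.
    for w, u, v in sorted(edges):
        if partition.get((u, v)) in ("in", "ex"):
            continue
        if union(u, v):
            tree.append((u, v, w))
    return tree

nodes = [0, 1, 2, 3]
edges = [(1, 0, 1), (2, 1, 2), (3, 2, 3), (4, 0, 2), (5, 1, 3)]
tree = kruskal_partition(nodes, edges, {(0, 2): "in", (0, 1): "ex"})
print(tree)  # → [(0, 2, 4), (1, 2, 2), (2, 3, 3)]
```

Note how the heavy edge (0, 2) is kept because it is forced, and the cheapest edge (0, 1) never appears because it is excluded.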
As I mentioned earlier, the SpanningTreeIterator is not directly used in my GSoC project, but I still decided to implement it so that I could understand the partition process and directly use the examples from [4] before moving on to the ArborescenceIterator.
I’m sure this class will be useful to other users of NetworkX, and it provided a strong foundation to build the ArborescenceIterator on.
Blog Posts about SpanningTreeIterator
5 Jun 2021 - Finding All Minimum Arborescences
10 Jun 2021 - Implementing The Iterators
Commits about SpanningTreeIterator
Now, at the beginning of this project, my commit messages were not very good… I had some problems with merge conflicts after I accidentally committed to the wrong branch, and this was the first time I’d used a pre-commit hook.
I have not changed the commit messages here, so that you may be amused by my thoroughly unhelpful messages, but I did annotate them to provide a more accurate description of each commit.
Testing - Rewrote Kruskal’s algorithm to respect partitions and tested that while stubbing the iterators in a separate file
I’m not entirely sure how the commit hook works… - Added test cases and finalized implementation of Spanning Tree Iterator in the incorrect file
Moved iterators into the correct files to maintain proper codebase visibility - Realized that the iterators need to be in mst.py
and branchings.py
respectively to keep private functions hidden
Documentation update for the iterators - No explanation needed
Update mst.py to accept suggestion - Accepted doc string edit from code review
Review suggestions from dshult - Implemented code review suggestions from one of my mentors
Cleaned code, merged functions if possible and opened partition functionality to all
Implement suggestions from boothby
ArborescenceIterator
The ArborescenceIterator is a modified version of the algorithm discussed in [4] so that it iterates over spanning arborescences.
This iterator was a bit more difficult to implement, but that is due to how the minimum spanning arborescence algorithm is structured, not because the partition scheme is inapplicable to directed graphs.
In fact, the partition scheme is identical to the undirected SpanningTreeIterator’s, but Edmonds’ algorithm is more complex, and there are several edge cases about how nodes can be contracted and what it means to respect the partition data.
In order to fully understand the NetworkX implementation, I had to read the original Edmonds paper, [2].
The most notable change was that when the iterator writes the next partition onto the edges of the graph, just before Edmonds’ algorithm is executed, if any incoming edge of a vertex is marked as included, all of the others are marked as excluded.
This is implicit in the SpanningTreeIterator, but needed to be done explicitly here so that if the vertex in question was merged during Edmonds’ algorithm, we could not choose two incoming edges to the same vertex once the merging was reversed.
As a final note, the ArborescenceIterator has one more initial parameter than the SpanningTreeIterator: the ability to give it an initial partition and iterate over all spanning arborescences with cost greater than that of the initial partition.
This was used as part of the branch and bound method, but is no longer a part of my Asadpour algorithm implementation.
Blog Posts about ArborescenceIterator
5 Jun 2021 - Finding All Minimum Arborescences
10 Jun 2021 - Implementing The Iterators
Commits about ArborescenceIterator
My commits listed here are still annotated and much of the work was done at the same time.
Testing - Rewrote Kruskal’s algorithm to respect partitions and tested that while stubbing the iterators in a separate file
Moved iterators into the correct files to maintain proper codebase visibility - Realized that the iterators need to be in mst.py
and branchings.py
respectively to keep private functions hidden
Including Black reformat - Modified Edmonds’ algorithm to respect partitions
Modified the ArborescenceIterator to accept init partition - No explanation needed
Documentation update for the iterators - No explanation needed
Update branchings.py accept doc string edit - No explanation needed
Review suggestions from dshult - Implemented code review suggestions from one of my mentors
Cleaned code, merged functions if possible and opened partition functionality to all
Implemented review suggestions from rossbar
Implement suggestions from boothby
held_karp_ascent
The Held Karp relaxation was the most difficult part of my GSoC project and the part that I was the most worried about going into this May.
My plans on how to solve the relaxation evolved over the course of the summer as well, finally culminating in held_karp_ascent.
In my GSoC proposal, I discuss using scipy to solve the relaxation, but the Held Karp relaxation is a semi-infinite linear program (finite, but with exponentially many constraints), so I would quickly surpass the capabilities of virtually any computer the code would be run on.
Fortunately, I realized that while I was still writing my proposal and was able to change it.
Next, I wanted to use the ellipsoid algorithm because that is the suggested method in the Asadpour paper [1].
As it happens, the ellipsoid algorithm is not implemented in numpy or scipy, and after discussing the practicality of implementing the algorithm as part of this project, we decided that a robust ellipsoid solver was a GSoC project unto itself and beyond the scope of the Asadpour algorithm.
Another method was needed, and was found.
In the original paper by Held and Karp [3], they present three different algorithms for solving the relaxation: the column-generation technique, the ascent method and the branch and bound method.
After reading the paper and comparing all of the methods, I decided that the branch and bound method was the best in terms of performance and wanted to implement that one.
The branch and bound method is a modified version of the ascent method, so I started by implementing the ascent method, then the branch and bound around it. This had the extra benefit of allowing me to compare the two and determine which is actually better.
Implementing the ascent method proved difficult. There were a number of subtle bugs in finding the minimum 1-arborescences and in finding the value of epsilon, caused by not recognizing all of the valid edge substitutions in the graph. More information about these problems can be found in my post titled Understanding the Ascent Method. Even after this the ascent method was not working properly, but I decided to move on to the branch and bound method in hopes of learning more about the process so that I could fix the ascent method.
That is exactly what happened! While debugging the branch and bound method, I realized that my function for finding the set of minimum 1-arborescences would stop searching too soon and could miss some minimum 1-arborescences. Once I fixed that bug, both the ascent method and the branch and bound method started to produce correct results.
But which one would be used in the final project?
Well, that came down to which output was more compatible with the rest of the Asadpour algorithm. The ascent method can find a fractional solution in which edges are not totally in or out of the solution, while the branch and bound method takes the time to ensure that the solution is integral. As it happens, the Asadpour algorithm expects a fractional solution to the Held Karp relaxation, so in the end the ascent method won out and the branch and bound method was removed from the project.
All of this is detailed in the (many) blog posts I wrote on this topic, which are listed below.
Blog posts about the Held Karp relaxation
My first two posts were about the scipy solution and the ellipsoid algorithm.
11 Apr 2021 - Held Karp Relaxation
8 May 2021 - Held Karp Separation Oracle
This next post discusses the merits of each algorithm presented in the original Held and Karp paper [3].
3 Jun 2021 - A Closer Look At Held Karp
And finally, the last three Held Karp related posts are about the debugging of the algorithms I did implement.
22 Jun 2021 - Understanding The Ascent Method
28 Jun 2021 - Implementing The Held Karp Relaxation
7 Jul 2021 - Finalizing Held Karp
Commits about the Held Karp relaxation
Annotations only provided if needed.
Grabbing black reformats - Initial Ascent method implementation
Working on debugging ascent method plus black reformats
Ascent method terminating, but at non-optimal solution
minor edits - Removed some debug statements
Fixed termination condition, still given non-optimal result
Minor bugfix, still non-optimal result - Ensured reported answer is the cycle if multiple options
Fixed subtle bug in find_epsilon() - Fixed the improper substitute detection bug
Cleaned code and tried something which didn’t work
Black formats - Initial branch and bound implementation
Branch and bound returning optimal solution
black formatting changes - Split ascent and branch and bound methods into different functions
Performance tweaks and testing fractional answers
Asadpour output for ascent method
Removed branch and bound method. One unit test misbehaving
Added asymmetric fractional test for the ascent method
Removed printn statements and tweaked final test to be more asymmetric
Changed HK to only report on the support of the answer
spanning_tree_distribution
Once we have the support of the Held Karp relaxation, we calculate edge weights $\gamma$ for the support so that the probability of any tree being sampled is proportional to the product of $e^{\gamma_e}$ across its edges. This is called a maximum entropy distribution in the Asadpour paper, and the procedure for computing it is given in [1] on page 386.
- Set $\gamma = \vec{0}$.
- While there exists an edge $e$ with $q_e(\gamma) > (1 + \epsilon)z_e$:
- Compute $\delta$ such that if we define $\gamma'$ as $\gamma_e' = \gamma_e - \delta$ and $\gamma_f' = \gamma_f$ for all $f \in E \setminus \{e\}$, then $q_e(\gamma') = (1 + \epsilon / 2)z_e$
- Set $\gamma \leftarrow \gamma'$
- Output $\tilde{\gamma} := \gamma$.
Where $q_e(\gamma)$ is the probability that any given edge $e$ will be in a sampled spanning tree chosen with probability proportional to $\exp(\gamma(T))$. $\delta$ is also given as
$$ \delta = \frac{q_e(\gamma)(1-(1+\epsilon/2)z_e)}{(1-q_e(\gamma))(1+\epsilon/2)z_e} $$
so the Asadpour paper did almost all of the heavy lifting for this function. However, it was not very clear on how to calculate $q_e(\gamma)$, other than that Kirchhoff’s Matrix Tree Theorem can be used.
My original method for calculating $q_e(\gamma)$ was to apply Kirchhoff’s Theorem to the original Laplacian matrix and to the Laplacian produced once the edge $e$ is contracted in the graph. Testing quickly showed that once the edge is contracted, its weight cannot affect the value of the Laplacian, and thus after subtracting $\delta$ the probability of that edge would increase rather than decrease. Multiplying my original value of $q_e(\gamma)$ by $\exp(\gamma_e)$ proved to be the solution here, for reasons extensively discussed in my blog post *The Entropy Distribution*, and in particular its “Update! (28 July 2021)” section.
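Since the paper is vague on this point, a brute-force cross-check is useful: on a graph small enough to enumerate, $q_e(\gamma)$ can be computed directly from its definition. A toy sketch (illustrative only, not the NetworkX implementation):

```python
from itertools import combinations
import math

# Brute-force q_e(gamma): the probability that edge e appears in a spanning
# tree drawn with probability proportional to prod(exp(gamma_f)) over its
# edges. Only viable on tiny graphs, but perfect for sanity-checking.

def spanning_trees(nodes, edges):
    n = len(nodes)
    for subset in combinations(edges, n - 1):
        parent = {v: v for v in nodes}

        def find(x):
            while parent[x] != x:
                x = parent[x]
            return x

        acyclic = True
        for u, v in subset:
            ru, rv = find(u), find(v)
            if ru == rv:
                acyclic = False
                break
            parent[ru] = rv
        if acyclic:
            yield subset

def q_e(nodes, edges, gamma, e):
    total = with_e = 0.0
    for tree in spanning_trees(nodes, edges):
        w = math.exp(sum(gamma[f] for f in tree))
        total += w
        if e in tree:
            with_e += w
    return with_e / total

nodes = [0, 1, 2]
edges = [(0, 1), (1, 2), (0, 2)]
gamma = {e: 0.0 for e in edges}
# With gamma = 0 all 3 spanning trees of the triangle are equally likely,
# and each edge lies in exactly 2 of them.
print(q_e(nodes, edges, gamma, (0, 1)))  # → 0.6666...
```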
Blog posts about spanning_tree_distribution
13 Jul 2021 - Entropy Distribution Setup
20 Jul 2021 - The Entropy Distribution
Commits about spanning_tree_distribution
Draft of spanning_tree_distribution
Changed HK to only report on the support of the answer - Needing to limit $\gamma$ to only the support of the Held Karp relaxation is what caused this change
Fixed contraction bug by changing to MultiGraph. Problem with prob > 1 - Because the probability is only proportional to the product of the edge weights, this was not actually a problem
Black reformats - Rewrote the test and cleaned the code
Fixed pypi test error - The pypi tests do not have numpy or scipy, and I forgot to flag the test to be skipped if they are not available
Further testing of dist fix - Fixed function to multiply $q_e(\gamma)$ by $\exp(\gamma_e)$ and implemented exception if $\delta$ ever misbehaves
Can sample spanning trees - Streamlined finding $q_e(\gamma)$ using new helper function
Review suggestions from dshult - Implemented code review suggestions from one of my mentors
Implement suggestions from boothby
sample_spanning_tree
What good is a spanning tree distribution if we can’t sample from it?
While the Asadpour paper [1] provides a rough outline of the sampling process, the bulk of their methodology comes from the Kulkarni paper, Generating random combinatorial objects [5]. That paper had a much more detailed explanation and even this pseudo code from page 202.
$U = \emptyset$, $V = E$
Do $i = 1$ to $N$;
$\qquad$Let $a = n(G(U, V))$
$\qquad\qquad a' = n(G(U \cup \{i\}, V))$
$\qquad$Generate $Z \sim U[0, 1]$
$\qquad$If $Z \leq \alpha_i \times \left(a' / a\right)$
$\qquad\qquad$then $U = U \cup \{i\}$,
$\qquad\qquad$else $V = V - \{i\}$
$\qquad$end.
Stop. $U$ is the required spanning tree.
The only real difficulty here was tracking how the nodes were being contracted.
My first attempt was a mess of if statements and the like, but switching to a merge-find (disjoint set) data structure proved to be a wise decision.
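A merge-find structure in its simplest form looks like this (a toy version of the idea, not the one NetworkX uses):

```python
# Minimal merge-find (disjoint set) sketch for tracking contracted nodes:
# union() contracts two components, find() reports which component a node
# currently belongs to.

class DisjointSet:
    def __init__(self, items):
        self.parent = {x: x for x in items}

    def find(self, x):
        # Locate the root, then compress the path for fast repeat lookups.
        root = x
        while self.parent[root] != root:
            root = self.parent[root]
        while self.parent[x] != root:
            self.parent[x], x = root, self.parent[x]
        return root

    def union(self, a, b):
        # "Contract" the components containing a and b into one.
        self.parent[self.find(a)] = self.find(b)

ds = DisjointSet(range(5))
ds.union(0, 1)  # contract nodes 0 and 1
ds.union(1, 2)
print(ds.find(0) == ds.find(2))  # → True
print(ds.find(3) == ds.find(4))  # → False
```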
Of course, it is one thing to be able to sample a spanning tree and another entirely to know if the sampling technique matches the expected distribution.
My first iteration of the test for sample_spanning_tree just sampled a large number of trees (50,000) and then printed the percent error from the normalized distribution of spanning trees.
With a sample size of 50,000 all of the errors were under 10%, but I still wanted to find a better test.
From my AP Statistics class in high school I remembered the $\chi^2$ (chi-squared) test and realized that it would be perfect here; scipy even has the ability to conduct one.
By converting to a chi-squared test I was able to reduce the sample size down to 1200 (near the minimum required sample size for a valid chi-squared test) and use a proper hypothesis test at the $\alpha = 0.01$ significance level.
Unfortunately, the test would still fail 1% of the time until I added the @py_random_state decorator to sample_spanning_tree, after which the test can pass in a Random object to produce repeatable results.
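The statistic itself is simple to compute; here is a sketch with made-up counts (scipy.stats.chisquare computes the same statistic plus a p-value):

```python
# Chi-squared goodness-of-fit statistic: sum of (observed - expected)^2 / expected.

def chi_squared(observed, expected):
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Hypothetical: 1200 samples over 4 equally likely spanning trees.
observed = [290, 310, 305, 295]
expected = [300, 300, 300, 300]
stat = chi_squared(observed, expected)
print(round(stat, 4))  # → 0.8333
# With 3 degrees of freedom, the alpha = 0.01 critical value is about 11.34;
# a statistic below it means we fail to reject the null hypothesis that the
# sampler matches the target distribution.
```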
Blog posts about sample_spanning_tree
21 Jul 2021 - Preliminaries For Sampling A Spanning Tree
28 Jul 2021 - Sampling A Spanning Tree
Commits about sample_spanning_tree
Developing test for sampling spanning tree
Changed sample_spanning_tree test to Chi squared test
Adding test cases - Implemented the @py_random_state decorator
Review suggestions from dshult - Implemented code review suggestions from one of my mentors
asadpour_atsp
This function was the last piece of the puzzle, connecting all of the others together and producing the final result!
Implementation of this function was actually rather smooth.
The only technical difficulty I had was reading the support of the flow_dict, and the theoretical difficulty was adapting the min_cost_flow function to solve the minimum circulation problem.
Oh, and the fact that if the flow along an edge is greater than 1, I need to add parallel edges to the graph so that it is still Eulerian.
A brief overview of the whole algorithm is given below:
Blog posts about asadpour_atsp
29 Jul 2021 - Looking At The Big Picture
10 Aug 2021 - Completing The Asadpour Algorithm
Commits about asadpour_atsp
untested implementation of asadpour_tsp
Fixed runtime errors in asadpour_tsp - The general traveling salesman problem function assumed graphs were undirected, which does not work with an ATSP algorithm
black reformats - Fixed parallel edges from flow support bug
Fixed rounding error with tests
Review suggestions from dshult - Implemented code review suggestions from one of my mentors
Implemented review suggestions from rossbar
Overall, I really enjoyed this Summer of Code. I was able to branch out and continue to learn Python and more about graphs and graph algorithms, which is an area of interest for me.
Assuming that I have any amount of free time this coming fall semester, I’d love to stay involved with NetworkX. In fact, there are already some things that I have in mind even though my current code works as is.
Move sample_spanning_tree to mst.py and rename it to random_spanning_tree.
The ability to sample random spanning trees is not yet a part of the greater NetworkX library and could be useful to others.
One of my mentors mentioned it being relevant to Steiner trees, and if I can help other developers and users out, I will.
Adapt sample_spanning_tree so that it can use both additive and multiplicative weight functions.
The Asadpour algorithm only needs the multiplicative weight, but the Kulkarni paper [5] does talk about using an additive weight function, which may be more useful to other NetworkX users.
Move my Kirchhoff’s Matrix Tree Theorem helper function to laplacian_matrix.py so that other NetworkX users can access it.
Investigate the following article about the Held Karp relaxation. While I have no definite evidence for this one, I do believe that the Held Karp relaxation is the slowest part of my implementation of the Asadpour algorithm and thus is the best place for improving it. The ascent method I am using comes from the original Held and Karp paper [3], but they did release a part II which may have better algorithms in it. The citation is given below.
M. Held, R.M. Karp, The traveling-salesman problem and minimum spanning trees: Part II. Mathematical Programming, 1971, 1(1), p. 6–25. https://doi.org/10.1007/BF01584070
Refactor the Edmonds class in branchings.py.
That class is the implementation of Edmonds’ branching algorithm, but uses an iterative approach rather than the recursive one discussed in Edmonds’ paper [2].
I also agreed to work with another contributor, lkora, to help rework this class and possibly add a minimum_maximal_branching function to find the minimum branching which still connects as many nodes as possible.
This would be analogous to a spanning forest in an undirected graph.
At the moment, neither of us have had time to start such work.
For more information please reference issue #4836.
While there are areas of this problem which I can improve upon, it is important for me to remember that this project was still a complete success. NetworkX now has an algorithm to approximate the traveling salesman problem in asymmetric or directed graphs.
My implementation of asadpour_atsp is now working!
Recall that my pseudo code for this function from my last post was
def asadpour_tsp
    Input: A complete graph G with weight being the attribute key for the edge weights.
    Output: A list of edges which form the approximate ATSP solution.

    z_star = held_karp(G)
    # test to see if z_star is a graph or dict
    if type(z_star) is nx.DiGraph
        return z_star.edges

    z_support = nx.MultiGraph()
    for u, v in z_star
        if (u, v) not in z_support.edges
            edge_weight = min(G[u][v][weight], G[v][u][weight])
            z_support.add_edge(u, v, weight=edge_weight)

    gamma = spanning_tree_distribution(z_support, z_star)

    for u, v in z_support.edges
        z_support[u][v][lambda] = exp(gamma[(u, v)])

    for _ in range 1 to 2 ceil(log(n))
        sampled_tree = sample_spanning_tree(G)
        sampled_tree_weight = sampled_tree.size()
        if sampled_tree_weight < minimum_sampled_tree_weight
            minimum_sampled_tree = sampled_tree.copy()
            minimum_sampled_tree_weight = sampled_tree_weight

    t_star = nx.DiGraph
    for u, v, d in minimum_sampled_tree.edges(data=weight)
        if d == G[u][v][weight]
            t_star.add_edge(u, v, weight=d)
        else
            t_star.add_edge(v, u, weight=d)

    for n in t_star
        node_demands[n] = t_star.out_degree(n) - t_star.in_degree(n)
    nx.set_node_attributes(G, node_demands)

    flow_dict = nx.min_cost_flow(G)
    for u, v in flow_dict
        if (u, v) not in t_star.edges and flow_dict[u, v] > 0
            t_star.add_edge(u, v)

    eulerian_circuit = nx.eulerian_circuit(t_star)
    return _shortcutting(eulerian_circuit)
And this was more or less correct. A few issues were present, as they always were going to be.
First, my largest issue came from part of a word being in parentheses in the Asadpour paper on page 385.
This integral circulation $f^*$ corresponds to a directed (multi)graph $H$ which contains $\vec{T}^*$.
Basically, if the minimum flow is ever larger than 1 along an edge, I need to add that many parallel edges in order to ensure that everything is still Eulerian. This became a problem quickly while developing my test cases, as shown in the below example.
As you can see, for the incorrect circulation, vertices 2 and 3 are not Eulerian, as their in- and out-degrees do not match.
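The fix can be sketched in plain Python (a hypothetical toy, not the actual implementation): duplicate each arc according to its integral flow, so every vertex keeps in-degree equal to out-degree.

```python
from collections import Counter

# Toy sketch: a circulation's flow values tell us how many parallel copies
# of each arc the multigraph needs to stay Eulerian.

def add_parallel_edges(arcs, flow):
    """arcs: list of (u, v); flow: dict (u, v) -> integral flow value."""
    multi = []
    for u, v in arcs:
        multi.extend([(u, v)] * flow.get((u, v), 1))
    return multi

def is_eulerian(multi_arcs):
    indeg, outdeg = Counter(), Counter()
    for u, v in multi_arcs:
        outdeg[u] += 1
        indeg[v] += 1
    return all(indeg[n] == outdeg[n] for n in set(indeg) | set(outdeg))

# A circulation that pushes 2 units along (0, 1):
arcs = [(0, 1), (1, 2), (1, 3), (2, 0), (3, 0)]
flow = {(0, 1): 2}
print(is_eulerian(arcs))                            # → False
print(is_eulerian(add_parallel_edges(arcs, flow)))  # → True
```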
All of the others were just minor points where the pseudo code didn’t directly translate into Python (because, after all, it isn’t Python).
The first thing I did once asadpour_atsp was working was to take the fractional, symmetric Held Karp relaxation test graph and run it through the general traveling_salesman_problem function.
Since there are random numbers involved here, the results were always within the $O(\log n / \log \log n)$ approximation factor but differed from run to run.
Three examples are shown below.
The first thing we want to check is the approximation ratio.
We know that the minimum cost output of the traveling_salesman_problem function is 304 (this is actually lower than the optimal tour in the undirected version; more on this later).
Next we need to know what our maximum approximation factor is.
Now, the Asadpour algorithm is $O(\log n / \log \log n)$ which for our six vertex graph would be $\ln(6) / \ln(\ln(6)) \approx 3.0723$.
However, on page 386 they give the coefficients of the approximation as $(2 + 8 \log n / \log \log n)$ which would be $2 + 8 \times \ln(6) / \ln(\ln(6)) \approx 26.5784$.
(Remember that all $\log$’s in the Asadpour paper refer to the natural logarithm.)
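As a quick sanity check of those two numbers (all logs natural):

```python
# verifying the two bounds quoted above for n = 6
import math

n = 6
ratio = math.log(n) / math.log(math.log(n))  # log n / log log n
paper_bound = 2 + 8 * ratio                  # the (2 + 8 log n / log log n) form

print(round(ratio, 4))        # approximately 3.0723
print(round(paper_bound, 4))  # approximately 26.5784
```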
All of our examples are well below even the lower limit.
For example 1:
$$ \begin{array}{r l} \text{actual}: & 504 \\ \text{expected}: & 304 \\ \text{approx. factor}: & \frac{504}{304} \approx 1.6578 < 3.0723 \end{array} $$
Example 2:
$$ \begin{array}{r l} \text{actual}: & 404 \\ \text{expected}: & 304 \\ \text{approx. factor}: & \frac{404}{304} \approx 1.3289 < 3.0723 \end{array} $$
Example 3:
$$ \begin{array}{r l} \text{actual}: & 304 \\ \text{expected}: & 304 \\ \text{approx. factor}: & \frac{304}{304} = 1.0000 < 3.0723 \end{array} $$
At this point, you’ve probably noticed that the examples given are, strictly speaking, not Hamiltonian cycles: they visit vertices multiple times.
This is because the graph we have is not complete.
The Asadpour algorithm only works on complete graphs, so the traveling_salesman_problem
function finds the shortest cost path between every pair of vertices and inserts the missing edges.
In fact, if the asadpour_atsp
function is given an incomplete graph, it will raise an exception.
Take example three, which has only one repeated vertex, 5.
Behind the scenes, the graph is complete and the solution may contain the dashed edge in the below image.
But that edge is not in the original graph, so during the post-processing done by the traveling_salesman_problem
function, the red edges are inserted instead of the dashed edge.
Before I could write any tests, I needed to ensure that the results were consistent from execution to execution.
At the time, this was not the case since there were random numbers being generated in order to sample the spanning trees.
So I had to learn how to use the @py_random_state
decorator.
When this decorator is added to the top of a function, we pass it either the position of the argument in the function signature or the name of the keyword for that argument. It then takes that argument and configures a python Random object based on the input parameter.
Depending on what is passed in:

- None: use a new Random object.
- An int: use a new Random object seeded with that value.
- A Random object: use that object as is.

So I changed the function signature of sample_spanning_tree to have random=None at the end.
For most use cases, the default value will not be changed and the results will be different every time the method is called, but if we give it an int
, the same tree will be sampled every time.
But, for my tests I can give it a seed to create repeatable behaviour.
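Roughly, the seeding behaviour can be demonstrated like this; the decorated function pick is a stand-in for illustration, not sample_spanning_tree itself.

```python
# sketch of @py_random_state behaviour; `pick` is a made-up example function
import random

from networkx.utils import py_random_state

@py_random_state(1)  # the argument at index 1 is the seed parameter
def pick(values, seed=None):
    return seed.choice(values)

# an int seed builds a fresh Random(42) on every call: repeatable results
assert pick(range(100), seed=42) == pick(range(100), seed=42)

# a shared Random object advances between calls, so results can differ,
# which is what we want when sampling many spanning trees in one test
rng = random.Random(42)
samples = [pick(range(100), seed=rng) for _ in range(5)]
```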
Since the sample_spanning_tree function is not visible outside of the traveling_salesman file, I also had to create a pass-through parameter for asadpour_atsp so that my seed could have any effect.
Once this was done, I modified the test for sample_spanning_tree
so that it would not have a 1 in 100 chance of spontaneously failing.
At first I just passed it an int
, but that forced every tree sampled to be the same (since the edges were shuffled the same and sampled from the same sequence of numbers) and the test failed.
So I tweaked it to use a Random
object from the random package and this worked well.
From here, I wrap the complete asadpour_atsp
parameters I want in another function fixed_asadpour
like this:
def fixed_asadpour(G, weight):
    return nx_app.asadpour_atsp(G, weight, 56)

path = nx_app.traveling_salesman_problem(
    G, weight="weight", cycle=False, method=fixed_asadpour
)
I tested using both traveling_salesman_problem
and asadpour_atsp
.
The tests included:
There is even a bonus feature!
The asadpour_atsp
function accepts a fourth argument, source
!
Since both of the return methods use eulerian_circuit
and the _shortcutting
functions, I can pass a source
vertex to the circuit function and ensure that the returned path starts and returns to the desired vertex.
Access it by wrapping the method; just be sure that the source vertex is in the graph to avoid an exception.
def fixed_asadpour(G, weight):
    return nx_app.asadpour_atsp(G, weight, source=0)

path = nx_app.traveling_salesman_problem(
    G, weight="weight", cycle=False, method=fixed_asadpour
)
A. Asadpour, M. X. Goemans, A. Madry, S. O. Gharan, and A. Saberi, An O(log n / log log n)-approximation algorithm for the asymmetric traveling salesman problem, SODA ’10, Society for Industrial and Applied Mathematics, 2010, https://dl.acm.org/doi/abs/10.5555/1873601.1873633.
“Matplotlib, I want 多个汉字 in between my text.”
Let’s say you asked Matplotlib to render a plot with some label containing 多个汉字 (multiple Chinese characters) in between your English text.
Or conversely, let’s say you use a Chinese font with Matplotlib, but you had English text in between (which is quite common).
Assumption: the Chinese font doesn’t have those English glyphs, and vice versa
With this short writeup, I’ll talk about what a migration from a font-first to a text-first approach in Matplotlib looks like, which ideally solves the above problem.
Logically, the very first step to solving this would be to ask whether you have multiple fonts, right?
Matplotlib doesn’t ship CJK (Chinese, Japanese, Korean) fonts, which would ideally contain these Chinese glyphs. It does try to cover most ground with the default font it ships with, however.
So if you don’t have a font to render your Chinese characters, go ahead and install one! Matplotlib will find your installed fonts (after rebuilding the cache, that is).
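One way to check which families Matplotlib has picked up after installing a font (the CJK family name below is only an example):

```python
# list the font families Matplotlib's font manager has discovered
from matplotlib import font_manager

available = sorted({f.name for f in font_manager.fontManager.ttflist})
print("Noto Sans CJK SC" in available)  # True once such a CJK font is installed
print(available[:5])                    # a peek at the detected families
```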
This is where things get interesting, and what my previous writeup was all about..
Parsing the whole family to get multiple fonts for given font properties
To give you an idea about how things used to work for Matplotlib:
FT2Font is a matplotlib-to-font module, which provides high-level Python API to interact with a single font’s operations like read/draw/extract/etc.
Being written in C++, the module needs wrappers around it to be converted into a Python extension using Python’s C-API.
It allows us to use C++ functions directly from Python!
So wherever you see a use of font within the library (by library I mean the readable Python codebase XD), you could have derived that:
FT2Font === SingleFont
Things are a bit different now, however.
FT2Font is basically itself a wrapper around a library called FreeType, which is a freely available software library to render fonts.
In my initial proposal.. while looking around how FT2Font is structured, I figured:
Oh, looks like all we need are Faces!
If you don’t know what faces/glyphs/ligatures are, head over to why Text Hates You. I can guarantee you’ll definitely enjoy some real life examples of why text rendering is hard. 🥲
Anyway, if you already know what Faces are, it might strike you:
If we already have all the faces we need from multiple fonts (let’s say we created a child of FT2Font.. which only tracks the faces for its families), we should be able to render everything from that parent FT2Font right?
As I later figured out while finding segfaults in implementing this design:
Each FT2Font is linked to a single FT_Library object!
If you tried to load the face/glyph/character (basically anything) from a different FT2Font object, you’d run into serious segfaults (because one object linked to an FT_Library can’t really access another object which has its own FT_Library).
// face is linked to FT2Font; which is
// linked to a single FT_Library object
FT_Face face = this->get_face();
FT_Get_Glyph(face->glyph, &placeholder); // works like a charm
// somehow get another FT2Font's face
FT_Face family_face = this->get_family_member()->get_face();
FT_Get_Glyph(family_face->glyph, &placeholder); // segfaults!
Realizing this took a good amount of time! After this I quickly came up with a recursive approach, wherein we:

- keep a std::vector<FT2Font *> fallback_list of the family’s fonts, and
- iterate through fallback_list until a font that can provide the glyph is found.

A quick overhaul of the above piece of code:
bool ft_get_glyph(FT_Glyph &placeholder) {
    // FT_Get_Glyph returns a nonzero FT_Error on failure
    FT_Error not_found = FT_Get_Glyph(this->get_face()->glyph, &placeholder);
    return !not_found;
}

// within driver code
for (size_t i = 0; i < fallback_list.size(); i++) {
    // iterate through all FT2Font objects until one has the glyph
    bool was_found = fallback_list[i]->ft_get_glyph(placeholder);
    if (was_found) break;
}
With the idea surrounding this implementation, the Agg backend is able to render a document (either through GUI, or a PNG) with multiple fonts!
I’ve spent days in the Python C-API’s argument docs, and it’s hard to get what you need at first, ngl.
But, with the help of some amazing people in the GSoC community (@srijan-paul, @atharvaraykar) and amazing mentors, blockers begone!
Oh no. XD
Things work just fine for the Agg backend, but to generate a PDF/PS/SVG with multiple fonts is another story altogether! I think I’ll save that for later.
Well, we’re finally at the point in this GSoC project where the end is glimmering on the horizon. I have completed the Held Karp relaxation, generating a spanning tree distribution and now sampling from that distribution. That means that it is time to start thinking about how to link these separate components into one algorithm.
Recall that from the Asadpour paper the overview of the algorithm is
Algorithm 1 An $O(\log n / \log \log n)$-approximation algorithm for the ATSP
Input: A set $V$ consisting of $n$ points and a cost function $c\ :\ V \times V \rightarrow \mathbb{R}^+$ satisfying the triangle inequality.
Output: $O(\log n / \log \log n)$-approximation of the asymmetric traveling salesman problem instance described by $V$ and $c$.
- Solve the Held-Karp LP relaxation of the ATSP instance to get an optimum extreme point solution $x^*$. Define $z^*$ as in (5), making it a symmetrized and scaled down version of $x^*$. Vector $z^*$ can be viewed as a point in the spanning tree polytope of the undirected graph on the support of $x^*$ that one obtains after disregarding the directions of arcs (See Section 3.)
- Let $E$ be the support graph of $z^*$ when the direction of the arcs are disregarded. Find weights $\{\tilde{\gamma}_e\}_{e \in E}$ such that the exponential distribution on the spanning trees, $\tilde{p}(T) \propto \exp(\sum_{e \in T} \tilde{\gamma}_e)$, (approximately) preserves the marginals imposed by $z^*$, i.e. for any edge $e \in E$, $\sum_{T \in \mathcal{T} : T \ni e} \tilde{p}(T) \leq (1 + \epsilon) z^*_e$, for a small enough value of $\epsilon$. (In this paper we show that $\epsilon = 0.2$ suffices for our purpose. See Sections 7 and 8 for a description of how to compute such a distribution.)
- Sample $2\lceil \log n \rceil$ spanning trees $T_1, \dots, T_{2\lceil \log n \rceil}$ from $\tilde{p}(.)$. For each of these trees, orient all its edges so as to minimize its cost with respect to our (asymmetric) cost function $c$. Let $T^*$ be the tree whose resulting cost is minimal among all of the sampled trees.
- Find a minimum cost integral circulation that contains the oriented tree $\vec{T}^*$. Shortcut this circulation to a tour and output it. (See Section 4.)
We are now firmly in the steps 3 and 4 area.
Going all the way back to my post on 24 May 2021 titled Networkx Function Stubs, the only function left is asadpour_tsp, the main function which needs to accomplish this entire algorithm.
But before we get to creating pseudo code for it there is still step 4 which needs a thorough examination.
Once we have sampled enough spanning trees from the graph and converted the minimum one into $\vec{T}^*$ we need to find the minimum cost integral circulation in the graph which contains $\vec{T}^*$.
While NetworkX has a minimum cost flow function, namely min_cost_flow, it is not suitable for the Asadpour algorithm out of the box.
The problem here is that we do not have node demands, we have edge demands.
However, after some reading and discussion with one of my mentors Dan, we can convert the current problem into one which can be solved using the min_cost_flow
function.
The problem that we are trying to solve is called the minimum cost circulation problem and the one which min_cost_flow
is able to solve is the, well, minimum cost flow problem.
As it happens, these are equivalent problems, so I can convert the minimum cost circulation into a minimum cost flow problem by transforming the minimum edge demands into node demands.
Recall that at this point we have a directed minimum sampled spanning tree $\vec{T}^*$ and that the flow through each of the edges in $\vec{T}^*$ needs to be at least one. From the perspective of a flow problem, $\vec{T}^*$ is moving some flow around the graph. However, in order to augment $\vec{T}^*$ into an Eulerian graph so that we can walk it, we need to counteract this flow so that the net flow for each node is 0 ($f(\delta^+(v)) = f(\delta^-(v))$ in the Asadpour paper).
So, we find the net flow of each node and then assign its demand to be the negative of that number so that the flow will balance at the node in question. If the total flow at any node $i$ is $\delta^+(i) - \delta^-(i)$ then the demand we assign to that node is $\delta^-(i) - \delta^+(i)$. Once we assign the demands to the nodes we can temporarily ignore the edge lower capacities to find the minimum flow.
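The conversion can be sketched end to end; the four-node instance, unit weights, and names below are all illustrative, not the real implementation.

```python
# Illustrative conversion of edge demands to node demands for min_cost_flow.
import networkx as nx

t_star = nx.DiGraph([(0, 1), (1, 2), (1, 3)])  # a made-up oriented tree

# a complete digraph standing in for the ATSP instance, all weights 1
G = nx.DiGraph()
G.add_weighted_edges_from(
    (u, v, 1) for u in range(4) for v in range(4) if u != v
)

# assign demands so the circulation cancels the tree's flow at every node
node_demands = {n: t_star.out_degree(n) - t_star.in_degree(n) for n in t_star}
nx.set_node_attributes(G, node_demands, "demand")
flow_dict = nx.min_cost_flow(G)

# add the support of the flow to the tree; the result is balanced
H = nx.MultiDiGraph(t_star)
for u in flow_dict:
    for v, f in flow_dict[u].items():
        if f > 0:
            H.add_edge(u, v)

assert all(H.in_degree(n) == H.out_degree(n) for n in H)
```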
For more information on the conversion process, please see [2].
After the minimum flow is found, we take the support of the flow and add it to $\vec{T}^*$ to create a multigraph $H$.
Now we know that $H$ is weakly connected (it contains $\vec{T^*}$) and that it is Eulerian because for every node the in-degree is equal to the out-degree.
A closed eulerian walk or eulerian circuit can be found in this graph with eulerian_circuit
.
Here is an example of this process on a simple graph. I suspect that the flow will not always be the back edges from the spanning tree and that the only reason that is the case here is due to the small number of vertices.
Finally, we take the Eulerian circuit and shortcut it. On the plus side, the shortcutting process is the same as in the Christofides algorithm, so it already exists as the _shortcutting helper function in the traveling salesman file.
This is really where it is critical that the triangle inequality holds so that the shortcutting cannot increase the cost of the circulation.
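In essence, shortcutting keeps only the first visit to each vertex; this stand-alone simplification is my own sketch, not the actual _shortcutting code.

```python
# a stand-alone simplification of shortcutting an Eulerian circuit
def shortcut(circuit):
    # keep the first visit to each vertex, then close the cycle
    path = []
    for u, v in circuit:
        if u not in path:
            path.append(u)
    path.append(path[0])
    return path

# an Eulerian circuit 0 -> 1 -> 2 -> 1 -> 0 revisits vertex 1;
# by the triangle inequality the direct edge 2 -> 0 is no more expensive
circuit = [(0, 1), (1, 2), (2, 1), (1, 0)]
assert shortcut(circuit) == [0, 1, 2, 0]
```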
Let’s start with the function signature.
def asadpour_tsp
Input: A complete graph G with weight being the attribute key for the edge weights.
Output: A list of edges which form the approximate ATSP solution.
This is exactly what we’d expect, take a complete graph $G$ satisfying the triangle inequality and return the edges in the approximate solution to the asymmetric traveling salesman problem.
Recall from my post Networkx Function Stubs that the primary traveling salesman function, traveling_salesman_problem, will ensure that we are given a complete graph that follows the triangle inequality by using all-pairs shortest path calculations, and will handle whether we are expected to return a true cycle or only a path.
The first step in the Asadpour algorithm is the Held Karp relaxation. I am planning on editing the flow of the algorithm here a bit. If the Held Karp relaxation finds an integer solution, then we know that is one of the optimal TSP routes so there is no point in continuing the algorithm: we can just return that as an optimal solution. However, if the Held Karp relaxation finds a fractional solution we will press on with the algorithm.
z_star = held_karp(G)

# test to see if z_star is a graph or dict
if isinstance(z_star, nx.DiGraph):
    return z_star.edges
Once we have the Held Karp solution, we create the undirected support of z_star
for the next step of creating the exponential distribution of spanning trees.
z_support = nx.MultiGraph()
for u, v in z_star:
    if (u, v) not in z_support.edges:
        edge_weight = min(G[u][v][weight], G[v][u][weight])
        z_support.add_edge(u, v, weight=edge_weight)
gamma = spanning_tree_distribution(z_support, z_star)
This completes steps 1 and 2 in the Asadpour overview at the top of this post. Next we sample $2 \lceil \log n \rceil$ spanning trees.
for u, v, k in z_support.edges(keys=True):
    z_support[u][v][k]["lambda"] = math.exp(gamma[(u, v)])

minimum_sampled_tree_weight = math.inf
for _ in range(2 * math.ceil(math.log(n))):
    sampled_tree = sample_spanning_tree(z_support)
    sampled_tree_weight = sampled_tree.size(weight)
    if sampled_tree_weight < minimum_sampled_tree_weight:
        minimum_sampled_tree = sampled_tree.copy()
        minimum_sampled_tree_weight = sampled_tree_weight
Now that we have the minimum sampled tree, we need to orient the edge directions to keep the cost equal to that minimum tree.
We can do this by iterating over the edges in minimum_sampled_tree
and checking the edge weights in the original graph $G$.
Using $G$ is required here if we did not record the minimum direction, which is a possibility when we create z_support.
t_star = nx.DiGraph()
for u, v, d in minimum_sampled_tree.edges(data=weight):
    if d == G[u][v][weight]:
        t_star.add_edge(u, v, weight=d)
    else:
        t_star.add_edge(v, u, weight=d)
Next we create a mapping of nodes to node demands for the minimum cost flow problem which was discussed earlier in this post.
I think that using a dict is the best option as it can be passed into set_node_attributes
all at once before finding the minimum cost flow.
node_demands = {n: t_star.out_degree(n) - t_star.in_degree(n) for n in t_star}
nx.set_node_attributes(G, node_demands, "demand")
flow_dict = nx.min_cost_flow(G)
Take the Eulerian circuit and shortcut it on the way out.
Here we can add the support of the flow directly to t_star
to simulate adding the two graphs together.
for u in flow_dict:
    for v in flow_dict[u]:
        if (u, v) not in t_star.edges and flow_dict[u][v] > 0:
            t_star.add_edge(u, v)

eulerian_circuit = nx.eulerian_circuit(t_star)
return _shortcutting(eulerian_circuit)
That should be it.
Once the code for asadpour_tsp
is written it will need to be tested.
I’m not sure how I’m going to create the test cases yet, but I do plan on testing it using real world airline ticket prices, as that is my go-to example for the asymmetric traveling salesman problem.
A. Asadpour, M. X. Goemans, A. Madry, S. O. Gharan, and A. Saberi, An O(log n / log log n)-approximation algorithm for the asymmetric traveling salesman problem, Operations Research, 65 (2017), pp. 1043-1061.
D. Williamson, ORIE 633 Network Flows Lecture 11, 11 Oct 2007, https://people.orie.cornell.edu/dpw/orie633/LectureNotes/lecture11.pdf.
The heavy lifting I did in the preliminary post certainly paid off here!
In just one day I was able to implement sample_spanning_tree
and its two helper functions.
This was a very easy function to implement.
It followed exactly from the pseudo code and was working with spanning_tree_distribution before I started on sample_spanning_tree.
This function was more difficult than I originally anticipated.
The code for the main body of the function only needed minor tweaks to work with the specifics of python such as shuffle
being in place and returning None
and some details about how sets work.
For example, I add edge $e$ to $U$ before calling prepare_graph on it, and then invert the if statement to remove $e$ from $U$.
Those portions are functionally the same.
The issues I had with this function all stem back to contracting multiple nodes in a row and how that affects the graph.
As a side note, the contracted_edge function in NetworkX is a wrapper for contracted_nodes, and the latter has a copy keyword argument that is assumed to be True by the former function. It was a trivial change to extend this functionality to contracted_edge, but in the end I used contracted_nodes so the whole thing is moot.
First recall how edge contraction, or in this case node contraction, works. Two nodes are merged into one which is connected by the same edges which connected the original two nodes. Edges between those two nodes become self loops, but in this case I prevented the creation of self loops as directed by Kulkarni. If a node which is not contracted has edges to both of the contracted nodes, we insert a parallel edge between them. I struggled with NetworkX’s API about the graph classes in a past post titled The Entropy Distribution.
For NetworkX’s implementation, we would call nx.contracted_nodes(G, u, v)
and u
and v
would always be merged into u
, so v
is the node which is no longer in the graph.
Now imagine that we have three edges to contract because they are all in $U$ which look like the following.
If we process this from left to right, we first contract nodes 0 and 1. At this point, the edge $\{1, 2\}$ no longer exists in $G$ as node 1 itself has been removed. However, we would still need to contract the new $\{0, 2\}$ edge, which is equivalent to the old $\{1, 2\}$ edge.
My first attempt to solve this was… messy and didn’t work well.
I developed an if-elif
chain for which endpoints of the contracting edge no longer existed in the graph and tried to use dict comprehension to force a dict to always be up to date with which vertices were equivalent to each other.
It didn’t work and was very messy.
Fortunately there was a better solution. This next bit of code I actually first used in my Graph Algorithms class from last semester. In particular it is the merge-find or disjoint set data structure from the components algorithm (code can be found here and more information about the data structure here).
Basically we create a mapping from a node to that node’s representative.
In this case a node’s representative is the node that is still in $G$ but the input node has been merged into through a series of contractions.
In the above example, once node 1 is merged into node 0, 0 would become node 1’s representative.
We search recursively through the merged_nodes
dict until we find a node which is not in the dict, meaning that it is still its own representative and therefore in the graph.
This will let us handle a representative node later being merged into another node.
Finally, we take advantage of path compression so that lookup times remain good as the number of entries in merged_nodes
grows.
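A stand-alone sketch of that lookup, with the merged_nodes dict from the post and a function name of my choosing:

```python
# merge-find representative lookup with path compression
def find_representative(node, merged_nodes):
    # walk up the chain of contractions to the surviving node
    head = node
    path = []
    while head in merged_nodes:
        path.append(head)
        head = merged_nodes[head]
    for n in path:  # path compression keeps later lookups fast
        merged_nodes[n] = head
    return head

merged_nodes = {1: 0, 2: 1, 3: 2}  # 3 merged into 2, 2 into 1, 1 into 0
assert find_representative(3, merged_nodes) == 0
assert merged_nodes[3] == 0        # the chain is now compressed
```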
This worked well once I caught a bug where the prepare_graph
function tried to contract a node with itself.
However, the function was running and returning a result but it could have one or two more edges than needed which of course means it is not a tree.
I was testing on the symmetric fractional Held Karp graph by the way, so with six nodes it should have five edges per tree.
I seeded the random number generator for one of the seven edge results and started to debug! Recall that once we generate a uniform decimal between 0 and 1 we compare it to
$$ \lambda_e \times \frac{K_{G \backslash \{e\}}}{K_G} $$
where $K$ is the result of Kirchhoff’s Theorem on the subscripted graph. One probability that caught my eye had the fractional component equal to 1. This means that adding $e$ to the set of contracted edges had no effect on whether that edge should be included in the final spanning tree. Closer inspection revealed that the edge $e$ in question already could not be picked for the spanning tree: since it did not exist in $G$, it could not exist in $G \backslash \{e\}$.
Imagine the following situation. We have three edges to contract but they form a cycle of length three.
If we contract $\{0, 1\}$ and then $\{0, 2\}$ what does that mean for $\{1, 2\}$? Well, $\{1, 2\}$ would become a self loop on vertex 0, but we are deleting self loops so it cannot exist. It has to have a probability of 0. Yet in the current implementation of the function, it would have a probability of $\lambda_{\{1, 2\}}$. So, I have to check to see if a representative edge exists for the edge we are considering in the current iteration of the main for loop.
The solution to this is to return the merge-find data structure with the prepared graph for $G$ and then check that an edge with endpoints at the two representatives for the endpoints of the original edge exists.
If so, use the kirchhoff value as normal but if not make G_e_total_tree_weight
equal to zero so that this edge cannot be picked.
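Here is a toy reproduction of that cycle situation, using nx.contracted_nodes and a minimal representative lookup (the names rep and merged_nodes are mine):

```python
# contract {0,1} then {0,2} in a triangle; {1,2} then has no representative
import networkx as nx

G = nx.MultiGraph([(0, 1), (1, 2), (0, 2)])
merged_nodes = {}

def rep(node):
    # follow the merge-find chain to the surviving representative
    while node in merged_nodes:
        node = merged_nodes[node]
    return node

for u, v in [(0, 1), (0, 2)]:
    u, v = rep(u), rep(v)
    G = nx.contracted_nodes(G, u, v, self_loops=False)
    merged_nodes[v] = u

# both endpoints of the remaining edge {1, 2} now resolve to node 0,
# so no representative edge exists and its probability must be 0
assert rep(1) == rep(2) == 0
assert G.number_of_edges(rep(1), rep(2)) == 0
```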
Finally I was able to sample trees from G
consistently, but did they match the expected probabilities?
The first test I was working with sampled one tree and checked to see if it was actually a tree. I first expanded it to sample 1000 trees and make sure that they were all trees. At this point, I was confident that the function would always return a tree, but I still needed to check the tree distribution.
So after a lot of difficulty writing the test itself to check which of the 75 possible spanning trees I had sampled I was ready to check the actual distribution. First, the test iterates over all the spanning trees, records the products of edge weights and normalizes the data. (Remember that the actual probability is only proportional to the product of edge weights). Then I sample 50000 trees and record the actual frequency. Next, it calculates the percent error from the expected probability to the actual frequency. The sample size is so large because at 1000 trees the percent error was all over the place but, as the Law of Large Numbers dictates, the larger sample shows the actual results converging to the expected results so I do believe that the function is working.
That being said, seeing the percent error converge to be less than 15% for all 75 spanning trees is not a very rigorous test. I can either implement a formal test using the percent error or try to create a Chi squared test using scipy.
This morning I was able to get a Chi squared test working and it was definitely the correct decision. I was able to reduce the sample size from 50,000 to 1200, which is a near minimum sample. In order to run a Chi squared test you need an expected frequency of at least 5 for all of the categories, so I had to find the number of samples to guarantee that for a tree with a probability of about 0.4%, which was 1163, and I rounded it up to 1200.
I am testing at the 0.01 significance level, so this test may fail without reason 1% of the time, but it is still an overall good test for the distribution.
A. Asadpour, M. X. Goemans, A. Madry, S. O. Gharan, and A. Saberi, An O(log n / log log n)-approximation algorithm for the asymmetric traveling salesman problem, SODA ’10, Society for Industrial and Applied Mathematics, 2010, pp. 379-389, https://dl.acm.org/doi/abs/10.5555/1873601.1873633.
V. G. Kulkarni, Generating random combinatorial objects, Journal of algorithms, 11 (1990), pp. 185–207.
Data visualization is a key step in a data science pipeline. Python offers great possibilities when it comes to representing some data graphically, but it can be hard and time-consuming to create the appropriate chart.
The Python Graph Gallery is here to help. It displays many examples, always providing the reproducible code, allowing you to build the desired chart in minutes.
The gallery currently provides more than 400 chart examples. Those examples are organized in 40 sections, one for each chart type: scatterplot, boxplot, barplot, treemap and so on. Those chart types are organized in 7 big families, as suggested by data-to-viz.com: one for each visualization purpose.
It is important to note that not only the most common chart types are covered. Lesser known charts like chord diagrams, streamgraphs or bubble maps are also available.
Each section always starts with some very basic examples, making it possible to understand how to build a chart type in a few seconds. Hopefully, applying the same technique to another dataset will thus be very quick.
For instance, the scatterplot section starts with this matplotlib example. It shows how to create a dataset with pandas and plot it with the plot() function. The main graph arguments like linestyle and marker are described to make sure the code is understandable.
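In that spirit, a minimal pandas-plus-matplotlib scatterplot might look like this (the data is made up):

```python
# a basic scatterplot: build a DataFrame, then plot it with markers only
import matplotlib
matplotlib.use("Agg")  # headless backend so the snippet runs anywhere
import matplotlib.pyplot as plt
import pandas as pd

df = pd.DataFrame({"x": range(1, 11), "y": [v ** 2 for v in range(1, 11)]})
ax = df.plot(x="x", y="y", linestyle="none", marker="o")
ax.set_title("A basic scatterplot")
plt.savefig("scatterplot.png")
```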
The gallery uses several libraries like seaborn or plotly to produce its charts, but it is mainly focused on matplotlib. Matplotlib comes with great flexibility and allows you to build any kind of chart without limits.
A whole page is dedicated to matplotlib. It describes how to solve recurring issues like customizing axes or titles, adding annotations (see below) or even using custom fonts.
The gallery is also full of non-straightforward examples. For instance, it has a tutorial explaining how to build a streamchart with matplotlib. It is based on the stackplot()
function and adds some smoothing to it:
Last but not least, the gallery also displays some publication ready charts. They usually involve a lot of matplotlib code, but showcase the fine grain control one has over a plot.
Here is an example with a post inspired by Tuo Wang’s work for the tidyTuesday project. (Code translated from R available here)
The python graph gallery is an ever growing project. It is open-source, with all its related code hosted on github.
Contributions to the gallery are very welcome. Each blogpost is just a jupyter notebook, so suggestions are very easy to make through issues or pull requests!
The python graph gallery is a project developed by Yan Holtz in his free time. It can help you improve your technical skills when it comes to visualizing data with python.
The gallery belongs to an ecosystem of educative websites. Data to viz describes best practices in data visualization, the R, python and d3.js graph galleries provide technical help to build charts with the 3 most common tools.
For any question regarding the project, please say hi on twitter at @R_Graph_Gallery!
In order to test the exponential distribution that I generate using spanning_tree_distribution
, I need to be able to sample a tree from the distribution.
The primary citation used in the Asadpour paper is Generating Random Combinatorial Objects by V. G. Kulkarni (1989).
While I was not able to find an online copy of this article, the Michigan Tech library did have a copy that I was able to read.
Kulkarni gave a general overview of the algorithm in Section 2, but Section 5 is titled ‘Random Spanning Trees’ and starts on page 200. First, let’s check that the preliminaries for the Kulkarni paper on page 200 match the Asadpour algorithm.
Let $G = (V, E)$ be an undirected network of $M$ nodes and $N$ arcs… Let $\mathfrak{B}$ be the set of all spanning trees in $G$. Let $\alpha_i$ be the positive weight of arc $i \in E$. Define the weight $w(B)$ of a spanning tree $B \in \mathfrak{B}$ as
$$w(B) = \prod_{i \in B} \alpha_i$$
Also define
$$n(G) = \sum_{B \in \mathfrak{B}} w(B)$$
In this section we describe an algorithm to generate $B \in \mathfrak{B}$ so that
$$P\{B \text{ is generated}\} = \frac{w(B)}{n(G)}$$
Immediately we can see that $\mathfrak{B}$ is the same as $\mathcal{T}$ from the Asadpour paper, the set of all spanning trees. The weight of each edge is $\alpha_i$ for Kulkarni and $\lambda_e$ to Asadpour. As for the product of the weights of the graph being the probability, the Asadpour paper states on page 382
Given $\lambda_e \geq 0$ for $e \in E$, a *$\lambda$-random tree* $T$ of $G$ is a tree $T$ chosen from the set of all spanning trees of $G$ with probability proportional to $\prod_{e \in T} \lambda_e$.
So this is not a concern. Finally, $n(G)$ can be written as
$$\sum_{T \in \mathcal{T}} \prod_{e \in T} \lambda_e$$
which does appear several times throughout the Asadpour paper. Thus the preliminaries between the Kulkarni and Asadpour papers align.
The specialized version of the general algorithm which Kulkarni gives is Algorithm A8 on page 202.
$U = \emptyset,$ $V = E$
Do $i = 1$ to $N$;
$\qquad$Let $a = n(G(U, V))$
$\qquad\qquad a' = n(G(U \cup \{i\}, V))$
$\qquad$Generate $Z \sim U[0, 1]$
$\qquad$If $Z \leq \alpha_i \times \left(a’ / a\right)$
$\qquad\qquad$then $U = U \cup \{i\}$,
$\qquad\qquad$else $V = V - \{i\}$
$\qquad$end.
Stop. $U$ is the required spanning tree.
Now we have to understand this algorithm so we can create pseudo code for it.
First as a notational explanation, the statement “Generate $Z \sim U[0, 1]$” means picking a uniformly random variable over the interval $[0, 1]$ which is independent of all the random variables generated before it (See page 188 of Kulkarni for more information).
The built-in Python module `random` can be used here. Looking at the real-valued distributions, I believe that using `random.uniform(0, 1)` is preferable to `random.random()` since the latter can never generate a ‘1’, and 1 is explicitly part of the interval discussed in the Kulkarni paper.
The other notational oddity would be statements like $G(U, V)$, which in this case does not refer to a graph with vertex set $U$ and edge set $V$, as $U$ and $V$ are both subsets of the full edge set $E$.
$G(U, V)$ is defined in the Kulkarni paper on page 201 as
Let $G(U, V)$ be a subgraph of $G$ obtained by deleting arcs that are not in $V$, and collapsing arcs that are in $U$ (i.e., identifying the end nodes of arcs in $U$) and deleting all self-loops resulting from these deletions and collapsing.
This language seems a bit… clunky, especially for the edges in $U$.
In this case, “collapsing arcs that are in $U$” would be contracting those edges without self loops.
Fortunately, this functionality is part of NetworkX via `networkx.algorithms.minors.contracted_edge` with the `self_loops` keyword argument set to `False`. As for the edges in $E - V$, removing them can be easily accomplished with `networkx.MultiGraph.remove_edges_from`.
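As a quick sketch of how these two calls fit together (the toy graph and edge sets here are of my own choosing, purely for illustration):

```python
import networkx as nx

# Toy instance: keep the edges in V, then contract (0, 1).
G = nx.MultiGraph([(0, 1), (0, 2), (1, 2), (0, 3), (2, 3)])
V = {(0, 1), (0, 2), (1, 2), (0, 3)}  # (2, 3) is not in V

# Delete arcs that are not in V...
G.remove_edges_from([e for e in list(G.edges()) if e not in V])
# ...then collapse the arc in U, discarding any resulting self-loops.
H = nx.contracted_edge(G, (0, 1), self_loops=False)

print(sorted(H.nodes()))        # node 1 has been merged into node 0
print(H.number_of_edges(0, 2))  # both parallel (0, 2) edges survive
```

Note that the result must be a `MultiGraph` so that the parallel edges created by the contraction are kept.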
Once we have generated $G(U, V)$, we need to find $n(G(U, V))$.
This can be done with something we are already familiar with: Kirchhoff’s Tree Matrix Theorem.
All we need to do is create the Laplacian matrix and then find the determinant of the first cofactor.
This code will probably be taken directly from the `spanning_tree_distribution` function. Actually, this is a place to create a broader helper function called `kirchhoffs` which will take a graph and return the total weight of the spanning trees in it; it would then be used as part of `q` in `spanning_tree_distribution` and in `sample_spanning_tree`.
From here we compare $Z$ to $\alpha_i \left(a' / a\right)$ to see if that edge is added to the tree or discarded. Understanding the process of the algorithm gives context to the meaning of $U$ and $V$: $U$ is the set of edges which we have decided to include in the spanning tree, while $V$ is the set of edges still under consideration for $U$ (roughly speaking).
Now there is still a bit of ambiguity in the algorithm that Kulkarni gives, mainly about $i$. In the loop condition, $i$ is an integer from 1 to $N$, the number of arcs in the graph, but it is later added to $U$, so it has to be an edge. Referencing the Asadpour paper, it starts its description of sampling the $\lambda$-random tree on page 383 by saying “The idea is to order the edges $e_1, \dots, e_m$ of $G$ arbitrarily and process them one by one”. So I believe that the edge interpretation is correct and that the integer notation used in Kulkarni assumes a mapping of the edges to $\{1, 2, \dots, N\}$ has occurred.
Time to write some pseudo code! Starting with the function signature
```
def sample_spanning_tree
    Input: A multigraph G whose edges contain a lambda value stored at lambda_key
    Output: A new graph which is a spanning tree of G
```
Next up is a bit of initialization
```
U = set()
V = set(G.edges)
shuffled_edges = shuffle(G.edges)
```
Now the definitions of `U` and `V` come directly from Algorithm A8, but `shuffled_edges` is new. My thoughts are that this will be what we use for $i$: we shuffle the edges of the graph and then iterate over them within the loop.
Next we have the loop.
```
for edge e in shuffled_edges
    G_total_tree_weight = kirchhoffs(prepare_graph(G, U, V))
    G_e_total_tree_weight = kirchhoffs(prepare_graph(G, U + {e}, V))
    z = uniform(0, 1)
    if z <= e[lambda_key] * G_e_total_tree_weight / G_total_tree_weight
        U = U + {e}
        if len(U) == G.number_of_nodes - 1
            # Spanning tree complete, no need to continue to consider edges.
            spanning_tree = nx.Graph()
            spanning_tree.add_edges_from(U)
            return spanning_tree
    else
        V = V - {e}
```
The main loop body uses two other functions which are not part of the standard NetworkX library, `kirchhoffs` and `prepare_graph`. As I mentioned before, `kirchhoffs` will apply Kirchhoff’s Theorem to the graph. Pseudo code for this is below and is strongly based on the existing code in `q` of `spanning_tree_distribution`, which will be updated to use this new helper.
```
def kirchhoffs
    Input: A multigraph G and weight key, weight
    Output: The total weight of the graph's spanning trees
    G_laplacian = laplacian_matrix(G, weight=weight)
    G_laplacian = G_laplacian.delete(0, 0)
    G_laplacian = G_laplacian.delete(0, 1)
    return det(G_laplacian)
```
The process for the other helper, `prepare_graph`, is also given.

```
def prepare_graph
    Input: A graph G, a set of edges to contract U, and a set of edges to keep V
    Output: A subgraph of G in which the edges in U are contracted and the edges
            not in V are removed
    result = G.copy()
    edges_to_remove = set(result.edges).difference(V)
    result.remove_edges_from(edges_to_remove)
    for edge e in U
        result = nx.contracted_edge(result, e, self_loops=False)
    return result
```
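Putting the helpers together, here is a rough, runnable sketch of Algorithm A8 along the lines of the pseudo code above. The names `kirchhoffs`, `prepare_graph`, and `lambda_key` follow my pseudo code; contracting via `nx.contracted_nodes` with a representative map is a workaround I am assuming here, since naively contracting the edges of $U$ one at a time fails once their endpoints have already been merged:

```python
import random

import networkx as nx
import numpy as np


def kirchhoffs(G, weight):
    # Total spanning tree weight: first cofactor of the weighted Laplacian.
    L = nx.laplacian_matrix(G, weight=weight).toarray()
    return float(np.linalg.det(L[1:, 1:]))


def prepare_graph(G, U, V):
    # G(U, V): delete edges outside V, then contract the edges in U.
    H = nx.MultiGraph(G)
    H.remove_edges_from([e for e in list(H.edges()) if e not in V])
    rep = {v: v for v in H}  # representative of each node after contractions
    for u, v in U:
        ru, rv = rep[u], rep[v]
        if ru != rv:
            H = nx.contracted_nodes(H, ru, rv, self_loops=False)
            rep = {k: (ru if r == rv else r) for k, r in rep.items()}
    return H


def sample_spanning_tree(G, lambda_key="lambda", rng=None):
    rng = rng or random.Random()
    U, V = set(), set(G.edges())
    shuffled_edges = list(G.edges())
    rng.shuffle(shuffled_edges)
    for e in shuffled_edges:
        a = kirchhoffs(prepare_graph(G, U, V), lambda_key)
        a_e = kirchhoffs(prepare_graph(G, U | {e}, V), lambda_key)
        if rng.uniform(0, 1) <= G.edges[e][lambda_key] * a_e / a:
            U.add(e)
            if len(U) == G.number_of_nodes() - 1:
                break  # spanning tree complete
        else:
            V.discard(e)
    spanning_tree = nx.Graph()
    spanning_tree.add_nodes_from(G)
    spanning_tree.add_edges_from(U)
    return spanning_tree
```

On a triangle with every $\lambda_e = 1$, each call should return one of the three spanning trees with probability $1/3$.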
There is one other change to the NetworkX API that I would like to make. At the moment, `networkx.algorithms.minors.contracted_edge` is programmed to always return a copy of the graph. Since I need to contract multiple edges in sequence, it would make a lot more sense to do the contraction in place. I would like to add an optional keyword argument to `contracted_edge` called `copy`, defaulting to `True`, so that the existing behavior will not change but I will be able to perform in-place contractions.
The most obvious next step is to implement the functions that I have laid out in the pseudo code, but testing is still a concern. My best bet is to sample, say, 1000 trees and check that the frequency of each tree is proportional to the product of the lambdas on its edges.
That actually just caused me to think of a new test for `spanning_tree_distribution`. If I generate the distribution and then iterate over all of the spanning trees with a `SpanningTreeIterator`, I can sum the total probability of each tree being sampled; if that is not 1 (or very close to it), then I do not have a valid distribution over the spanning trees.
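That check might look something like the following sketch; the helper name and the convention of keying `gamma` by sorted edge tuples are my own assumptions:

```python
import math

import networkx as nx
import numpy as np


def total_sampling_probability(G, gamma):
    # gamma is keyed by sorted edge tuples; lambda_e = exp(gamma_e).
    nx.set_edge_attributes(G, {e: math.exp(g) for e, g in gamma.items()}, "lambda")
    L = nx.laplacian_matrix(G, weight="lambda").toarray()
    partition = float(np.linalg.det(L[1:, 1:]))  # sum of all tree weights (Kirchhoff)
    total = 0.0
    for tree in nx.SpanningTreeIterator(G):
        tree_weight = math.exp(sum(gamma[tuple(sorted(e))] for e in tree.edges()))
        total += tree_weight / partition
    return total


# With gamma = 0 on a triangle, each of the 3 spanning trees has probability 1/3.
G = nx.Graph([(0, 1), (0, 2), (1, 2)])
gamma = {tuple(sorted(e)): 0.0 for e in G.edges()}
print(total_sampling_probability(G, gamma))  # should be (very close to) 1.0
```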
A. Asadpour, M. X. Goemans, A. Mądry, S. Oveis Gharan, and A. Saberi, An O(log n / log log n)-approximation algorithm for the asymmetric traveling salesman problem, SODA ’10, Society for Industrial and Applied Mathematics, 2010, pp. 379–389, https://dl.acm.org/doi/abs/10.5555/1873601.1873633.
V. G. Kulkarni, Generating random combinatorial objects, Journal of Algorithms, 11 (1990), pp. 185–207.
Implementing `spanning_tree_distribution` proved to have some NetworkX difficulties and one algorithmic difficulty.
Recall that the algorithm for creating the distribution is given in the Asadpour paper as
- Set $\gamma = \vec{0}$.
- While there exists an edge $e$ with $q_e(\gamma) > (1 + \epsilon) z_e$:
- Compute $\delta$ such that if we define $\gamma'$ as $\gamma_e' = \gamma_e - \delta$, and $\gamma_f' = \gamma_f$ for all $f \in E \backslash \{e\}$, then $q_e(\gamma') = (1 + \epsilon/2)z_e$.
- Set $\gamma \leftarrow \gamma'$.
- Output $\tilde{\gamma} := \gamma$.
Now, the procedure that I laid out in my last blog titled Entropy Distribution Setup worked well for the while loop portion.
All of my difficulties with the NetworkX API happened in the `q` inner function. After I programmed the function, I of course needed to run it, and at first I just printed the `gamma` dict out so that I could see what the values for each edge were.
My first test uses the symmetric fractional Held Karp solution, and to my surprise, every value of $\gamma$ was returned as 0.
I didn’t think that this was intended behavior because if it was, there would be no reason to include this step in the overall Asadpour algorithm, so I started to dig around the code with PyCharm’s debugger.
The results were, as I suspected, not correct.
I was running Kirchhoff’s tree matrix theorem on the original graph, so the returned probabilities were an order of magnitude smaller than the values of $z_e$ that I was comparing them to.
Additionally, all of the values were the same so I knew that this was a problem and not that the first edge I checked had unusually small probabilities.
So, I returned to the Asadpour paper and started to ask myself questions like:

- Do the probabilities coming out of `q` need to be normalized somehow?
- Should Kirchhoff’s theorem be applied to all of $G$, or only to the support graph $E$?
It was pretty easy to dismiss the first question: if normalization were required it would be mentioned in the Asadpour paper, and without a description of how to normalize, the chances of me finding the ‘correct’ way to do so would be next to impossible. The second question did take some digging. The sections of the Asadpour paper which talk about using Kirchhoff’s theorem all discuss it using the graph $G$, which is why I was originally using all edges in $G$ rather than the edges in $E$. A few hints pointed to the fact that I needed to only consider the edges in $E$, the first being the algorithm overview, which states
Find weights ${\tilde{\gamma}}_{e \in E}$
In particular the $e \in E$ statement says that I do not need to consider the edges which are not in $E$. Secondly, Lemma 7.2 starts by stating
Let $G = (V, E)$ be a graph with weights $\gamma_e$ for $e \in E$
Based on the current state of the function and these hints, I decided to reduce the input graph to `spanning_tree_distribution` to only the edges with $z_e > 0$.
Running the test on the symmetric fractional solution now, it still returned $\gamma = \vec{0}$ but the probabilities it was comparing were much closer during that first iteration.
Due to the fact that I do not have an example graph and distribution to work with, this could be the correct answer, but the fact that every value was the same still confused me.
My next step was to determine the actual probability of an edge being in the spanning trees for the first iteration when $\gamma = \vec{0}$.
This can be easily done with my `SpanningTreeIterator`, exploiting the fact that $\gamma = \vec{0}$ is equivalent to $\lambda_e = 1$ for all $e \in E$, so we can just iterate over the spanning trees and count how often each edge appears.
That script is listed below
```python
import networkx as nx

edges = [
    (0, 1),
    (0, 2),
    (0, 5),
    (1, 2),
    (1, 4),
    (2, 3),
    (3, 4),
    (3, 5),
    (4, 5),
]
G = nx.from_edgelist(edges, create_using=nx.Graph)
edge_frequency = {}
sp_count = 0
for tree in nx.SpanningTreeIterator(G):
    sp_count += 1
    for e in tree.edges:
        if e in edge_frequency:
            edge_frequency[e] += 1
        else:
            edge_frequency[e] = 1
for u, v in edge_frequency:
    print(
        f"({u}, {v}): {edge_frequency[(u, v)]} / {sp_count} = {edge_frequency[(u, v)] / sp_count}"
    )
```
This output revealed that the probabilities returned by `q` should vary from edge to edge and that the correct solution for $\gamma$ is certainly not $\vec{0}$.
```
(networkx-dev) mjs@mjs-ubuntu:~/Workspace$ python3 spanning_tree_frequency.py
(0, 1): 40 / 75 = 0.5333333333333333
(0, 2): 40 / 75 = 0.5333333333333333
(0, 5): 45 / 75 = 0.6
(1, 4): 45 / 75 = 0.6
(2, 3): 45 / 75 = 0.6
(1, 2): 40 / 75 = 0.5333333333333333
(5, 3): 40 / 75 = 0.5333333333333333
(5, 4): 40 / 75 = 0.5333333333333333
(4, 3): 40 / 75 = 0.5333333333333333
```
Let’s focus on that first edge, $(0, 1)$. My brute force script says that it appears in 40 of the 75 spanning trees of the below graph where each edge is labelled with its $z_e$ value.
Yet `q` was saying that the edge was in 24 of the 75 spanning trees.
Since the denominator was correct, I decided to focus on the numerator which is the number of spanning trees in $G\ \backslash\ \{(0, 1)\}$.
That graph would be the following.
An argument can be made that this graph should have a self-loop on vertex 0, but this does not affect the Laplacian matrix in any way so it is omitted here. Basically, the $[0, 0]$ entry of the adjacency matrix would be 1 and the degree of vertex 0 would be 5 and $5 - 1 = 4$ which is what the entry would be without the self loop.
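This claim is easy to check numerically; here is a quick sketch on a toy multigraph of my own choosing, comparing the Laplacian with and without a self-loop:

```python
import networkx as nx
import numpy as np

G1 = nx.MultiGraph([(0, 1), (0, 2), (1, 2)])
G2 = nx.MultiGraph([(0, 1), (0, 2), (1, 2), (0, 0)])  # same graph plus a self-loop on 0

L1 = nx.laplacian_matrix(G1).toarray()
L2 = nx.laplacian_matrix(G2).toarray()

# The self-loop adds 1 to both the row sum used for the degree matrix D and
# to the A[0, 0] entry of the adjacency matrix, so it cancels in L = D - A.
print(np.array_equal(L1, L2))
```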
What was happening was that I was giving `nx.contracted_edge` a graph of the Graph class (not a directed graph, since $E$ is undirected) and was getting a graph of the Graph class back. The Graph class does not support multiple edges between two nodes, so the returned graph had only one edge between node 0 and node 2, which was affecting the overall Laplacian matrix and thus the number of spanning trees. Switching from a Graph to a MultiGraph did the trick, but this subtle change should be mentioned in the NetworkX documentation for the function. I definitely believed that if I contracted an edge, the output should automatically include both of the $(0, 2)$ edges. An argument can be made for changing the default behavior to match this, but at the very least the documentation should explain this problem.
Now the `q` function was returning the correct $40 / 75$ answer for $(0, 1)$, and correct values for the rest of the edges, so long as all of the $\gamma_e$’s were 0. But the test was erroring out with a `ValueError` when I tried to compute $\delta$: `q` was returning a probability of an edge being in a sampled spanning tree of more than 1, which is clearly impossible, but which also caused the denominator of $\delta$ to become negative and violate the domain of the natural log.
During my investigation of this problem, I noticed that after computing $\delta$ and subtracting it from $\gamma_e$, it did not have the desired effect on $q_e$. Recall that we define $\delta$ so that $\gamma_e - \delta$ yields a $q_e$ of $(1 + \epsilon / 2) z_e$. In other words, the effect of $\delta$ is to decrease an edge probability which is too high, but in my current implementation it was having the opposite effect. The value of $q_{(0, 1)}$ was going from 0.5333 to just over 0.6. If I let this trend continue, the program would eventually hit one of those cases where $q_e \geq 1$ and crash the program.
Here I can use edge $(0, 1)$ as an example to show the problem. The original Laplacian matrix for $G$ with $\gamma = \vec{0}$ is
$$ \begin{bmatrix} 3 & -1 & -1 & 0 & 0 & -1 \\\ -1 & 3 & -1 & 0 & -1 & 0 \\\ -1 & -1 & 3 & -1 & 0 & 0 \\\ 0 & 0 & -1 & 3 & -1 & -1 \\\ 0 & -1 & 0 & -1 & 3 & -1 \\\ -1 & 0 & 0 & -1 & -1 & 3 \end{bmatrix} $$
and the Laplacian for $G\ \backslash\ \{(0, 1)\}$ is
$$ \begin{bmatrix} 4 & -2 & -1 & -1 & 0 \\\ -2 & 3 & 0 & 0 & -1 \\\ -1 & 0 & 3 & -1 & -1 \\\ -1 & 0 & -1 & 3 & -1 \\\ 0 & -1 & -1 & -1 & 3 \end{bmatrix} $$
The determinant of the first cofactor is how we get the $40 / 75$. Now consider the Laplacian matrices after we updated $\gamma_{(0, 1)}$ for the first time. The one for $G$ becomes
$$ \begin{bmatrix} 2.74 & -0.74 & -1 & 0 & 0 & -1 \\\ -0.74 & 2.74 & -1 & 0 & -1 & 0 \\\ -1 & -1 & 3 & -1 & 0 & 0 \\\ 0 & 0 & -1 & 3 & -1 & -1 \\\ 0 & -1 & 0 & -1 & 3 & -1 \\\ -1 & 0 & 0 & -1 & -1 & 3 \end{bmatrix} $$
and its first cofactor determinant is reduced from 75 to 61.6. What do we expect the value of the matrix for $G\ \backslash\ \{(0, 1)\}$ to be? Well, we know that the final value of $q_e$ needs to be $(1 + \epsilon / 2) z_e$ or $1.1 \times 0.41\overline{6}$ which is $0.458\overline{3}$. So
$$ \begin{array}{r c l} \displaystyle\frac{x}{61.6} &=& 0.458\overline{3} \\\ x &=& 28.2\overline{3} \end{array} $$
and the value of the first cofactor determinant should be $28.2\overline{3}$. However, the contracted Laplacian for $(0, 1)$ after the value of $\gamma_e$ is updated is
$$ \begin{bmatrix} 4 & -2 & -1 & -1 & 0 \\\ -2 & 3 & 0 & 0 & -1 \\\ -1 & 0 & 3 & -1 & -1 \\\ -1 & 0 & -1 & 3 & -1 \\\ 0 & -1 & -1 & -1 & 3 \end{bmatrix} $$
the same as before! The only edge with a different $\gamma_e$ than before is $(0, 1)$, but since it is the contracted edge it is no longer in the graph any more and thus cannot affect the value of the first cofactor’s determinant!
But if we change the algorithm to add $\delta$ to $\gamma_e$ rather than subtract it, the determinant of the first cofactor of $G\ \backslash\ \{e\}$’s Laplacian will not change, but the determinant of the first cofactor of $G$’s Laplacian will increase. This reduces the overall probability of picking $e$ in a spanning tree. And, if we happen to use the same formula for $\delta$ as before for our example of $(0, 1)$, then $q_{(0, 1)}$ becomes $0.449307$. Recall our target value of $0.458\overline{3}$. This answer has a $-1.96\%$ error.
$$ \begin{array}{r c l} \text{error} &=& \frac{0.449307 - 0.458333}{0.458333} \times 100 \\\ &=& \frac{-0.009026}{0.458333} \times 100 \\\ &=& -0.019693 \times 100 \\\ &=& -1.9693\% \end{array} $$
Also, the test now completes without error.
Further research and discussion with my mentors revealed just how flawed my original analysis was. In the next step, sampling the spanning trees, adding anything to $\gamma$ would directly increase the probability that the edge would be sampled. That being said, the original problem that I found was still an issue.
Going back to the notion that we need a graph whose spanning trees map one-to-one onto the spanning trees of $G$ which contain the desired edge, this is still the key idea which lets us use Kirchhoff’s Tree Matrix Theorem. And contracting the edge will still give a graph in which every spanning tree can be mapped to a corresponding spanning tree which includes $e$. However, the weights of those spanning trees do not quite map between the two graphs.
Recall that we are dealing with a multiplicative weight function, so the final weight of a tree is the product of all the $\lambda$’s on its edges.
$$ c(T) = \prod_{e \in T} \lambda_e $$
The above statement can be expanded into
$$ c(T) = \lambda_1 \times \lambda_2 \times \dots \times \lambda_{|E|} $$
with some arbitrary ordering of the edges $1, 2, \dots |E|$. Because the ordering of the edges is arbitrary and due to the associative property of multiplication, we can assume without loss of generality that the desired edge $e$ is the last one in the sequence.
Any spanning tree in $G \backslash \{e\}$ cannot include that last $\lambda$ because that edge does not exist in the graph. Therefore, in order to recover the weight of the corresponding tree in $G$, we need to multiply $\lambda_e$ back into the weight of the contracted tree. So, we can now state that
$$ c(T \in \mathcal{T} : T \ni e) = \lambda_e \prod_{f \in T} \lambda_f\ \forall\ T \in G \backslash \{e\} $$
or that for every tree in $G \backslash \{e\}$, the cost of the corresponding tree in $G$ is the product of its edge $\lambda$’s times $\lambda_e$ for the desired edge. Now recall that $q_e(\gamma)$ is
$$ \frac{\sum_{T \ni e} \exp(\gamma(T))}{\sum_{T \in \mathcal{T}} \exp(\gamma(T))} $$
In particular we are dealing with the numerator of the above fraction and using $\lambda_e = \exp(\gamma_e)$ we can rewrite it as
$$ \sum_{T \ni e} \exp(\gamma(T)) = \sum_{T \ni e} \prod_{f \in T} \lambda_f $$
Since we now know that we are missing the $\lambda_e$ term, we can add it into the expression.
$$ \sum_{T \ni e} \lambda_e \times \prod_{f \in T, f \not= e} \lambda_f $$
Using the rules of summation, we can pull the $\lambda_e$ factor out of the summation to get
$$ \lambda_e \times \sum_{T \ni e} \prod_{f \in T, f \not= e} \lambda_f $$
And since applying Kirchhoff’s Theorem to $G \backslash \{e\}$ yields everything except the factor of $\lambda_e$, we can just multiply it back in manually.
This would let the pseudo code for `q` become
```
def q
    input: e, the edge of interest
    # Create the laplacian matrices
    write lambda = exp(gamma) into the edges of G
    G_laplace = laplacian(G, lambda)
    G_e = nx.contracted_edge(G, e)
    G_e_laplace = laplacian(G_e, lambda)
    # Delete a row and column from each matrix to make a cofactor matrix
    G_laplace.delete((0, 0))
    G_e_laplace.delete((0, 0))
    # Calculate the determinant of the cofactor matrices
    det_G_laplace = G_laplace.det
    det_G_e_laplace = G_e_laplace.det
    # return q_e
    return lambda_e * det_G_e_laplace / det_G_laplace
```
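Translating this into real code (the helper below is my own sketch, with `gamma` as a dict keyed by the edges of `G`), we can reproduce the $40/75$ value from the worked example:

```python
import math

import networkx as nx
import numpy as np


def q(G, e, gamma):
    """Sketch of the corrected q_e: Kirchhoff on G and on G with e contracted,
    multiplying lambda_e back into the numerator."""
    nx.set_edge_attributes(G, {f: math.exp(g) for f, g in gamma.items()}, "lambda")
    L = nx.laplacian_matrix(G, weight="lambda").toarray()
    # Contract e in a MultiGraph so that parallel edges are kept.
    G_e = nx.contracted_edge(nx.MultiGraph(G), e, self_loops=False)
    L_e = nx.laplacian_matrix(G_e, weight="lambda").toarray()
    det_G = np.linalg.det(L[1:, 1:])
    det_G_e = np.linalg.det(L_e[1:, 1:])
    return G.edges[e]["lambda"] * det_G_e / det_G


# The 6-node example graph from the frequency script, with gamma = 0:
edges = [(0, 1), (0, 2), (0, 5), (1, 2), (1, 4), (2, 3), (3, 4), (3, 5), (4, 5)]
G = nx.Graph(edges)
gamma = {e: 0.0 for e in G.edges()}
print(q(G, (0, 1), gamma))  # should match 40 / 75 = 0.5333...
```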
Making this small change to `q` worked very well. I was able to change back to subtracting $\delta$ as the Asadpour paper does, and even added a check to the code so that every time we update a value in $\gamma$ we know that $\delta$ has had the correct effect.
```python
# Check that delta had the desired effect
new_q_e = q(e)
desired_q_e = (1 + EPSILON / 2) * z_e
if round(new_q_e, 8) != round(desired_q_e, 8):
    raise Exception
```
And the test passes without fail!
I technically do not know if this distribution is correct until I can start to sample from it. I have turned the check I have been working with into a proper test, but since my oracle is the program itself, the only way it can fail is if I change the function’s behavior without knowing it.
So I must press onwards to write `sample_spanning_tree` and get a better test for both of these functions. As for the tests of `spanning_tree_distribution`, I would of course like to add more test cases.
However, if the Held Karp relaxation returns a cycle as an answer, then there will be $n - 1$ path spanning trees, and the notion of creating this distribution is moot in the first place, as we have already found a solution to the ATSP.
I really need more truly fractional Held Karp solutions to expand the test of these next two functions.
A. Asadpour, M. X. Goemans, A. Mądry, S. Oveis Gharan, and A. Saberi, An O(log n / log log n)-approximation algorithm for the asymmetric traveling salesman problem, Operations Research, 65 (2017), pp. 1043–1061.
“Well? Did you get it working?!”
Before I answer that question, if you’re missing the context, check out my previous blog’s last few lines… I promise it won’t take you more than 30 seconds to get the whole problem!
With this short writeup, I intend to talk about what we did and why we did, what we did. XD
Ring any bells? Remember OS (Operating Systems)? It’s one of the core CS subjects which I bunked then and regret now. (╥﹏╥)
The Wikipedia page has a two-line explanation if you have no idea what an Ostrich Algorithm is… but I know most of y’all won’t bother clicking it XD, so here goes:
The ostrich algorithm is a strategy of ignoring potential problems by “sticking one’s head in the sand and pretending there is no problem.”
An important thing to note: it is used when it is more cost-effective to allow the problem to occur than to attempt its prevention.
As you might’ve guessed by now, we ultimately ended up with the not-so-clean API (more on this later).
The highest level overview of the problem was:
❌ fontTools -> buffer -> ttconv_with_buffer
✅ fontTools -> buffer -> tempfile -> ttconv_with_file
The first approach created corrupted outputs; the second approach worked fine. A point to note here is that Method 1 is better in terms of separating reading the file from parsing the data.
`ttconv_with_buffer` is a modification of the original `ttconv_with_file` that allows it to take a file buffer instead of a file path. You might be tempted to say: “Well, `ttconv_with_buffer` must be wrongly modified, duh.”
Logically, yes. `ttconv` was designed to work with a file path and not a file object (buffer), and modifying a codebase written in 1998 turned out to be a larger pain than we anticipated.
He even did, but the effort to get that to production, or to fix `ttconv` embedding, was far greater than just getting on with the second method. That damn ostrich really helped us get out of that debugging hell. 🙃
Finally, we’re onto the second subgoal for the summer: Font Fallback!
To give an idea about how things work right now:
```python
matplotlib.rcParams["font.family"] = ["list", "of", "font", "families"]
```
As soon as a font is found by iterating the font-family, all text is rendered by that and only that font.
You can immediately see the problem with this approach: using the same font for every character means any glyph not present in that font will not be rendered, and will instead show up as a hollow rectangle called “tofu”.
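For concreteness, the current single-font behavior is controlled by that one rcParam (the family names below are just examples):

```python
import matplotlib

# The first family in this list that resolves on the system wins,
# and is then used for ALL text in the figure.
matplotlib.rcParams["font.family"] = ["DejaVu Sans", "Noto Sans CJK JP"]
print(matplotlib.rcParams["font.family"])
```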
And that is exactly the first milestone! That is, parsing the entire list of font families to get an intermediate representation of a multi-font interface.
Imagine if you had the superpower to change the Python standard library’s internal functions, without consulting anybody. Let’s say you wanted to write a solution by hooking in and changing, say, the `str("dumb")` implementation to return:
```python
>>> str("dumb")
["d", "u", "m", "b"]
```
Pretty “dumb”, right? xD
For your use case it might work fine, but it would also mean breaking the entire Python userbase’s workflow, not to mention the 1000000+ libraries that depend on the original functionality.
On a similar note, Matplotlib has a public API known as `findfont(prop: str)`, which, when given a string (or FontProperties), finds you a font that best matches the given properties on your system. It is used throughout the library, as well as at multiple other places, including downstream libraries. Being naive as I was, I changed this function signature and submitted the PR. 🥲 Had an insightful discussion about this with my mentors, and soon enough raised the other PR, which didn’t touch the `findfont` API at all.
One last thing to note: even if we do complete the first milestone, we wouldn’t be done yet, since this is just parsing the entire list to get multiple fonts.
We still need to migrate the library’s internal implementation from font-first to text-first!
But that’s for later, for now:
Finally moving on from the Held Karp relaxation, we arrive at the second step of the Asadpour asymmetric traveling salesman problem algorithm, which is given as Algorithm 1 in the Asadpour paper.
Algorithm 1 An $O(\log n / \log \log n)$-approximation algorithm for the ATSP
Input: A set $V$ consisting of $n$ points and a cost function $c\ :\ V \times V \rightarrow \mathbb{R}^+$ satisfying the triangle inequality.
Output: $O(\log n / \log \log n)$-approximation of the asymmetric traveling salesman problem instance described by $V$ and $c$.
- Solve the Held-Karp LP relaxation of the ATSP instance to get an optimum extreme point solution $x^*$. Define $z^*$ as in (5), making it a symmetrized and scaled down version of $x^*$. Vector $z^*$ can be viewed as a point in the spanning tree polytope of the undirected graph on the support of $x^*$ that one obtains after disregarding the directions of arcs (See Section 3.)
- Let $E$ be the support graph of $z^*$ when the direction of the arcs are disregarded. Find weights ${\tilde{\gamma}}_{e \in E}$ such that the exponential distribution on the spanning trees, $\tilde{p}(T) \propto \exp(\sum_{e \in T} \tilde{\gamma}_e)$ (approximately) preserves the marginals imposed by $z^*$, i.e. for any edge $e \in E$, $$\sum_{T \in \mathcal{T} : T \ni e} \tilde{p}(T) \leq (1 + \epsilon) z^*_e$$ for a small enough value of $\epsilon$. (In this paper we show that $\epsilon = 0.2$ suffices for our purpose. See Section 7 and 8 for a description of how to compute such a distribution.)
- Sample $2\lceil \log n \rceil$ spanning trees $T_1, \dots, T_{2\lceil \log n \rceil}$ from $\tilde{p}(.)$. For each of these trees, orient all its edges so as to minimize its cost with respect to our (asymmetric) cost function $c$. Let $T^*$ be the tree whose resulting cost is minimal among all of the sampled trees.
- Find a minimum cost integral circulation that contains the oriented tree $\vec{T}^*$. Shortcut this circulation to a tour and output it. (See Section 4.)
Sections 7 and 8 provide two different methods to find the desired probability distribution, with section 7 using a combinatorial approach and section 8 the ellipsoid method. Considering that there is no ellipsoid solver in the scientific python ecosystem, and my mentors and I have already decided not to implement one within this project, I will be using the method in section 7.
The algorithm given in section 7 is as follows:
- Set $\gamma = \vec{0}$.
- While there exists an edge $e$ with $q_e(\gamma) > (1 + \epsilon) z_e$:
- Compute $\delta$ such that if we define $\gamma'$ as $\gamma_e' = \gamma_e - \delta$, and $\gamma_f' = \gamma_f$ for all $f \in E \backslash \{e\}$, then $q_e(\gamma') = (1 + \epsilon/2)z_e$.
- Set $\gamma \leftarrow \gamma'$.
- Output $\tilde{\gamma} := \gamma$.
This structure is fairly straightforward, but we need to know what $q_e(\gamma)$ is and how to calculate $\delta$.
Finding $\delta$ is very easy; the formula is given in the Asadpour paper (although I did not realize this at the time I wrote my GSoC proposal and re-derived the equation for $\delta$. Fortunately my formula matches the one in the paper.)
$$ \delta = \ln \frac{q_e(\gamma)(1 - (1 + \epsilon / 2)z_e)}{(1 - q_e(\gamma))(1 + \epsilon / 2) z_e} $$
Notice that the formula for $\delta$ is reliant on $q_e(\gamma)$. The paper defines $q_e(\gamma)$ as
$$ q_e(\gamma) = \frac{\sum_{T \ni e} \exp(\gamma(T))}{\sum_{T \in \mathcal{T}} \exp(\gamma(T))} $$
where $\gamma(T) = \sum_{f \in T} \gamma_f$.
The first thing that I noticed is that in the denominator the summation is over all spanning trees in the graph, which for the complete graphs we will be working with is exponential in number, so a ‘brute force’ approach here is useless. Fortunately, Asadpour and team realized that we can use Kirchhoff’s matrix tree theorem to our advantage.
As an aside about Kirchhoff’s matrix tree theorem, I was not familiar with this theorem before this project, so I had to do a bit of reading about it. Basically, if you have the Laplacian matrix of a graph (the degree matrix minus the adjacency matrix), the absolute value of any cofactor is the number of spanning trees in the graph. This was something completely unexpected to me, and I think that it is very cool that this type of connection exists.
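As a quick sanity check of the theorem (toy example of my own choosing), a 4-cycle should have exactly 4 spanning trees, one per deleted edge:

```python
import networkx as nx
import numpy as np

G = nx.cycle_graph(4)
L = nx.laplacian_matrix(G).toarray()
# Any cofactor works; delete the first row and column.
n_trees = round(np.linalg.det(L[1:, 1:]))
print(n_trees)  # a 4-cycle has 4 spanning trees (remove any one edge)
```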
The details of using Kirchhoff’s theorem are given in section 5.3. We will be using a weighted laplacian $L$ defined by
$$ L_{i, j} = \left\{ \begin{array}{l l} -\lambda_e & e = (i, j) \in E \\\ \sum_{e \in \delta({i})} \lambda_e & i = j \\\ 0 & \text{otherwise} \end{array} \right. $$
where $\lambda_e = \exp(\gamma_e)$.
Now, we know that applying Kirchhoff’s theorem to $L$ will return
$$ \sum_{T \in \mathcal{T}} \prod_{e \in T} \lambda_e $$
but which part of $q_e(\gamma)$ is that?
If we apply $\lambda_e = \exp(\gamma_e)$, we find that
$$ \begin{array}{r c l} \sum_{T \in \mathcal{T}} \prod_{e \in T} \lambda_e &=& \sum_{T \in \mathcal{T}} \prod_{e \in T} \exp(\gamma_e) \\\ &=& \sum_{T \in \mathcal{T}} \exp\left(\sum_{e \in T} \gamma_e\right) \\\ &=& \sum_{T \in \mathcal{T}} \exp(\gamma(T)) \end{array} $$
Moving from the first row to the second row is the confusing step; essentially we are exploiting the properties of exponents. Recall that $\exp(x) = e^x$, so we could have written the product as $\prod_{e \in T} e^{\gamma_e}$, but this introduces ambiguity since $e$ would have multiple meanings. Now, for the edges $e_1, e_2, \dots, e_{n-1}$ in the spanning tree $T$, that product can be expanded as
$$ \prod_{e \in T} e^{\gamma_e} = e^{\gamma_{e_1}} \times e^{\gamma_{e_2}} \times \dots \times e^{\gamma_{e_{n-1}}} $$
Each exponential factor has the same base, so we can collapse that into
$$ e^{\gamma_{e_1} + \gamma_{e_2} + \dots + \gamma_{e_{n-1}}} $$
which is also
$$ e^{\sum_{e \in T} \gamma_e} $$
but we know that $\sum_{e \in T} \gamma_e$ is $\gamma(T)$, so it becomes
$$ e^{\gamma(T)} = \exp(\gamma(T)) $$
Once we put that back into the summation we arrive at the denominator in $q_e(\gamma)$, $\sum_{T \in \mathcal{T}} \exp(\gamma(T))$.
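We can check this identity numerically on a small example; the $\gamma$ values below are arbitrary:

```python
import math
from itertools import combinations

import networkx as nx
import numpy as np

G = nx.complete_graph(3)
gamma = {(0, 1): 0.3, (0, 2): -0.1, (1, 2): 0.5}
for e, g in gamma.items():
    G.edges[e]["lambda"] = math.exp(g)

# Kirchhoff: first cofactor of the weighted Laplacian.
L = nx.laplacian_matrix(G, weight="lambda").toarray()
cofactor = np.linalg.det(L[1:, 1:])

# Brute force: every pair of K3's three edges forms a spanning tree.
brute = sum(math.exp(sum(gamma[f] for f in tree)) for tree in combinations(gamma, 2))
print(abs(cofactor - brute))  # agrees to floating point precision
```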
Next, we need to find the numerator of $q_e(\gamma)$. Just as before, a ‘brute force’ approach would be exponential in complexity, so we have to find a better way. The only difference between the numerator and denominator is the condition on the outer summation, with $T \in \mathcal{T}$ changed to $T \ni e$, i.e. every tree containing edge $e$.
There is a way to use Kirchhoff’s matrix tree theorem here as well, if we can find a graph whose spanning trees map one-to-one onto the spanning trees of the original graph which contain the desired edge $e$. In order for a spanning tree to contain edge $e$, we know that the endpoints of $e$, $(u, v)$, will be directly connected to each other. So we are interested in every spanning tree in which we reach vertex $u$ and then leave from vertex $v$ (as opposed to the spanning trees where we reach vertex $u$ and then leave from that same vertex). In a sense, we are treating vertices $u$ and $v$ as the same vertex. We can apply this literally by contracting $e$ from the graph, creating $G / \{e\}$. Every spanning tree in $G / \{e\}$ can be uniquely mapped onto a spanning tree in $G$ which contains the edge $e$.
From here, the logic to show that a cofactor from $L$ is actually the numerator of $q_e(\gamma)$ parallels the logic for the denominator.
At this point, we have all of the needed information to create some pseudocode for the next function in the Asadpour method, spanning_tree_distribution(). Here I will use an inner function q() to find $q_e$.
def spanning_tree_distribution
    input: z, the symmetrized and scaled output of the Held Karp relaxation.
    output: gamma, the maximum entropy exponential distribution for sampling
            spanning trees from the graph.

    def q
        input: e, the edge of interest
        # Create the laplacian matrices
        write lambda = exp(gamma) into the edges of G
        G_laplace = laplacian(G, lambda)
        G_e = nx.contracted_edge(G, e)
        G_e_laplace = laplacian(G_e, lambda)
        # Delete a row and column from each matrix to make a cofactor matrix
        G_laplace.delete((0, 0))
        G_e_laplace.delete((0, 0))
        # Calculate the determinant of the cofactor matrices
        det_G_laplace = G_laplace.det
        det_G_e_laplace = G_e_laplace.det
        # return q_e
        return det_G_e_laplace / det_G_laplace

    # initialize the gamma vector
    gamma = 0 vector of length G.size
    while true
        # We will iterate over the edges in z until we complete the
        # for loop without changing a value in gamma. This will mean
        # that there is not an edge with q_e > 1.2 * z_e
        valid_count = 0
        # Search for an edge with q_e > 1.2 * z_e
        for e in z
            q_e = q(e)
            z_e = z[e]
            if q_e > 1.2 * z_e
                delta = ln((q_e * (1 - 1.1 * z_e)) / ((1 - q_e) * 1.1 * z_e))
                gamma[e] -= delta
            else
                valid_count += 1
        if valid_count == number of edges in z
            break
    return gamma
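To sanity-check the cofactor idea, here is a minimal sketch of the inner q() in Python using NetworkX and NumPy. The helper name q_edge and the keying of gamma by (u, v, key) are my own; a MultiGraph is needed so that contracting an edge keeps parallel edges, which the matrix tree theorem must count.

```python
import numpy as np
import networkx as nx


def q_edge(G, e, gamma):
    """Sketch: fraction of gamma-weighted spanning trees containing
    edge e, via Kirchhoff's weighted matrix-tree theorem.
    G is a nx.MultiGraph; gamma maps (u, v, key) -> exponent."""
    # write lambda_e = exp(gamma_e) onto the edges
    for u, v, k in G.edges(keys=True):
        G[u][v][k]["lam"] = float(np.exp(gamma[u, v, k]))
    L = nx.laplacian_matrix(G, weight="lam").toarray()
    # spanning trees of G / e correspond to spanning trees of G containing e
    G_e = nx.contracted_edge(G, e, self_loops=False)
    L_e = nx.laplacian_matrix(G_e, weight="lam").toarray()
    # any cofactor of the weighted Laplacian equals the weighted tree sum
    cofactor = lambda M: np.linalg.det(M[1:, 1:])
    return cofactor(L_e) / cofactor(L)
```

With gamma identically zero on a triangle, this should return 2/3: two of the three spanning trees contain any given edge.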
The clear next step is to implement the function spanning_tree_distribution using the pseudocode above as an outline. I will start by writing q and testing it with the same graphs which I am using to test the Held Karp relaxation. Once q is complete, the rest of the function seems fairly straightforward.
One thing that I am concerned about is my ability to test spanning_tree_distribution. There are no examples given in the Asadpour research paper and no other easy resources which I could turn to in order to find an oracle. The only method that I can think of right now would be to complete this function, then complete sample_spanning_tree. Once both functions are complete, I can sample a large number of spanning trees to find an experimental probability for each tree, then run a statistical test (such as an h-test) to see if the probability of each tree is near its expected value under the desired distribution, which is proportional to $\exp(\gamma(T))$.
An alternative test would be to use the marginals of the distribution and manually check that
$$ \sum_{T \in \mathcal{T} : T \ni e} p(T) \leq (1 + \epsilon) z^*_e,\ \forall\ e \in E $$
where $p(T)$ is the experimental data from the sampled trees.
Both methods seem very computationally intensive and because they are sampling from a probability distribution they may fail randomly due to an unlikely sample.
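A rough sketch of that marginal check, assuming sampled trees are represented as frozensets of edges (the helper name and representation are my own):

```python
from collections import Counter


def check_marginals(samples, z, eps=0.2):
    """Check that the empirical edge marginals of the sampled trees
    satisfy sum_{T : T contains e} p(T) <= (1 + eps) * z_e for every
    edge e.  `samples` is a list of trees, each a frozenset of edges."""
    n = len(samples)
    marginal = Counter(e for tree in samples for e in tree)
    return all(marginal[e] / n <= (1 + eps) * z_e for e, z_e in z.items())
```

This only estimates the marginals, so as noted above it can fail randomly on an unlikely sample even when the sampler is correct.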
A. Asadpour, M. X. Goemans, A. Mądry, S. Oveis Gharan, and A. Saberi, An O(log n / log log n)-approximation algorithm for the asymmetric traveling salesman problem, Operations Research, 65 (2017), pp. 1043–1061.
This should be my final post about the Held-Karp relaxation! Since my last post titled Implementing The Held Karp Relaxation, I have been testing both the ascent method as well as the branch and bound method.
My first test was to use a truly asymmetric graph rather than a directed graph where the cost in each direction happened to be the same.
In order to create such a test, I needed to know the solution to any such proposed graphs.
I wrote a python script called brute_force_optimal_tour.py which will generate a random graph, print its adjacency matrix, and then check every possible combination of edges to find the optimal tour.
import networkx as nx
from itertools import combinations
import numpy as np
import math
import random


def is_1_arborescence(G):
    """
    Returns true if `G` is a 1-arborescence
    """
    return (
        G.number_of_edges() == G.order()
        and max(d for n, d in G.in_degree()) <= 1
        and nx.is_weakly_connected(G)
    )


# Generate a random adjacency matrix
size = (7, 7)
G_array = np.empty(size, dtype=int)
random.seed()
for r in range(size[0]):
    for c in range(size[1]):
        if r == c:
            G_array[r][c] = 0
            continue
        G_array[r][c] = random.randint(1, 100)

# Print that adjacency matrix
print(G_array)

G = nx.from_numpy_array(G_array, create_using=nx.DiGraph)
num_nodes = G.order()
combo_count = 0
min_weight_tour = None
min_tour_weight = math.inf
test_combo = nx.DiGraph()
for combo in combinations(G.edges(data="weight"), G.order()):
    combo_count += 1
    test_combo.clear()
    test_combo.add_weighted_edges_from(combo)
    # Test to see if test_combo is a tour.
    # This means first that it is a 1-arborescence
    if not is_1_arborescence(test_combo):
        continue
    # It also means that every vertex has a degree of 2
    arborescence_weight = test_combo.size("weight")
    if (
        len([n for n, deg in test_combo.degree if deg == 2]) == num_nodes
        and arborescence_weight < min_tour_weight
    ):
        # Tour found
        min_weight_tour = test_combo.copy()
        min_tour_weight = arborescence_weight

print(
    f"Minimum tour found with weight {min_tour_weight} from {combo_count} combinations of edges\n"
)
for u, v, d in min_weight_tour.edges(data="weight"):
    print(f"({u}, {v}, {d})")
This is useful information even though the ascent method returns a vector: if the ascent method finds this exact solution (i.e. $f(\pi) = 0$), we can calculate that vector from the edges in the solution without having to explicitly enumerate the dict returned by held_karp_ascent().
The first output from the program was a six vertex graph and is presented below.
~ time python3 brute_force_optimal_tour.py
[[ 0 45 39 92 29 31]
[72 0 4 12 21 60]
[81 6 0 98 70 53]
[49 71 59 0 98 94]
[74 95 24 43 0 47]
[56 43 3 65 22 0]]
Minimum tour found with weight 144.0 from 593775 combinations of edges
(0, 5, 31)
(5, 4, 22)
(1, 3, 12)
(3, 0, 49)
(2, 1, 6)
(4, 2, 24)
real 0m9.596s
user 0m9.689s
sys 0m0.241s
First I checked that the ascent method was returning a solution with the same weight, 144, which it was.
Also, every entry in the vector was $0.83\overline{3}$, which is $\frac{5}{6}$, the scaling factor from the Asadpour paper, so I know that it was finding the exact solution. Because of this, my test in test_traveling_salesman.py checks that for all edges in the solution edge set both $(u, v)$ and $(v, u)$ are equal to $\frac{5}{6}$.
For my next test, I created a $7 \times 7$ matrix to test with, and as expected the running time of the python script was much longer.
~ time python3 brute_force_optimal_tour.py
[[ 0 26 63 59 69 31 41]
[62 0 91 53 75 87 47]
[47 82 0 90 15 9 18]
[68 19 5 0 58 34 93]
[11 58 53 55 0 61 79]
[88 75 13 76 98 0 40]
[41 61 55 88 46 45 0]]
Minimum tour found with weight 190.0 from 26978328 combinations of edges
(0, 1, 26)
(1, 3, 53)
(3, 2, 5)
(2, 5, 9)
(5, 6, 40)
(4, 0, 11)
(6, 4, 46)
real 7m28.979s
user 7m29.048s
sys 0m0.245s
Once again, the value of $f(\pi)$ hit 0, so the ascent method returned an exact solution and my testing procedure was the same as for the six vertex graph.
The branch and bound method was not working well with the two example graphs I generated. First, on the seven vertex matrix, I programmed the test and let it run… and run… and run… until I stopped it at just over an hour of execution time. If it took one eighth of that time to brute force the solution, then the branch and bound method truly is not efficient.
I moved to the six vertex graph with high hopes, since I already had a six vertex graph which was correctly executing in a reasonable amount of time. The six vertex graph created a large number of exceptions and errors when I ran the tests. I was able to determine why the errors were being generated, but the context did not conform with my expectations for the branch and bound method.
Basically, direction_of_ascent_kilter() was finding a vertex which was out-of-kilter and returning the corresponding direction of ascent, but find_epsilon() was not finding any valid crossover edges and was returning a maximum distance of travel of $\infty$. While I could change the default return value of find_epsilon() to zero, that would not solve the problem because the value of the vector $\pi$ would get stuck and the program would enter an infinite loop.
I do have an analogy for this situation. Imagine that you are in an unfamiliar city and you have to meet somebody at the tallest building in that city. However, you don’t know the address and have no way to get a GPS route to that building. Instead of wandering around aimlessly, you decide to scan the skyline for the tallest building you can see and start walking down the street which is the closest to matching that direction. Additionally, you have the ability to tell at any given direction how far down the chosen street to go before you need to re-evaluate and pick a new street.
This hypothetical is a better approximation of the ascent method, but the problem here can be demonstrated nonetheless.
After this procedure works for a while, you suddenly find yourself in an unusual situation. You can still see the tallest building, so you know you are not there yet. You know what street will take you closer to the building, but for some reason you cannot move down that street.
From my understanding of the ascent and branch and bound methods, if the direction of ascent exists, then we have to be able to move some amount in that direction without fail, but the branch and bound method was failing to provide an adequate distance to move.
Considering the trouble with the branch and bound method, and that it is not going to be used in the final Asadpour algorithm, I plan on removing it from the NetworkX pull request and moving onward using only the ascent method for the rest of the Asadpour algorithm.
A. Asadpour, M. X. Goemans, A. Mądry, S. Oveis Gharan, and A. Saberi, An O(log n / log log n)-approximation algorithm for the asymmetric traveling salesman problem, Operations Research, 65 (2017), pp. 1043–1061.
M. Held and R. M. Karp, The traveling-salesman problem and minimum spanning trees, Operations Research, 18 (1970), pp. 1138–1162. https://www.jstor.org/stable/169411
]]>"Aitik, how is your GSoC going?"
Well, it’s been a while since I last wrote. But I wasn’t spending time watching Loki either! (that’s a lie.)
During this period the project took on some interesting (and stressful) curves, which I intend to talk about in this small writeup.
In the first week of the coding period, I met one of my new mentors, Jouni. Without him, along with Tom and Antony, the project wouldn’t have moved an inch.
It was initially Jouni’s PR which was my starting point of the first milestone in my proposal, Font Subsetting.
As was proposed by Tom, a good way to understand something is to document your journey along the way! (well, that’s what GSoC wants us to follow anyway right?)
Taking an excerpt from one of the paragraphs I wrote here:
Font Subsetting can be used before generating documents, to embed only the required glyphs within the documents. Fonts can be considered as a collection of these glyphs, so ultimately the goal of subsetting is to find out which glyphs are required for a certain array of characters, and embed only those within the output.
Now this may seem straightforward, right?
The glyph programs can call their own subprograms; for example, characters like ä could be composed by calling subprograms for a and ¨, or → could be composed by a program that changes the display matrix and calls the subprogram for ←.
Since the subsetter has to find out all such subprograms being called by every glyph included in the subset, this is a generally difficult problem!
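For a taste of what this looks like in practice, here is a minimal sketch of driving fontTools’ subsetter. The helper name subset_font and the chosen options are my own illustration, not Matplotlib’s actual code; fontTools computes the closure over composite-glyph components for us, which is exactly the hard part described above.

```python
from io import BytesIO

from fontTools import subset


def subset_font(font_path, text):
    """Subset the font at `font_path` down to the glyphs needed to
    render `text`, returning the new font as an in-memory buffer."""
    options = subset.Options(glyph_names=True)
    font = subset.load_font(font_path, options)
    subsetter = subset.Subsetter(options)
    # populate() closes over composite-glyph components automatically
    subsetter.populate(text=text)
    subsetter.subset(font)
    buf = BytesIO()
    font.save(buf)
    return buf
```

The returned buffer is exactly the kind of in-memory font data discussed below, which ttconv cannot yet consume directly.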
Something which one of my mentors said which really stuck with me:
Matplotlib isn’t a font library, and shouldn’t try to be one.
It’s really easy to fall into the trap of trying to do everything within your own project, which ends up rather hurting itself.
Since this holds true even for Matplotlib, it uses external dependencies like FreeType, ttconv, and the newly proposed fontTools to handle font subsetting, embedding, rendering, and related tasks.
PS: If that font stuff didn’t make sense, I would recommend going through a friendly tutorial I wrote, which is all about Matplotlib and Fonts!
Matplotlib uses an external dependency, ttconv, which was initially forked into Matplotlib’s repository in 2003! ttconv was a standalone command-line utility for converting TrueType fonts to subsetted Type 3 fonts (among other features) written in 1995, which Matplotlib forked in order to make it work as a library.
Over time, there have been a lot of issues with it which were either hard to fix or didn’t attract a lot of attention. (See the above paragraph for a valid reason.)
One major utility which is still used is convert_ttf_to_ps, which takes a font path as input and converts it into a Type 3 or Type 42 PostScript font, which can be embedded within PS/EPS output documents. The guide I wrote (link) contains decent descriptions, the differences between these types of fonts, etc.
Why do we need fontTools? Type 42 subsetting isn’t really supported by ttconv, so we use a new dependency called fontTools, whose ‘full-time job’ is to subset Type 42 fonts for us (among other things). It provides us with a font buffer; however, ttconv expects a font path to embed that font.
Easily enough, this can be done with Python’s tempfile.NamedTemporaryFile:
with tempfile.NamedTemporaryFile(suffix=".ttf") as tmp:
    # fontdata is the subsetted buffer
    # returned from fontTools
    tmp.write(fontdata.getvalue())
    # TODO: allow convert_ttf_to_ps
    # to input file objects (BytesIO)
    convert_ttf_to_ps(
        os.fsencode(tmp.name),
        fh,
        fonttype,
        glyph_ids,
    )
But this is far from a clean API in terms of separating *reading* the file from *parsing* the data.
What we ideally want is to pass the buffer down to convert_ttf_to_ps, and modify the embedding code of ttconv (written in C++). And here we come across a lot of unexplored codebase, which hasn’t been touched much since it was forked.
Funnily enough, just yesterday, after spending a lot of quality time, my mentors and I figured out that the whole logging system of ttconv was broken, all because of a single debugging function. 🥲
This is still an ongoing problem that we need to tackle over the coming weeks; hopefully by the next time I write one of these blogs, it will be resolved!
Again, thanks a ton for spending time reading these blogs. :D
I have now completed my implementation of the ascent and branch and bound methods detailed in the 1970 paper The Traveling-Salesman Problem and Minimum Spanning Trees by Michael Held and Richard M. Karp.
In my last post, titled Understanding the Ascent Method, I completed the first iteration of the ascent method, found an important bug in the find_epsilon() method, and found a more efficient way to determine substitutes in the graph. However, the solution being given was still not the optimal solution.
After discussing my options with my GSoC mentors, I decided to move on to the branch and bound method anyway, with the hope that, because the method is more human-computable and an example was given in the paper by Held and Karp, I would be able to find the remaining flaws. Fortunately, this was indeed the case and I was able to correctly implement the branch and bound method and fix the last problem with the ascent method.
The branch and bound method follows from the ascent method, but tweaks how we determine the direction of ascent and simplifies the expression used for $\epsilon$. As a reminder, we use the notion of an out-of-kilter vertex to find directions of ascent which are unit vectors or negative unit vectors. An out-of-kilter vertex is a vertex which is consistently not connected enough or connected too much in the set of minimum 1-arborescences of a graph. The formal definition is given on page 1151 as
Vertex $i$ is said to be out-of-kilter high at the point $\pi$, if, for all $k \in K(\pi), v_{ik} \geqq 1$; similarly, vertex $i$ is out-of-kilter low at the point $\pi$ if, for all $k \in K(\pi), v_{ik} = -1$.
Here $v_{ik}$ is the degree of vertex $i$ in the $k$th minimum 1-arborescence, minus two.
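To make the definition concrete, here is a small sketch (names my own) that classifies vertices given the degree of each vertex in every minimum 1-arborescence:

```python
def kilter_states(degree_vectors):
    """Classify each vertex as out-of-kilter 'high', 'low', or
    'in kilter', where degree_vectors[k][i] is the degree of vertex i
    in the k-th minimum 1-arborescence and v_ik = degree - 2."""
    n = len(degree_vectors[0])
    states = {}
    for i in range(n):
        v = [deg[i] - 2 for deg in degree_vectors]
        if all(x >= 1 for x in v):
            states[i] = "high"  # always too connected
        elif all(x == -1 for x in v):
            states[i] = "low"  # always under-connected
        else:
            states[i] = "in kilter"
    return states
```

A vertex with mixed signs across the 1-arborescences is in kilter, which is why the whole set $K(\pi)$ has to be inspected.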
First, I created a function called direction_of_ascent_kilter() which returns a direction of ascent based on whether a vertex is out-of-kilter. However, I did not use the method mentioned in the paper by Held and Karp, which is to find a member of $K(\pi, u_i)$, where $u_i$ is the unit vector with 1 in the $i$th location, and check if vertex $i$ has a degree of 1 or more than two. Instead, I knew that I could find the elements of $K(\pi)$ with existing code, so I decided to check the value of $v_{ik}$ for all $k \in K(\pi)$ and, once it is determined that a vertex is out-of-kilter, simply move on to the next vertex. Once I have a mapping of all vertices to their kilter state, I find one which is out-of-kilter and return the corresponding direction of ascent.
The changes to find_epsilon() were very minor: basically removing the denominator from the formula for $\epsilon$ and adding a check for a negative direction of ascent so that the crossover distances become positive and thus valid.
The brand new function which was needed was branch(), which, well… branches according to the Held and Karp paper. The first thing it does is run the linear program from the ascent method to determine if a direction of ascent exists. If the direction does exist, branch. If not, search the set of minimum 1-arborescences for a tour and then branch if a tour does not exist. The branch process itself is rather simple: find the first open edge (an edge not in the partition sets $X$ and $Y$) and then create two new configurations where that edge is either included or excluded, respectively.
Finally, the overall structure of the algorithm, written in pseudocode, is

Initialize pi to be the zero vector.
Add the configuration (∅, ∅, pi, w(0)) to the configuration priority queue.
while configuration_queue is not empty:
    config = configuration_queue.get()
    dir_ascent = direction_of_ascent_kilter()
    if dir_ascent is None:
        branch()
        if solution returned by branch is not None:
            return solution
    else:
        max_dist = find_epsilon()
        update pi
        update edge weights
        update config pi and bound value
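Stripped of the Held-Karp specifics, the loop above is a best-first branch and bound over a priority queue of configurations. A generic skeleton might be sketched like this (the callback names are my own, and the real version also performs the ascent update on each configuration):

```python
import heapq
from itertools import count


def branch_and_bound(initial, lower_bound, branch, is_solution):
    """Best-first branch and bound.  lower_bound(c) bounds a
    configuration from below, branch(c) yields child configurations,
    and is_solution(c) tests whether a configuration is a tour."""
    tie = count()  # tie-breaker so the heap never compares configs
    queue = [(lower_bound(initial), next(tie), initial)]
    while queue:
        _, _, config = heapq.heappop(queue)
        if is_solution(config):
            return config  # first solution popped has the best bound
        for child in branch(config):
            heapq.heappush(queue, (lower_bound(child), next(tie), child))
    return None
```

Because configurations are popped in order of their bound, the first solution returned is optimal when the bound is a true lower bound.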
My initial implementation of the branch and bound method returned the same, incorrect solution as the ascent method, but with different edge weights. As a reminder, I wanted a solution which looked like this:
and I now had two algorithms returning this solution:
As I mentioned before, the branch and bound method is more human-computable than the ascent method, so I decided to follow the execution of my implementation with the one given in [1]. Below, the left side is the data from the Held and Karp paper and on the right my program’s execution on the directed version.
| Undirected Graph | Directed Graph |
| --- | --- |
| Iteration 1: | |
| Starting configuration: $(\emptyset, \emptyset, \begin{bmatrix} 0 & 0 & 0 & 0 & 0 & 0 \end{bmatrix}, 196)$ | Starting configuration: $(\emptyset, \emptyset, \begin{bmatrix} 0 & 0 & 0 & 0 & 0 & 0 \end{bmatrix}, 196)$ |
| Minimum 1-Trees: | Minimum 1-Arborescences: |
| Vertex 3 out-of-kilter LOW | Vertex 3 out-of-kilter LOW |
| $d = \begin{bmatrix} 0 & 0 & 0 & -1 & 0 & 0 \end{bmatrix}$ | $d = \begin{bmatrix} 0 & 0 & 0 & -1 & 0 & 0 \end{bmatrix}$ |
| $\epsilon(\pi, d) = 5$ | $\epsilon(\pi, d) = 5$ |
| New configuration: $(\emptyset, \emptyset, \begin{bmatrix} 0 & 0 & 0 & -5 & 0 & 0 \end{bmatrix}, 201)$ | New configuration: $(\emptyset, \emptyset, \begin{bmatrix} 0 & 0 & 0 & -5 & 0 & 0 \end{bmatrix}, 212)$ |
| Iteration 2: | |
| Minimum 1-Trees: | Minimum 1-Arborescences: |
In order to get these results, I forbade the program from choosing to connect vertex 0 to the same other vertex for both the incoming and outgoing edge. However, it is very clear that, from the start, iteration two was not going to be the same.
I noticed that in the first iteration there were twice as many 1-arborescences as 1-trees, and that the difference was that the cycle can be traversed in both directions. This creates a mapping between 1-trees and 1-arborescences. In the second iteration, there are not twice as many 1-arborescences and that mapping is not present. Vertex 0 always connects to vertex 3 in the arborescences and vertex 5 in the trees. Additionally, the costs of the 1-arborescences are higher than the costs of the 1-trees.
I knew from working on the ascent method that the choice of root node in the arborescences affects the total price. I now wondered if a minimum 1-arborescence could come from a non-minimum spanning arborescence. As it turns out, the answer is yes.
In order to test this hypothesis, I created a simple python script using a modified version of k_pi(). The entire thing is longer than I’d like to put here, but the gist was simple: iterate over all of the spanning arborescences in the graph, tracking the minimum weight, and then print the minimum 1-arborescences that this program finds to compare to the ones that the unaltered one finds. The output is below:
Adding arborescence with weight 212.0
Adding arborescence with weight 212.0
Adding arborescence with weight 212.0
Adding arborescence with weight 204.0
Adding arborescence with weight 204.0
Adding arborescence with weight 196.0
Adding arborescence with weight 196.0
Adding arborescence with weight 196.0
Adding arborescence with weight 196.0
Adding arborescence with weight 196.0
Adding arborescence with weight 196.0
Found 6 minimum 1-arborescences
(1, 5, 30)
(2, 1, 41)
(2, 3, 21)
(4, 2, 35)
(5, 0, 52)
(0, 4, 17)
(1, 2, 41)
(2, 3, 21)
(2, 4, 35)
(4, 0, 17)
(5, 1, 30)
(0, 5, 52)
(2, 3, 21)
(2, 4, 35)
(4, 0, 17)
(5, 1, 30)
(5, 2, 41)
(0, 5, 52)
(2, 4, 35)
(3, 2, 16)
(4, 0, 17)
(5, 1, 30)
(5, 3, 46)
(0, 5, 52)
(2, 3, 21)
(3, 5, 41)
(4, 2, 35)
(5, 1, 30)
(5, 0, 52)
(0, 4, 17)
(2, 3, 21)
(2, 5, 41)
(4, 2, 35)
(5, 1, 30)
(5, 0, 52)
(0, 4, 17)
This was very enlightening. The 1-arborescences of weight 212 were the ones that my branch and bound method was using in the second iteration, but not the true minimum ones. Graphically, those six 1-arborescences look like this:
And suddenly that mapping between the 1-trees and 1-arborescences is back! But why can minimum 1-arborescences come from non-minimum spanning arborescences? Remember that we create 1-arborescences by finding spanning arborescences on the vertex set $\{2, 3, \dots, n\}$ and then connecting the missing vertex with its minimum weight incoming edge and the edge to the root of the spanning arborescence.
This means that even among the true minimum spanning arborescences, the final weight of the 1-arborescence can vary based on the cost of connecting ‘vertex 1’ to the root of the arborescence. I already had to deal with this issue earlier in the implementation of the ascent method. Now suppose that not every vertex in the graph is a root of an arborescence in the set of minimum spanning arborescences. Let the minimum root be the root vertex of the arborescence which is the cheapest to connect to and the maximum root the root vertex which is the most expensive to connect to. If we needed to, we could order the roots from minimum to maximum based on the weight of the edge from ‘vertex 1’ to that root.
Finally, suppose that considering only the set of minimum spanning arborescences results in a set of minimum 1-arborescences which do not use the minimum root and have a total cost $c$ more than the cost of the minimum spanning arborescence plus the cost of connecting to the minimum root.
Continue to consider spanning arborescences in increasing weight, such as the ones returned by the ArborescenceIterator. Eventually the ArborescenceIterator will return a spanning arborescence which has the minimum root. If the cost of the minimum spanning arborescence is $c_{min}$ and the cost of this arborescence is less than $c_{min} + c$, then a new minimum 1-arborescence has been found from a non-minimum spanning arborescence.
It is obviously impractical to consider all of the spanning arborescences in the graph, but because the ArborescenceIterator returns arborescences in order of increasing weight, there is a weight after which it is impossible to produce a minimum 1-arborescence. Let the cost of a minimum spanning arborescence be $c_{min}$ and the costs of connecting to the roots range from $r_{min}$ to $r_{max}$. The worst case minimum 1-arborescence is $c_{min} + r_{max}$, which would connect the minimum spanning arborescence to the most expensive root, and the best case minimum 1-arborescence would be $c_{min} + r_{min}$. With regard to the weight of the spanning arborescence itself, once it exceeds $c_{min} + r_{max} - r_{min}$ we know that, even if it uses the minimum root, the total weight will be greater than the worst case minimum 1-arborescence, so that is the bound with which we use the ArborescenceIterator.
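To illustrate the stopping rule, here is a small sketch that scans (weight, root) pairs in increasing weight, as the iterator would yield them, and stops once the bound $c_{min} + r_{max} - r_{min}$ is exceeded (the data representation and helper name are my own):

```python
import math


def min_one_arborescence(arbs, connect_cost):
    """arbs: (weight, root) pairs sorted by increasing weight;
    connect_cost[root] is the cost of attaching 'vertex 1' to that
    root.  Scans only up to the bound c_min + r_max - r_min."""
    c_min = arbs[0][0]
    r_min = min(connect_cost.values())
    r_max = max(connect_cost.values())
    bound = c_min + r_max - r_min
    best = math.inf
    for weight, root in arbs:
        if weight > bound:
            break  # cannot beat the worst case c_min + r_max anymore
        best = min(best, weight + connect_cost[root])
    return best
```

Note that the second arborescence wins here despite not being a minimum spanning arborescence, which is exactly the phenomenon described above.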
After implementing this boundary for checking spanning arborescences to find minimum 1-arborescences, both methods executed successfully on the test graph.
Now that both the ascent and branch and bound methods are working, they must be tested both for accuracy and performance. Surprisingly, on the test graph I have been using, which is originally from the Held and Karp paper, the ascent method is between 2 and 3 times faster than the branch and bound method. However, this six vertex graph is small and the branch and bound method may yet have better performance on larger graphs. I will have to create larger test graphs and then select whichever method has better performance overall.
Additionally, this is an example where $f(\pi)$, the gap between a tour and 1-arborescence, converges to 0. This is not always the case, so I will need to test on an example where the minimum gap is greater than 0.
Finally, the output of my Held Karp relaxation program is a tour. This is just one part of the Asadpour asymmetric traveling salesperson algorithm, which expects a modified vector produced from the final result of the relaxation. I still need to convert the output to match the expectations of the overall algorithm I am seeking to implement this summer of code.
I hope to move onto the next step of the Asadpour algorithm on either June 30th or July 1st.
[1] Held, M., Karp, R.M. The traveling-salesman problem and minimum spanning trees. Operations research, 1970-11-01, Vol.18 (6), p.1138-1162. https://www.jstor.org/stable/169411
It has been far longer than I would have preferred since I wrote a blog post. As I expected in my original GSoC proposal, the Held-Karp relaxation is proving to be quite difficult to implement.
My mentors and I agreed that I should implement the branch and bound method discussed in Held and Karp’s 1970 paper The Traveling-Salesman Problem and Minimum Spanning Trees, which first required the implementation of the ascent method because it is used in the branch and bound method. For the last week and a half I have been implementing and debugging the ascent method and wanted to take some time to reflect on what I have learned.
I will start by saying that, as of the writing of this post, my version of the ascent method is not giving what I expect to be the optimal solution. For my testing, I took the graph which Held and Karp use in their example of the branch and bound method, a weighted $\mathcal{K}_6$, and converted it to a directed but symmetric version given in the following adjacency matrix.
$$ \begin{bmatrix} 0 & 97 & 60 & 73 & 17 & 52 \\\ 97 & 0 & 41 & 52 & 90 & 30 \\\ 60 & 41 & 0 & 21 & 35 & 41 \\\ 73 & 52 & 21 & 0 & 95 & 46 \\\ 17 & 90 & 35 & 95 & 0 & 81 \\\ 52 & 30 & 41 & 46 & 81 & 0 \end{bmatrix} $$
The original solution is an undirected tour but in the directed version, the expected solutions depend on which way they are traversed. Both of these cycles have a total weight of 207.
This is the cycle returned by the program, which has a total weight of 246.
All of this code goes into the function _held_karp() within traveling_salesman.py in NetworkX, and I tried to follow the algorithm outlined in the paper as closely as I could.
The _held_karp() function itself has three inner functions, k_pi(), direction_of_ascent(), and find_epsilon(), which represent the main three steps used in each iteration of the ascent method.
k_pi()
k_pi() uses the ArborescenceIterator I implemented during the first week of coding for the Summer of Code to find all of the minimum 1-arborescences in the graph.
My original assessment of creating 1-arborescences was slightly incorrect.
I stated that
In order to connect vertex 1, we would choose the outgoing arc with the smallest cost and the incoming arc with the smallest cost.
In reality, this method would produce graphs which are almost arborescences, based solely on the fact that the outgoing arc would almost certainly create a vertex with two incoming arcs. Instead, we need to connect vertex 1 with the incoming edge of lowest cost and the edge connecting to the root node of the arborescence on nodes $\{2, 3, \dots, n\}$, so that the in-degree constraint is not violated.
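A minimal sketch of that construction using NetworkX (the helper name is my own; `arb` is a spanning arborescence on the nodes other than `v1`, whose root is its unique in-degree-0 node):

```python
import networkx as nx


def to_one_arborescence(G, arb, v1):
    """Extend a spanning arborescence `arb` on G minus `v1` into a
    1-arborescence: add the cheapest incoming edge of v1, plus the
    edge from v1 to the root of `arb`, so that no vertex ends up with
    two incoming arcs."""
    root = next(n for n in arb if arb.in_degree(n) == 0)
    one_arb = arb.copy()
    u = min(G.predecessors(v1), key=lambda w: G[w][v1]["weight"])
    one_arb.add_edge(u, v1, weight=G[u][v1]["weight"])
    one_arb.add_edge(v1, root, weight=G[v1][root]["weight"])
    return one_arb
```

Only the incoming edge of v1 is chosen freely; the outgoing edge is forced to the root, which is what makes the total cost depend on which vertex happens to be the root.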
For the test graph on the first iteration of the ascent method, k_pi() returned 10 1-arborescences, but the costs were not all the same. Notice that, because we have no agency in choosing the outgoing edge of vertex 1, the total cost of the 1-arborescence will vary by the difference between the cheapest root to connect to and the most expensive root to connect to.
My original writing of this function was not very efficient: it created 1-arborescences from all of the minimum spanning arborescences and then iterated over them to delete all of the non-minimum ones. Yesterday I re-wrote this function so that once a 1-arborescence of lower weight is found it deletes all of the current minimum ones in favor of the new one, and does not add any 1-arborescences it finds with greater weight to the set of minimum 1-arborescences.
The real reason that I re-wrote the method was to try something new in hopes of pushing the program from a suboptimal solution to the optimal one. As I mentioned earlier, the forced choice of connecting to the root node created 1-arborescences of different weights. I suspected then that different choices of vertex 1 would be able to create 1-arborescences of even lower weight than just arbitrarily using the one returned by next(G.__iter__()). So I wrapped all of k_pi() with a for loop over the vertices of the graph and found that the choice of vertex 1 made a difference.
Excluded node: 0, Total Weight: 161.0
Chosen incoming edge for node 0: (4, 0), chosen outgoing edge for node 0: (0, 4)
(2, 3, 21)
(2, 5, 41)
(4, 2, 35)
(4, 0, 17)
(5, 1, 30)
(0, 4, 17)
Excluded node: 0, Total Weight: 161.0
Chosen incoming edge for node 0: (4, 0), chosen outgoing edge for node 0: (0, 4)
(1, 5, 30)
(2, 1, 41)
(2, 3, 21)
(4, 2, 35)
(4, 0, 17)
(0, 4, 17)
Excluded node: 1, Total Weight: 174.0
Chosen incoming edge for node 1: (5, 1), chosen outgoing edge for node 1: (1, 5)
(2, 3, 21)
(2, 4, 35)
(4, 0, 17)
(5, 2, 41)
(5, 1, 30)
(1, 5, 30)
Excluded node: 2, Total Weight: 187.0
Chosen incoming edge for node 2: (3, 2), chosen outgoing edge for node 2: (2, 3)
(0, 4, 17)
(3, 5, 46)
(3, 2, 21)
(5, 0, 52)
(5, 1, 30)
(2, 3, 21)
Excluded node: 3, Total Weight: 165.0
Chosen incoming edge for node 3: (2, 3), chosen outgoing edge for node 3: (3, 2)
(1, 5, 30)
(2, 1, 41)
(2, 4, 35)
(2, 3, 21)
(4, 0, 17)
(3, 2, 21)
Excluded node: 3, Total Weight: 165.0
Chosen incoming edge for node 3: (2, 3), chosen outgoing edge for node 3: (3, 2)
(2, 4, 35)
(2, 5, 41)
(2, 3, 21)
(4, 0, 17)
(5, 1, 30)
(3, 2, 21)
Excluded node: 4, Total Weight: 178.0
Chosen incoming edge for node 4: (0, 4), chosen outgoing edge for node 4: (4, 0)
(0, 5, 52)
(0, 4, 17)
(1, 2, 41)
(2, 3, 21)
(5, 1, 30)
(4, 0, 17)
Excluded node: 4, Total Weight: 178.0
Chosen incoming edge for node 4: (0, 4), chosen outgoing edge for node 4: (4, 0)
(0, 5, 52)
(0, 4, 17)
(2, 3, 21)
(5, 1, 30)
(5, 2, 41)
(4, 0, 17)
Excluded node: 5, Total Weight: 174.0
Chosen incoming edge for node 5: (1, 5), chosen outgoing edge for node 5: (5, 1)
(1, 2, 41)
(1, 5, 30)
(2, 3, 21)
(2, 4, 35)
(4, 0, 17)
(5, 1, 30)
Note that because my test graph is symmetric, it likes to make cycles with only two nodes. The weights of these 1-arborescences range from 161 to 187, so I tried to run the test, which had been taking about 300 ms, using the new approach… and the program was non-terminating. I created breakpoints in PyCharm after 200 iterations of the ascent method and found that the program was stuck in a loop where it alternated between two different minimum 1-arborescences. This was a long shot and it did not work out, so I reverted the code to always pick the same vertex for vertex 1.
Either way, the fact that I had almost entirely re-written this function without a change in output suggests that this function is not the source of the problem.
direction_of_ascent()
This was the one function which has pseudocode in the Held and Karp paper:
- Set $d$ equal to the zero $n$-vector.
- Find a 1-tree $T^k$ such that $k \in K(\pi, d)$. [A method of executing Step 2 follows from the results of Section 6 (the greedy algorithm).]
- If $\sum_{i=1}^{i=n} d_i v_{i k} > 0$, STOP.
- $d_i \rightarrow d_i + v_{i k}$, for $i = 2, 3, \dots, n$
- GO TO 2.
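As a concrete reading of these steps, the ascent loop can be sketched in Python (a minimal sketch, not the project's code: `one_tree_degrees` is a hypothetical iterable that yields, for each minimum 1-tree $T^k$ found in step 2, the vector $v_k$ with $v_{i k} = d_{i k} - 2$; for brevity the sketch updates every component of $d$, while the paper holds the first component fixed):

```python
def direction_of_ascent(one_tree_degrees, n):
    """Accumulate d until it becomes a direction of ascent (sketch)."""
    d = [0.0] * n
    for v_k in one_tree_degrees:
        # Step 3: if sum_i d_i * v_ik > 0, STOP; d is a direction of ascent.
        if sum(di * vi for di, vi in zip(d, v_k)) > 0:
            return d
        # Step 4: d_i <- d_i + v_ik, then GO TO step 2.
        d = [di + vi for di, vi in zip(d, v_k)]
    return None  # no direction of ascent found among the supplied 1-trees
```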
Using this as a guide, the implementation of this function was simple until I got to the terminating condition, which is a linear program discussed on page 1149 as
Thus, when failure to terminate is suspected, it is necessary to check whether no direction of ascent exists; by the Minkowski-Farkas lemma this is equivalent to the existence of nonnegative coefficients $\alpha_k$ such that
$$ \sum_{k \in K(\pi)} \alpha_k v_{i k} = 0, \quad i = 1, 2, \dots, n $$
This can be checked by linear programming.
While I was able to implement this without much issue, one very important constraint of the linear program was not mentioned here, but rather the page before during a proof. That constraint is
$$ \sum_{k \in K(\pi)} \alpha_k = 1 $$
After several hours of trying to debug the original linear program, I noticed the missing constraint. Once it was added, the linear program started to behave correctly, terminating the program when a tour is found.
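With that constraint included, the termination test can be sketched with SciPy (an illustrative sketch, not the project's code): stack the $v_{i k}$ columns into a matrix $V$ and ask a solver whether nonnegative $\alpha$ with $V\alpha = 0$ and $\sum_k \alpha_k = 1$ exist.

```python
import numpy as np
from scipy.optimize import linprog

def no_direction_of_ascent(V):
    """Return True when nonnegative alpha exist with V @ alpha = 0 and
    sum(alpha) = 1, i.e. no direction of ascent remains (sketch)."""
    n, K = V.shape
    # Equality constraints: each row of V dotted with alpha is zero,
    # and the alphas sum to one.
    A_eq = np.vstack([V, np.ones(K)])
    b_eq = np.append(np.zeros(n), 1.0)
    # This is a pure feasibility problem, so the objective is all zeros.
    res = linprog(c=np.zeros(K), A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
    return bool(res.success)
```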
find_epsilon()
This function requires a completely different implementation compared to the one described in the Held and Karp paper.
The basic idea in both my implementation for directed graphs and the description for undirected graphs is finding edges which are substitutes for each other, or an edge outside the 1-arborescence which can replace an edge in the arborescence and will result in a 1-arborescence.
The undirected version uses the idea of fundamental cycles in the tree to find the substitutes, and I tried to use this idea as well with the find_cycle() function in the NetworkX library.
I executed the first iteration of the ascent method by hand and noticed that what I computed for all of the possible values of $\epsilon$ and what the program found did not match.
I had found several that it had missed and it found several that I missed.
For the example graph, I found that the following edge pairs are substitutes where the first edge is not in the 1-arborescence and the second one is the one in the 1-arborescence which it can replace using the below minimum 1-arborescence.
$$ \begin{array}{l} (0, 1) \rightarrow (2, 1) \text{ valid: } \epsilon = 56 \\\ (0, 2) \rightarrow (4, 2) \text{ valid: } \epsilon = 25 \\\ (0, 3) \rightarrow (2, 3) \text{ valid: } \epsilon = 52 \\\ (0, 5) \rightarrow (1, 5) \text{ valid: } \epsilon = \frac{30 - 52}{0 - 0} \text{, not valid} \\\ (1, 3) \rightarrow (2, 3) \text{ valid: } \epsilon = 15.5 \\\ (2, 5) \rightarrow (1, 5) \text{ valid: } \epsilon = 5.5 \\\ (3, 1) \rightarrow (2, 1) \text{ valid: } \epsilon = 5.5 \\\ (3, 5) \rightarrow (1, 5) \text{ valid: } \epsilon = \frac{30 - 46}{-1 + 1} \text{, not valid} \\\ (4, 1) \rightarrow (2, 1) \text{ valid: } \epsilon = \frac{41 - 90}{1 - 1} \text{, not valid} \\\ (4, 3) \rightarrow (2, 3) \text{ valid: } \epsilon = \frac{30 - 95}{1 - 1} \text{, not valid} \\\ (4, 5) \rightarrow (1, 5) \text{ valid: } \epsilon = -25.5 \text{, not valid (negative }\epsilon) \\\ (5, 3) \rightarrow (2, 3) \text{ valid: } \epsilon = 25 \\\ \end{array} $$
I missed the following substitutes which the program did find.
$$ \begin{array}{l} (1, 0) \rightarrow (4, 0) \text{ valid: } \epsilon = 80 \\\ (1, 4) \rightarrow (0, 4) \text{ valid: } \epsilon = 73 \\\ (2, 0) \rightarrow (4, 0) \text{ valid: } \epsilon = \frac{17 - 60}{1 - 1} \text{, not valid} \\\ (2, 4) \rightarrow (0, 4) \text{ valid: } \epsilon = -18 \text{, not valid (negative }\epsilon) \\\ (3, 0) \rightarrow (4, 0) \text{ valid: } \epsilon = 28 \\\ (3, 4) \rightarrow (0, 4) \text{ valid: } \epsilon = 78 \\\ (5, 0) \rightarrow (4, 0) \text{ valid: } \epsilon = 35 \\\ (5, 4) \rightarrow (0, 4) \text{ valid: } \epsilon = \frac{17 - 81}{0 - 0} \text{, not valid} \\\ \end{array} $$
Notice that some substitutions do not cross over if we move in the direction of ascent, which are the pairs which have a zero as the denominator. Additionally, $\epsilon$ is a distance, and the concept of a negative distance does not make sense. Interpreting a negative distance as a positive distance in the opposite direction, if we needed to move in that direction, the direction of ascent vector would be pointing the other way.
The reason that my list did not match the list of the program was that find_cycle() did not always return the fundamental cycle containing the new edge.
If I called find_cycle() on a vertex in the other cycle in the graph (in this case ${(0, 4), (4, 0)}$), it would return that cycle rather than the true fundamental cycle.
This prompted me to think about what really determines if edges in a 1-arborescence are substitutes for each other. In every case where a substitute was valid, both of those edges lead to the same vertex. If they did not, then the degree constraint of the arborescence would be violated because we did not replace the edge leading into a node with another edge leading into the same node. This is true regardless of if the edges are part of the same fundamental cycle or not.
Thus, find_epsilon() now takes every edge of the graph that is not in the chosen 1-arborescence $k \in K(\pi, d)$, finds the edge in $k$ pointing to the same vertex, swaps the two, and then checks that the degree constraint is not violated, that the result has the correct number of edges, and that it is still connected.
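The swap test can be sketched with NetworkX (a sketch under my own naming, not the actual iterator code): `substitutes` is a hypothetical helper that, for each arc outside the 1-arborescence `arb`, swaps it with the arc of `arb` entering the same head and keeps the pair when the result is still connected with the right number of edges.

```python
import networkx as nx

def substitutes(G, arb):
    """Return (new_edge, replaced_edge) pairs that keep arb a
    1-arborescence after the swap (sketch)."""
    pairs = []
    in_arb = set(arb.edges())
    for u, v in G.edges():
        if (u, v) in in_arb:
            continue
        # The only arc that (u, v) can replace is the one entering v,
        # otherwise the degree constraint would be violated.
        old = next(iter(arb.in_edges(v)), None)
        if old is None:
            continue
        test = arb.copy()
        test.remove_edge(*old)
        test.add_edge(u, v)
        # In-degrees are preserved by construction; check the edge count
        # and connectivity to confirm the result is still valid.
        if (test.number_of_edges() == arb.number_of_edges()
                and nx.is_weakly_connected(test)):
            pairs.append(((u, v), old))
    return pairs
```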
This is a more efficient method, and it found more valid substitutions as well, so I was hopeful that it would finally bring the returned solution down to the optimal one; perhaps the old method had been missing the correct value of $\epsilon$ on even just one of the iterations.
It did not.
At this point I have no real course forward, just two unappealing options.
I have already checked find_epsilon() by executing the first iteration of the ascent method by hand, which took about 90 minutes.
I could try to continue this process and hope that, while iteration 1 is executing correctly, I find some other bug in the code, but I doubt that I will ever reach the 9 iterations the program needs to find the faulty solution. I will be discussing the next steps with my GSoC mentors soon.
Held, M., Karp, R.M. The traveling-salesman problem and minimum spanning trees. Operations research, 1970-11-01, Vol.18 (6), p.1138-1162. https://www.jstor.org/stable/169411
We are coming to the end of the first week of coding for the Summer of Code, and I have implemented two new, but related, features in NetworkX. In this post, I will discuss how I implemented them, some of the challenges, and how I tested them. Those two new features are a spanning tree iterator and a spanning arborescence iterator.
The arborescence iterator is the feature that I will be using directly in my GSoC project, but I thought that it was a good idea to implement the spanning tree iterator first, as it would be easier and I could refer directly back to the research paper as needed. The partition schemes between the two are the same, so once I figured it out for spanning trees, what I learned there would port directly into the arborescence iterator, where I could focus on modifying Edmonds' algorithm to respect the partition.
This was the first of the new features. It follows the algorithm detailed in a paper by Sörensen and Janssens from 2005 titled An Algorithm to Generate all Spanning Trees of a Graph in Order of Increasing Cost, which can be found here [2].
Now, I needed to tweak the implementation of the algorithm because I wanted to implement a Python iterator, so that somebody can write
for tree in nx.SpanningTreeIterator(G):
pass
and that loop would return spanning trees starting with the ones of minimum cost and climbing to the ones of maximum cost.
In order to implement this feature, my first step was to ensure that once I knew what the edge partition of the graph was, I could find a minimum spanning tree which respected the partition. As a brief reminder, the edge partition creates two disjoint sets of edges, of which one must appear in the resulting spanning tree and one cannot appear in the spanning tree. Edges which are neither included nor excluded from the spanning tree are called open.
The easiest algorithm to implement this with is Kruskal's algorithm. The included edges are all added to the spanning tree first, and then the algorithm can join the components created with the included edges using the open edges.
This was easy to implement in NetworkX. The Kruskal's algorithm in NetworkX is a generator which returns the edges in the minimum spanning tree one at a time using a sorted list of edges. All that I had to do was change the sorting process so that the included edges were always at the front of that list; then the algorithm would always select them for the spanning tree, regardless of weight.
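The sorting tweak can be sketched as follows (an illustrative sketch: `partition` here is a hypothetical edge attribute with True for included and False for excluded; excluded edges are dropped, and included edges are forced to the front regardless of weight):

```python
edges = [
    (0, 1, {"weight": 5, "partition": True}),   # included, heavy
    (1, 2, {"weight": 1}),                      # open
    (2, 3, {"weight": 3, "partition": False}),  # excluded
    (0, 3, {"weight": 2}),                      # open
]
# Drop excluded edges, then sort with included edges first and the
# open edges by increasing weight, as Kruskal's algorithm expects.
candidates = [e for e in edges if e[2].get("partition") is not False]
ordered = sorted(
    candidates,
    key=lambda e: (e[2].get("partition") is not True, e[2]["weight"]),
)
print([(u, v) for u, v, _ in ordered])  # → [(0, 1), (1, 2), (0, 3)]
```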
Additionally, since the general spanning tree of a graph is a partitioned tree where the partition has no included or excluded edges, I was able to convert the normal Kruskal's implementation into a wrapper for my partition-respecting one in order to reduce redundant code.
As for the partitioning process itself, that proved to be a bit more tricky, mostly stemming from my own limited Python experience (I have only been working with Python since the start of the calendar year).
In order to implement the partitioning scheme I needed an ordered data structure, and I chose the PriorityQueue class.
This was convenient, but for elements with the same spanning-tree weight it tried to compare the dictionaries holding the edge data, which is not a supported operation.
Thus, I implemented a dataclass in which only the weight of the spanning tree is comparable.
This means that for ties in spanning tree weight, the oldest partition with that weight is considered first.
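The dataclass can be sketched like this (field names are illustrative, not the ones in my implementation): `field(compare=False)` keeps the edge-partition dictionary out of the generated comparison methods, so PriorityQueue only ever compares weights.

```python
from dataclasses import dataclass, field
from queue import PriorityQueue

@dataclass(order=True)
class PartitionEntry:
    # Only mst_weight participates in <, so ties never fall through
    # to comparing the (unorderable) partition dictionaries.
    mst_weight: float
    partition: dict = field(compare=False)

q = PriorityQueue()
q.put(PartitionEntry(23.0, {"included": [(0, 1)], "excluded": []}))
q.put(PartitionEntry(17.0, {"included": [], "excluded": []}))
q.put(PartitionEntry(17.0, {"included": [], "excluded": [(2, 3)]}))
first = q.get()
print(first.mst_weight)  # → 17.0
```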
Once the implementation details were ironed out, I moved on to testing.
At the time of this writing, I have tested the SpanningTreeIterator on the sample graph in the Sörensen and Janssens paper.
That graph is
It has eight spanning trees, ranging in weight from 17 to 23, which are all shown below.
Since this graph only has a few spanning trees, it was easy to explicitly test that each graph returned from the iterator was the next one in the sequence. The iterator also works backwards, so calling
for tree in nx.SpanningTreeIterator(G, minimum=False):
pass
starts with the maximum spanning tree and works down to the minimum spanning tree.
The code for the spanning tree iterator can be found here starting around line 761.
The arborescence iterator is what I actually need for my GSoC project, and as expected was more complicated to implement.
In my original post titled Finding All Minimum Arborescences, I discussed cases that Edmonds' algorithm [1] would need to handle and proposed a change to the desired_edge method.
These changes were easy to make, but they were not the full extent of the changes needed, as I had originally thought. The original graph from Edmonds' 1967 paper is below
In my first test, which was limited to the minimum spanning arborescence of a random partition I created, the results were close. Below, the blue edges are included and the red one is excluded.
The minimum spanning arborescence initially is shown below.
While the $(3, 0)$ edge is properly excluded and the $(2, 3)$ edge is included, the $(6, 2)$ edge is not present in the arborescence (shown as a dashed edge). Tracking this problem down was a hassle. The way that Edmonds' algorithm works is that a cycle, which would have been present if the $(6, 2)$ edge were included, is collapsed into a single vertex as the algorithm moves to the next iteration. Once that cycle is collapsed into a vertex, the algorithm still has to choose how to access that vertex, and the choice is based on the best edge as before (this is step I1 in [1]). Then, when the algorithm expands the cycle back out, it removes one edge of the cycle, which in this case would be $(6, 2)$, shown in red in the next image. Represented visually, the cycle with incoming edges would look like
And that would be collapsed into a new vertex, $N$ from which the incoming edge with weight 12 would be selected.
In this example we want to forbid the algorithm from picking the edge with weight 12, so that when the cycle is reconstructed the included edge $(6, 2)$ is still present. Once we make one of the incoming edges an included edge, we know from the definition of an arborescence that we cannot get to that vertex by any other edge. They are all effectively excluded, so once we find an included edge directed towards a vertex we can make all of the other incoming edges excluded.
Returning to the example, the collapsed vertex $N$ would have the edge of weight 12 excluded and would pick the edge of weight 13.
At this point the iterator would find 236 arborescences with cost ranging from 96 to 125. I thought that I was very close to being finished and I knew that the cost of the minimum spanning arborescence was 96, until I checked to see what the weight of the maximum spanning arborescence was: 131.
This means that I was removing partitions which contained a valid arborescence before they were added to the priority queue.
My check_partition method within the ArborescenceIterator would examine each new partition and return False for those it judged to have no valid arborescence. Rather than try to debug what I thought was a good method, I decided to change my process.
I moved the last bullet point into the write_partition method and then stopped using the check_partition method.
If an edge partition does not have a spanning arborescence, the partition_spanning_arborescence function will return None and I discard the partition.
This approach is more computationally intensive, but it increased the number of returned spanning arborescences from 236 to 680, and the range expanded to the proper 96 to 131.
But how do I know that it isn’t skipping arborescences within that range? Since 680 arborescences is too many to explicitly check, I decided to write another test case. This one would check that the number of arborescences was correct and that the sequence never decreases.
In order to check the number of arborescences, I decided to take a brute force approach. There are
$$ \binom{18}{8} = 43,758 $$
possible combinations of edges which could be arborescences. That's a lot of combinations, more than I wanted to check by hand, so I wrote a short Python script.
from itertools import combinations

import networkx as nx

edgelist = [
    (0, 2),
    (0, 4),
    (1, 0),
    (1, 5),
    (2, 1),
    (2, 3),
    (2, 5),
    (3, 0),
    (3, 4),
    (3, 6),
    (4, 7),
    (5, 6),
    (5, 8),
    (6, 2),
    (6, 8),
    (7, 3),
    (7, 6),
    (8, 7),
]

combo_count = 0
arbor_count = 0
for combo in combinations(edgelist, 8):
    combo_count += 1
    combo_test = nx.DiGraph()
    combo_test.add_edges_from(combo)
    if nx.is_arborescence(combo_test):
        arbor_count += 1

print(
    f"There are {combo_count} possible combinations of eight edges which "
    f"could be an arborescence."
)
print(f"Of those {combo_count} combinations, {arbor_count} are arborescences.")

The output of this script is

There are 43758 possible combinations of eight edges which could be an arborescence.
Of those 43758 combinations, 680 are arborescences.
So now I know how many arborescences were in the graph, and it matched the number returned from the iterator. Thus, I believe that the iterator is working well.
The iterator code is here and starts around line 783. It can be used in the same way as the spanning tree iterator.
Attached is a sample output from the iterator detailing all 680 arborescences of the test graph. Since Jekyll will not let me put up the txt file, I had to convert it into a PDF, which is 127 pages long, to show the 6800 lines of output from displaying all of the arborescences.
[1] J. Edmonds, Optimum Branchings, Journal of Research of the National Bureau of Standards, 1967, Vol. 71B, p.233-240, https://archive.org/details/jresv71Bn4p233
[2] G.K. Janssens, K. Sörensen, An algorithm to generate all spanning trees in order of increasing cost, Pesquisa Operacional, 2005-08, Vol. 25 (2), p. 219-229, https://www.scielo.br/j/pope/a/XHswBwRwJyrfL88dmMwYNWp/?lang=en
There is only one thing that I need to figure out before the first coding period for GSoC starts on Monday: how to find all of the minimum arborescences of a graph. This is the set $K(\pi)$ in the Held and Karp paper from 1970 which can be refined down to $K(\pi, d)$ or $K_{X, Y}(\pi)$ as needed. For more information as to why I need to do this, please see my last post here.
This is a place where my contributions to NetworkX to implement the Asadpour algorithm [1] for the directed traveling salesman problem will be useful to the rest of the NetworkX community (I hope). The research paper that I am going to template this off of is this 2005 paper by Sörensen and Janssens titled An Algorithm to Generate all Spanning Trees of a Graph in Order of Increasing Cost [4].
The basic idea here is to implement their algorithm and then generate spanning trees until we find the first one with a cost that is greater than the first one generated, which we know is a minimum, so that we have found all of the minimum spanning trees. I know what you guys are saying, “Matt, this paper discusses spanning trees, not spanning arborescences, how is this helpful?”. Well, the heart of this algorithm is to partition the edges of the graph into excluded edges, which cannot appear in the tree, included edges, which must appear in the tree, and open edges, which can be but are not required to be in the tree. Once we have a partition, we need to be able to find a minimum spanning tree or minimum spanning arborescence that respects the partitioned edges.
In NetworkX, the minimum spanning arborescences are generated using Chu-Liu/Edmonds’ Algorithm developed by Yoeng-Jin Chu and Tseng-Hong Liu in 1965 and independently by Jack Edmonds in 1967. I believe that Edmonds’ Algorithm [2] can be modified to require an arc to be either included or excluded from the resulting spanning arborescence, thus allowing me to implement Sörensen and Janssens’ algorithm for directed graphs.
First, let’s explore whether the partition scheme discussed in the Sörensen and Janssens paper [4] will work for a directed graph. The critical ideas for creating the partitions are given on pages 221 and 222 and are as follows:
Given an MST of a partition, this partition can be split into a set of resulting partitions in such a way that the following statements hold:
- the intersection of any two resulting partitions is the empty set,
- the MST of the original partition is not an element of any of the resulting partitions,
- the union of the resulting partitions is equal to the original partition, minus the MST of the original partition.
In order to achieve these conditions, they define the generation of the partitions using this definition for a minimum spanning tree
$$ s(P) = {(i_1, j_1), \dots, (i_r, j_r), (t_1, v_1), \dots, (t_{n-r-1}, v_{n-r-1})} $$
where the $(i, j)$ edges are the included edges of the original partition and the $(t, v)$ are from the open edges of the original partition. Now, to create the next set of partitions, take each of the $(t, v)$ edges sequentially and introduce them one at a time, make that edge an excluded edge in the first partition it appears in and an included edge in all subsequent partitions. This will produce something to the effects of
$$ \begin{array}{l} P_1 = {(i_1, j_1), \dots, (i_r, j_r), (\overline{m_1, p_1}), \dots, (\overline{m_l, p_l}), (\overline{t_1, v_1})} \\\ P_2 = {(i_1, j_1), \dots, (i_r, j_r), (t_1, v_1), (\overline{m_1, p_1}), \dots, (\overline{m_l, p_l}), (\overline{t_2, v_2})} \\\ P_3 = {(i_1, j_1), \dots, (i_r, j_r), (t_1, v_1), (t_2, v_2), (\overline{m_1, p_1}), \dots, (\overline{m_l, p_l}), (\overline{t_3, v_3})} \\\ \vdots \\\ \begin{multline*} P_{n-r-1} = {(i_1, j_1), \dots, (i_r, j_r), (t_1, v_1), \dots, (t_{n-r-2}, v_{n-r-2}), (\overline{m_1, p_1}), \dots, (\overline{m_l, p_l}), \\\ (\overline{t_{n-r-1}, v_{n-r-1}})} \end{multline*} \\\ \end{array} $$
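In code, the splitting rule reads naturally as a loop over the open edges used by the minimum spanning tree (a sketch with hypothetical names, where each partition is represented by a pair of included/excluded edge sets):

```python
def partition_children(included, excluded, tree_open_edges):
    """Split a partition P along the open edges (t_1, v_1), ...,
    (t_m, v_m) of its MST: edge t_i is excluded in child P_i and
    included in all later children (sketch)."""
    children = []
    for i, edge in enumerate(tree_open_edges):
        # Earlier open edges become included, this one becomes excluded;
        # the parent's included/excluded sets carry into every child.
        new_included = included | set(tree_open_edges[:i])
        new_excluded = excluded | {edge}
        children.append((new_included, new_excluded))
    return children
```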
Now, if we extend this to a directed graph, our included and excluded edges become included and excluded arcs, but the definition of the spanning arborescence of a partition does not change. Let $s_a(P)$ be the minimum spanning arborescence of a partition $P$. Then
$$ s_a(P) = {(i_1, j_1), \dots, (i_r, j_r), (t_1, v_1), \dots, (t_{n-r-1}, v_{n-r-1})} $$
$s_a(P)$ is still constructed from all of the included arcs of the partition and a subset of the open arcs of that partition. If we partition in the same manner as the Sörensen and Janssens paper [4], then there cannot be a spanning arborescence which both includes and excludes a given arc, and such a conflict exists for every pair of resulting partitions, so their intersections are empty.
Clearly the original arborescence, which includes all of the $(t_1, v_1), \dots, (t_{n-r-1}, v_{n-r-1})$ cannot be an element of any of the resulting partitions.
Finally, there is the claim that the union of the resulting partitions is the original partition minus the original minimum spanning tree. Being honest here, this claim took a while for me to understand. In fact, I had a whole paragraph talking about how this claim doesn’t make sense before all of a sudden I realized that it does. The important thing to remember here is that the union of all of the partitions isn’t the union of the sets of included and excluded edges (which is where I went wrong the first time), it is a subset of spanning trees. The original partition contains many spanning trees, one or more of which are minimum, but each tree in the partition is a unique subset of the edges of the original graph. Now, because each of the resulting partitions cannot include one of the edges of the original partition’s minimum spanning tree we know that the original minimum spanning tree is not an element of the union of the resulting partitions. However, because every other spanning tree in the original partition which was not the selected minimum one is different by at least one edge it is a member of at least one of the resulting partitions, specifically the one where that one edge of the selected minimum spanning tree which it does not contain is the excluded edge.
So now we know that this same partition scheme which works for undirected graphs will work for directed ones. We need to modify Edmonds' algorithm to mandate that certain arcs be included and others excluded. To start, a review of this algorithm is in order. The original description of the algorithm is given on pages 234 and 235 of Jack Edmonds' 1967 paper Optimum Branchings [2] and, roughly speaking, it has three major steps.
- For each vertex other than the root, select the incoming arc of minimum weight.
- If the selected arcs contain no cycle, they form a minimum spanning arborescence and the algorithm terminates.
- Otherwise, collapse each cycle into a single vertex, adjust the arc weights, and repeat; when a collapsed cycle is expanded again, one arc of the cycle is removed.
Now that we are familiar with the minimum arborescence algorithm, we can discuss modifying it to force it to include certain edges or reject others. The changes will be primarily located in step 1. Under the normal operation of the algorithm, the consideration which happens at each vertex might look like this.
Where the bolded arrow is chosen by the algorithm as it is the incoming arc with minimum weight. Now, if we were required to include a different edge, say the weight 6 arc, we would want this behavior even though it is strictly speaking not optimal. In a similar case, if the arc of weight 2 was excluded we would also want to pick the arc of weight 6. Below the excluded arc is a dashed line.
But realistically, these are routine cases that would not be difficult to implement. A more interesting case would be if all of the arcs were excluded or if more than one were included.
Under this case, there is no spanning arborescence for the partition because the graph is not connected. The Sörensen and Janssens paper characterizes these as empty partitions, and they are ignored.
In this case, things start to get a bit tricky. With two (or more) included arcs leading to this vertex, it is by definition not an arborescence, as according to Edmonds on page 233
A branching is a forest whose edges are directed so that each is directed toward a different node. An arborescence is a connected branching.
At first I thought that this case might be valid because it could result in the creation of a cycle, but I realize now that in step 3 of Edmonds' algorithm one of those arcs would be removed anyway. Thus, any partition with multiple included arcs leading to a single vertex is empty by definition. While there are ways in which the algorithm can handle the inclusion of multiple arcs, one (or more) of them will, by the definition of an arborescence, be deleted by the end of the algorithm.
I propose that these partitions are screened out before we hand off to Edmonds’ algorithm to find the arborescences.
As such, Edmonds’ algorithm will need to be modified for the cases of at most one included edge per vertex and any number of excluded edges per vertex.
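The screen itself can be a one-liner (a sketch; the real iterator may organize this differently): count how many included arcs enter each vertex and reject the partition when any vertex has more than one.

```python
from collections import Counter

def partition_is_empty(included_arcs):
    """True when two or more included arcs point at the same head
    vertex, so no spanning arborescence can respect the partition.
    `included_arcs` is an iterable of (tail, head) pairs."""
    heads = Counter(head for _, head in included_arcs)
    return any(count > 1 for count in heads.values())
```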
The critical part of altering Edmonds' Algorithm is contained within the desired_edge function in the NetworkX implementation, starting on line 391 in algorithms.tree.branchings. The whole function is as follows.
def desired_edge(v):
    """
    Find the edge directed toward v with maximal weight.
    """
    edge = None
    weight = -INF
    for u, _, key, data in G.in_edges(v, data=True, keys=True):
        new_weight = data[attr]
        if new_weight > weight:
            weight = new_weight
            edge = (u, v, key, new_weight)
    return edge, weight
The function would be changed to automatically return an included arc and to skip over any excluded arcs.
Because this is an inner function, we can access parameters passed to the parent function, such as something along the lines of partition=None, where the value of partition is the edge attribute detailing true if the arc is included and false if it is excluded.
Open edges would not need this attribute, or could use None.
Creating an enum is also possible, which would unify the language; I will talk to my GSoC mentors about how it would fit into the NetworkX ecosystem.
A revised version of desired_edge using the true and false scheme would then look like this:
def desired_edge(v):
    """
    Find the edge directed toward v with maximal weight, preferring
    included arcs and skipping excluded ones.
    """
    edge = None
    weight = -INF
    for u, _, key, data in G.in_edges(v, data=True, keys=True):
        new_weight = data[attr]
        # An included arc is returned immediately, regardless of weight.
        if data.get(partition) is True:
            return (u, v, key, new_weight), new_weight
        # Excluded arcs are skipped; open arcs compete on weight.
        if new_weight > weight and data.get(partition) is not False:
            weight = new_weight
            edge = (u, v, key, new_weight)
    return edge, weight
And a version using the enum might look like
def desired_edge(v):
    """
    Find the edge directed toward v with maximal weight, preferring
    included arcs and skipping excluded ones.
    """
    edge = None
    weight = -INF
    for u, _, key, data in G.in_edges(v, data=True, keys=True):
        new_weight = data[attr]
        # An included arc is returned immediately, regardless of weight.
        if data.get(partition) is Partition.INCLUDED:
            return (u, v, key, new_weight), new_weight
        # Excluded arcs are skipped; open arcs compete on weight.
        if new_weight > weight and data.get(partition) is not Partition.EXCLUDED:
            weight = new_weight
            edge = (u, v, key, new_weight)
    return edge, weight
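The Partition enum itself is not defined in this excerpt; one possible definition consistent with its use above would be (the OPEN member is my own addition, standing in for a missing attribute):

```python
from enum import Enum

class Partition(Enum):
    OPEN = 0      # the arc may or may not appear in the arborescence
    INCLUDED = 1  # the arc must appear
    EXCLUDED = 2  # the arc must not appear
```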
Once Edmonds’ algorithm has been modified to be able to use partitions, the pseudocode from the Sörensen and Janssens paper would be applicable.
Input: Graph G(V, E) and weight function w
Output: Output_File (all spanning trees of G, sorted in order of increasing cost)

List = {A}
Calculate_MST(A)
while MST ≠ ∅ do
    Get partition Ps in List that contains the smallest spanning tree
    Write MST of Ps to Output_File
    Remove Ps from List
    Partition(Ps)
And the corresponding Partition function being

P1 = P2 = P
for each edge i in P do
    if i not included in P and not excluded from P then
        make i excluded from P1
        make i included in P2
        Calculate_MST(P1)
        if Connected(P1) then
            add P1 to List
        P1 = P2
I would need to change the format of the first code block, as I would like it to be a Python iterator so that a for loop would be able to iterate through all of the spanning arborescences and then stop once the cost increases, in order to limit it to only minimum spanning arborescences.
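The stopping rule can be sketched generically (a sketch, assuming a hypothetical source of (cost, arborescence) pairs already sorted by increasing cost):

```python
from itertools import takewhile

def minimum_cost_members(pairs_by_cost):
    """Keep only the members sharing the first (minimum) cost."""
    pairs = iter(pairs_by_cost)
    first_cost, first_member = next(pairs)
    # takewhile stops at the first pair whose cost exceeds the minimum.
    rest = takewhile(lambda pair: pair[0] == first_cost, pairs)
    return [first_member] + [member for _, member in rest]
```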
[1] A. Asadpour, M. X. Goemans, A. Madry, S. Oveis Gharan, and A. Saberi, An O(log n / log log n)-approximation algorithm for the asymmetric traveling salesman problem, Operations Research, 65 (2017), p. 1043-1061, https://homes.cs.washington.edu/~shayan/atsp.pdf.
[2] J. Edmonds, Optimum Branchings, Journal of Research of the National Bureau of Standards, 1967, Vol. 71B, p.233-240, https://archive.org/details/jresv71Bn4p233
[3] M. Held, R.M. Karp, The traveling-salesman problem and minimum spanning trees, Operations research, 1970-11-01, Vol.18 (6), p.1138-1162, https://www.jstor.org/stable/169411
[4] G.K. Janssens, K. Sörensen, An algorithm to generate all spanning trees in order of increasing cost, Pesquisa Operacional, 2005-08, Vol. 25 (2), p. 219-229, https://www.scielo.br/j/pope/a/XHswBwRwJyrfL88dmMwYNWp/?lang=en
After talking with my GSoC mentors about what we all believe to be the most difficult part of the Asadpour algorithm, the Held-Karp relaxation, we came to several conclusions:
Thus, alternative methods for solving the Held-Karp relaxation needed to be investigated. To this end, we turned to the original 1970 paper by Held and Karp, The Traveling Salesman Problem and Minimum Spanning Trees to see how they proposed solving the relaxation (Note that this paper was published before the ellipsoid algorithm was applied to linear programming in 1979). The Held and Karp paper discusses three methods for solving the relaxation:
But before we explore the methods that Held and Karp discuss, we need to ensure that these methods still apply to solving the Held-Karp relaxation within the context of the Asadpour paper. The definition of the Held-Karp relaxation that I have been using on this blog comes from the Asadpour paper, section 3 and is listed below.
$$ \begin{array}{c l l} \text{min} & \sum_{a} c(a)x_a \\\ \text{s.t.} & x(\delta^+(U)) \geqslant 1 & \forall\ U \subset V \text{ and } U \not= \emptyset \\\ & x(\delta^+(v)) = x(\delta^-(v)) = 1 & \forall\ v \in V \\\ & x_a \geqslant 0 & \forall\ a \end{array} $$
The closest match to this program in the Held Karp paper is their linear program 3, which is a linear programming representation of the entire traveling salesman problem, not solely the relaxed version. Note that Held and Karp were dealing with the symmetric TSP (STSP) while Asadpour is addressing the asymmetric or directed TSP (ATSP).
$$ \begin{array}{c l l} \text{min} & \sum_{1 \leq i < j \leq n} c_{i j}x_{i j} \\\ \text{s.t.} & \sum_{j > i} x_{i j} + \sum_{j < i} x_{j i} = 2 & (i = 1, 2, \dots, n) \\\ & \sum_{i \in S\\\ j \in S\\\ i < j} x_{i j} \leq |S| - 1 & \text{for any proper subset } S \subset {2, 3, \dots, n} \\\ & 0 \leq x_{i j} \leq 1 & (1 \leq i < j \leq n) \\\ & x_{i j} \text{integer} \\\ \end{array} $$
The last two constraints of the second linear program are correctly bounded and fit within the scope of the original problem, while the first two constraints do most of the work in finding a TSP tour. Additionally, relaxing the last two constraints to $x_{i j} \geq 0$ gives the Held-Karp relaxation. The first constraint, $\sum_{j > i} x_{i j} + \sum_{j < i} x_{j i} = 2$, ensures that for every vertex in the resulting tour there is one edge to arrive by and one edge to leave by. This matches the second constraint in the Asadpour ATSP relaxation. The second constraint in the Held-Karp formulation is another form of the subtour elimination constraint seen in the Asadpour linear program.
Held and Karp also state that
In this section, we show that minimizing the gap $f(\pi)$ is equivalent to solving this program without the integer constraints.
on page 1141, so it would appear that solving one of the equivalent programs that Held and Karp formulate should work here.
The Column Generation technique seeks to solve linear program 2 from the Held and Karp paper, stated as
$$ \begin{array}{c l} \text{min} & \sum_{k} c_ky_k \\\ \text{s.t.} & y_k \geq 0 \\\ & \sum_k y_k = 1 \\\ & \sum_{k} (-v_{i k})y_k = 0 \quad (i = 2, \dots, n - 1) \\\ \end{array} $$
Where $v_{i k}$ is the degree of vertex $i$ in 1-Tree $k$ minus two, or $v_{i k} = d_{i k} - 2$ and each variable $y_k$ corresponds to a 1-Tree $T^k$. The associated cost $c_k$ for each tree is the weight of $T^k$.
The rest of this method uses a simplex algorithm to solve the linear program. We only focus on the edges which are in each of the 1-Trees, giving each column the form
$$ \begin{bmatrix} 1 & -v_{2k} & -v_{3k} & \dots & -v_{n-1,k} \end{bmatrix}^T $$
and the column which enters the basis corresponds to the 1-Tree for which $c_k + \theta + \sum_{j=2}^{n-1} \pi_jv_{j k}$ is a minimum, where $\theta$ and $\pi_j$ come from the vector of ‘shadow prices’ given by $(\theta, \pi_2, \pi_3, \dots, \pi_{n-1})$. Now the basis is $(n - 1) \times (n - 1)$ and we can find the 1-Tree to add to the basis using a minimum 1-Tree algorithm, which Held and Karp say can be done in $O(n^2)$ steps.
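As a sketch of how such a column could be assembled from a 1-Tree's vertex degrees (the helper name `one_tree_column` is my own, not from the paper):

```python
import networkx as nx
import numpy as np

def one_tree_column(T, n):
    """Build the simplex column [1, -v_{2,k}, ..., -v_{n-1,k}]^T for a
    1-Tree T on vertices 1..n, where v_{i,k} = deg_T(i) - 2."""
    return np.array([1.0] + [-(T.degree(i) - 2) for i in range(2, n)])

# Example: the cycle 1-2-3-4-1 is itself a 1-Tree in which every vertex
# has degree two, so every v_{i,k} entry is zero.
T = nx.cycle_graph([1, 2, 3, 4])
print(one_tree_column(T, 4))
```

The leading 1 corresponds to the $\sum_k y_k = 1$ constraint, and the remaining entries to the $\sum_k (-v_{i k})y_k = 0$ constraints.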
I am already familiar with the simplex method, so I will not detail its implementation here.
This technique is slow to converge. Held and Karp programmed it on an IBM/360 and were able to consistently solve problems up to $n = 12$. On a modern computer, the clock rate is somewhere between 210 and 101,500 times faster (depending on the model of IBM/360 used), so we expect better performance, but cannot say at this time how much of an improvement.
They also talk about a heuristic procedure in which a vertex is eliminated from the program whenever the choice of its adjacent vertices was ’evident’. Technical details for the heuristic were essentially non-existent, but
The procedure showed promise on examples up to $n = 48$, but was not explored systematically
This paper from Held and Karp is about minimizing $f(\pi)$ where $f(\pi)$ is the gap between the permuted 1-Trees and a TSP tour. One way to do this is to maximize the dual of $f(\pi)$ which is written as $\text{max}_{\pi}\ w(\pi)$ where
$$ w(\pi) = \text{min}_k\ (c_k + \sum_{i=1}^{i=n} \pi_iv_{i k}) $$
This method uses the set of indices of 1-Trees that are of minimum weight with respect to the weights $\overline{c}_{i j} = c_{i j} + \pi_i + \pi_j$.
$$ K(\pi) = \{k\ |\ w(\pi) = c_k + \sum_{i=1}^{i=n} \pi_i v_{i k}\} $$
If $\pi$ is not a maximum point of $w$, then there will be a vector $d$ called the direction of ascent at $\pi$. This is theorem 3 and a proof is given on page 1148. Let the functions $\Delta(\pi, d)$ and $K(\pi, d)$ be defined as below.
$$ \Delta(\pi, d) = \text{min}_{k \in K(\pi)}\ \sum_{i=1}^{i=n} d_iv_{i k} \\\ K(\pi, d) = \{k\ |\ k \in K(\pi) \text{ and } \sum_{i=1}^{i=n} d_iv_{i k} = \Delta(\pi, d)\} $$
Now for a sufficiently small $\epsilon$, $K(\pi + \epsilon d) = K(\pi, d)$ and $w(\pi + \epsilon d) = w(\pi) + \epsilon \Delta(\pi, d)$, or the value of $w(\pi)$ increases and the growth rate of the minimum 1-Trees is at its smallest so we maintain the low weight 1-Trees and progress farther towards the optimal value. Finally, let $\epsilon(\pi, d)$ be the following quantity
$$ \epsilon(\pi, d) = \text{max}\ \{\epsilon\ |\ \text{for } \epsilon' < \epsilon,\ K(\pi + \epsilon' d) = K(\pi, d)\} $$
So in other words, $\epsilon(\pi, d)$ is the maximum distance in the direction of $d$ that we can travel to maintain the desired behavior.
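The quantities $\Delta(\pi, d)$ and $K(\pi, d)$ are straightforward to compute once the $v_{i k}$ values of the trees in $K(\pi)$ are collected into a matrix. A sketch with invented data and my own helper name:

```python
import numpy as np

# Rows of v are vertices, columns are the minimum 1-Trees in K(pi),
# with v[i, k] = deg(i) in tree k minus 2.
def delta_and_K(v, d):
    rates = d @ v                      # sum_i d_i * v_{i,k} for each tree k
    delta = rates.min()
    K_pi_d = [k for k, r in enumerate(rates) if np.isclose(r, delta)]
    return delta, K_pi_d

v = np.array([[1, -1, 0],
              [-1, 1, 0],
              [0, 0, 0]])              # three hypothetical 1-Trees
d = np.array([1.0, 0.0, 0.0])
delta, K = delta_and_K(v, d)
print(delta, K)                        # the minimum growth rate and its trees
```

Here the second tree grows slowest in the direction $d$, so $K(\pi, d)$ keeps only that tree.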
If we can find $d$ and $\epsilon$ then we can set $\pi = \pi + \epsilon d$ and move to the next iteration of the ascent method. Held and Karp did give a protocol for finding $d$ on page 1149.
There are two things which must be refined about this procedure in order to make it implementable in Python.
Held and Karp have provided guidance on both of these points.
In section 6 on matroids, we are told to use a method developed by Dijkstra in A Note on Two Problems in Connexion with Graphs, but in this particular case that is not the most helpful.
I have found this document, but there is a function called minimum_spanning_arborescence
already within NetworkX which we can use to create a minimum 1-Arborescence.
That process would be to find a minimum spanning arborescence on only the vertices in $\{2, 3, \dots, n\}$ and then connect vertex 1 to create the cycle.
In order to connect vertex 1, we would choose the outgoing arc with the smallest cost and the incoming arc with the smallest cost.
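A sketch of that process using the existing NetworkX function (the helper name and the details of connecting vertex 1 are my own reading of the steps above):

```python
import networkx as nx

def minimum_one_arborescence(G, root=1, weight="weight"):
    """Find a minimum spanning arborescence on G minus `root`, then close
    the cycle through `root` with its cheapest outgoing and incoming arcs."""
    H = G.subgraph(n for n in G if n != root).copy()
    B = nx.minimum_spanning_arborescence(H, attr=weight)
    out_arc = min(G.out_edges(root, data=weight), key=lambda e: e[2])
    in_arc = min(G.in_edges(root, data=weight), key=lambda e: e[2])
    B.add_edge(out_arc[0], out_arc[1], **{weight: out_arc[2]})
    B.add_edge(in_arc[0], in_arc[1], **{weight: in_arc[2]})
    return B

# Example on a complete digraph with unit weights: n - 2 arborescence arcs
# plus the two arcs through vertex 1 give n arcs total.
G = nx.DiGraph()
for u in range(1, 5):
    for v in range(1, 5):
        if u != v:
            G.add_edge(u, v, weight=1)
B = minimum_one_arborescence(G)
print(B.number_of_edges())
```
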
Finally, at the maximum value of $w(\pi)$, there is no direction of ascent and the procedure outlined by Held and Karp will not terminate. Their article states on page 1149 that
Thus, when failure to terminate is suspected, it is necessary to check whether no direction of ascent exists; by the Minkowski-Farkas lemma this is equivalent to the existence of nonnegative coefficients $\alpha_k$ such that
$ \sum_{k \in K(\pi)} \alpha_kv_{i k} = 0, \quad i = 1, 2, \dots, n $
This can be checked by linear programming.
While it is nice that they gave that summation, the rest of the linear program would have been useful too. The entire linear program would be written as follows
$$ \begin{array}{c l l} \text{max} & \sum_k \alpha_k \\\ \text{s.t.} & \sum_{k \in K(\pi)} \alpha_k v_{i k} = 0 & \forall\ i \in \{1, 2, \dots, n\} \\\ & \alpha_k \geq 0 & \forall\ k \\\ \end{array} $$
This linear program is not in standard form, but it is not difficult to convert it. First, change the maximization to a minimization by minimizing the negative.
$$ \begin{array}{c l l} \text{min} & \sum_k -\alpha_k \\\ \text{s.t.} & \sum_{k \in K(\pi)} \alpha_k v_{i k} = 0 & \forall\ i \in \{1, 2, \dots, n\} \\\ & \alpha_k \geq 0 & \forall\ k \\\ \end{array} $$
While the constraint does not look like standard form at first glance, a closer look reveals that it is. Each column in the matrix form will be for one entry of $\alpha_k$, and each row will represent a different value of $i$, or a different vertex. The one constraint is actually a collection of very similar ones, which could be written as
$$ \begin{array}{c l} \text{min} & \sum_k -\alpha_k \\\ \text{s.t.} & \sum_{k \in K(\pi)} \alpha_k v_{1 k} = 0 \\\ & \sum_{k \in K(\pi)} \alpha_k v_{2 k} = 0 \\\ & \vdots \\\ & \sum_{k \in K(\pi)} \alpha_k v_{n k} = 0 \\\ & \alpha_k \geq 0 & \forall\ k \\\ \end{array} $$
Because all of the summations must equal zero, no slack and surplus variables are required, so the constraint matrix for this program is $n \times k$.
The $n$ obviously has a linear growth rate, but I'm not sure how large to expect $k$ to become.
$k$ indexes the set of minimum 1-Trees, so I believe that it will remain manageable.
This linear program can be solved using the built in linprog
function in the SciPy library.
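A sketch of that check with my own helper name: because the program as written can be scaled, it is unbounded exactly when a nonzero nonnegative $\alpha$ exists, so an "unbounded" status from `linprog` can be read as the signal that no direction of ascent exists.

```python
import numpy as np
from scipy.optimize import linprog

def no_ascent_direction(v):
    """Columns of v are the v_{i,k} vectors for each minimum 1-Tree in K(pi).
    Returns True when a nonzero nonnegative alpha with v @ alpha = 0 exists,
    which by the Minkowski-Farkas lemma means no direction of ascent."""
    n, num_trees = v.shape
    res = linprog(
        c=-np.ones(num_trees),       # maximize sum(alpha) as min of negative
        A_eq=v,                      # sum_k alpha_k v_{i,k} = 0 for every i
        b_eq=np.zeros(n),
        bounds=(0, None),
    )
    return res.status == 3           # status 3 = unbounded

# Two hypothetical 1-Trees whose v-vectors cancel: alpha = (t, t) works
# for every t > 0, so the program is unbounded.
v = np.array([[1, -1],
              [-1, 1],
              [0, 0]])
print(no_ascent_direction(v))
```
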
As an implementation note, I would probably start by checking the terminating condition every iteration, but eventually we can find a number of iterations to execute before starting to check the terminating condition, to save computational power.
One possible difficulty with the terminating condition is that we need to run the linear program with data from every minimum 1-Tree or 1-Arborescence, which means that we need to be able to generate all of the minimum 1-Trees. There does not seem to be an easy way to do this within NetworkX at the moment. Looking through the tree algorithms here, they seem exclusively focused on finding one minimum branching of the required type and not all of those branchings.
Now we have to find $\epsilon$. Theorem 4 on page 1150 states that
Let $k$ be any element of $K(\pi, d)$, where $d$ is a direction of ascent at $\pi$. Then $\epsilon(\pi, d) = \text{min}\ \{\epsilon\ |\ \text{for some pair } (e, e'),\ e' \text{ is a substitute for } e \text{ in } T^k \\\ \text{ and } e \text{ and } e' \text{ cross over at } \epsilon\}$
The first step then is to determine if $e$ and $e'$ are substitutes. $e'$ is a substitute for $e$ if, for a 1-Tree $T^k$, $(T^k - \{e\}) \cup \{e'\}$ is also a 1-Tree. The edges $e = \{r, s\}$ and $e' = \{i, j\}$ cross over at $\epsilon$ if the pairs $(\overline{c}_{i j}, d_i + d_j)$ and $(\overline{c}_{r s}, d_r + d_s)$ are different but
$$ \overline{c}_{i j} + \epsilon(d_i + d_j) = \overline{c}_{r s} + \epsilon(d_r + d_s) $$
From that equation, we can derive a formula for $\epsilon$.
$$ \begin{array}{r c l} \overline{c}_{i j} + \epsilon(d_i + d_j) &=& \overline{c}_{r s} + \epsilon(d_r + d_s) \\\ \epsilon(d_i + d_j) &=& \overline{c}_{r s} + \epsilon(d_r + d_s) - \overline{c}_{i j} \\\ \epsilon(d_i + d_j) - \epsilon(d_r + d_s) &=& \overline{c}_{r s} - \overline{c}_{i j} \\\ \epsilon\left((d_i + d_j) - (d_r + d_s)\right) &=& \overline{c}_{r s} - \overline{c}_{i j} \\\ \epsilon(d_i + d_j - d_r - d_s) &=& \overline{c}_{r s} - \overline{c}_{i j} \\\ \epsilon &=& \displaystyle \frac{\overline{c}_{r s} - \overline{c}_{i j}}{d_i + d_j - d_r - d_s} \end{array} $$
So we can now find $\epsilon$ for any two edges which are substitutes for each other, but we need to be able to find substitutes in the 1-Tree.
We know that $e'$ is a substitute for $e$ if and only if $e$ and $e'$ are both incident to vertex 1 or $e$ is in a cycle of $T^k \cup \{e'\}$ that does not pass through vertex 1.
In a more formal sense, we are trying to find edges in the same fundamental cycle as $e’$.
A fundamental cycle is created when any edge not in a spanning tree is added to that spanning tree.
Because the endpoints of this edge are connected by one, unique path this creates a unique cycle.
In order to find this cycle, we will take advantage of find_cycle
within the NetworkX library.
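A quick sketch of how `find_cycle` recovers the fundamental cycle created by one non-tree edge:

```python
import networkx as nx

# A spanning tree (here just a path) plus one non-tree edge contains
# exactly one cycle, which find_cycle recovers by a depth-first search.
T = nx.path_graph([1, 2, 3, 4])   # the spanning tree
T.add_edge(4, 1)                  # the non-tree edge e'
cycle = nx.find_cycle(T, source=1)
print(cycle)                      # the four edges of the cycle 1-2-3-4-1
```

Every edge of this cycle other than $e'$ itself is a candidate substitute for $e'$.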
Below is a pseudocode procedure that I sketched out, which uses Theorem 4 to find $\epsilon(\pi, d)$. It is not well optimized, but it will find $\epsilon(\pi, d)$.
# Input: An element k of K(pi, d), the vector pi and the vector d.
# Output: epsilon(pi, d) using Theorem 4 on page 1150.
min_epsilon = infinity
for each edge e = (i, j) in the graph G:
    if e is in k:
        continue
    else:
        add e to k
        let v be the terminating end of e
        c = find_cycle(k, v)
        for each edge a = (r, s) in c other than e:
            # skip pairs that can never cross over: identical pairs, or a
            # zero denominator in the crossover equation
            if d[i] + d[j] = d[r] + d[s]:
                continue
            epsilon = (a[cost] - e[cost]) / (d[i] + d[j] - d[r] - d[s])
            min_epsilon = min(min_epsilon, epsilon)
        remove e from k
return min_epsilon
The ascent method is also slow, but would fare better on a modern computer. When Held and Karp programmed it, they tested it on some small problems of up to 25 vertices, and while the time per iteration was small, the number of iterations grew quickly. They do not comment on whether this is a better method than the Column Generation technique, but they do point out that they did not determine whether this method always converges to a maximum point of $w(\pi)$.
After talking with my GSoC mentors, we believe that this is the best method we can implement for the Held-Karp relaxation as needed by the Asadpour algorithm. The ascent method is embedded within this method, so the in-depth exploration of the previous method is required to implement this one. Most of the notation in this method is reused from the ascent method.
The branch and bound method utilizes the concept that a vertex can be out-of-kilter. A vertex $i$ is out-of-kilter high if
$$ \forall\ k \in K(\pi),\ v_{i k} \geq 1 $$
Similarly, vertex $i$ is out-of-kilter low if
$$ \forall\ k \in K(\pi),\ v_{i k} = -1 $$
Remember that $v_{i k}$ is the degree of the vertex minus 2. We know that all the vertices have a degree of at least one, otherwise the 1-Tree $T^k$ would not be connected. An out-of-kilter high vertex has a degree of 3 or higher in every minimum 1-Tree and an out-of-kilter low vertex has a degree of only one in all of the minimum 1-Trees. Our goal is a minimum 1-Tree where every vertex has a degree of 2.
If we know that a vertex is out-of-kilter in either direction, we know the direction of ascent and that direction is a unit vector. Let $u_i$ be an $n$-dimensional unit vector with 1 in the $i$-th coordinate. $u_i$ is the direction of ascent if vertex $i$ is out-of-kilter high and $-u_i$ is the direction of ascent if vertex $i$ is out-of-kilter low.
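These definitions translate directly into code. A sketch with invented data and my own helper names, where `v` collects the $v_{i k}$ values over the minimum 1-Trees:

```python
import numpy as np

# v has one row per vertex and one column per minimum 1-Tree in K(pi),
# with v[i, k] = deg(i) in tree k minus 2.
def out_of_kilter_high(v, i):
    return bool(np.all(v[i] >= 1))   # degree >= 3 in every minimum 1-Tree

def out_of_kilter_low(v, i):
    return bool(np.all(v[i] == -1))  # degree exactly 1 in every minimum 1-Tree

v = np.array([[1, 2],     # vertex 0: degrees 3 and 4 -> out-of-kilter high
              [-1, -1],   # vertex 1: degree 1 everywhere -> out-of-kilter low
              [0, -1]])   # vertex 2: in kilter
print(out_of_kilter_high(v, 0), out_of_kilter_low(v, 1))
```

An out-of-kilter high vertex then gives the ascent direction $u_i$, and an out-of-kilter low vertex gives $-u_i$.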
Corollaries 3 and 4 from page 1151 also show that finding $\epsilon(\pi, d)$ is simpler when a vertex is out-of-kilter as well.
Corollary 3. Assume vertex $i$ is out-of-kilter low and let $k$ be an element of $K(\pi, -u_i)$. Then $\epsilon(\pi, -u_i) = \text{min}\ (\overline{c}_{i j} - \overline{c}_{r s})$ such that $\{i, j\}$ is a substitute for $\{r, s\}$ in $T^k$ and $i \not\in \{r, s\}$.
Corollary 4. Assume vertex $r$ is out-of-kilter high. Then $\epsilon(\pi, u_r) = \text{min}\ (\overline{c}_{i j} - \overline{c}_{r s})$ such that $\{i, j\}$ is a substitute for $\{r, s\}$ in $T^k$ and $r \not\in \{i, j\}$.
These corollaries can be implemented with a modified version of the pseudocode listing above for finding $\epsilon$ in the ascent method section.
Once there are no more out-of-kilter vertices, the direction of ascent is not a unit vector and fractional weights are introduced. This is the cause of a major slow down in the convergence of the ascent method to the optimal solution, so it should be avoided if possible.
Before we can discuss implementation details, there are still some more preliminaries to review. Let $X$ and $Y$ be disjoint sets of edges in the graph. Then let $\mathsf{T}(X, Y)$ denote the set of 1-Trees which include all edges in $X$ but none of the edges in $Y$. Finally, define $w_{X, Y}(\pi)$ and $K_{X, Y}(\pi)$ as follows.
$$ w_{X, Y}(\pi) = \text{min}_{k \in \mathsf{T}(X, Y)} (c_k + \sum_{i=1}^{i=n} \pi_i v_{i k}) \\\ K_{X, Y}(\pi) = \{k\ |\ c_k + \sum_{i=1}^{i=n} \pi_i v_{i k} = w_{X, Y}(\pi)\} $$
From these functions, a revised definition of out-of-kilter high and low arise, allowing a vertex to be out-of-kilter relative to $X$ and $Y$.
During the execution of the branch and bound method, the branches are tracked in a list where each entry has the following format.
$$[X, Y, \pi, w_{X, Y}(\pi)]$$
Where $X$ and $Y$ are the disjoint sets discussed earlier, $\pi$ is the vector we are using to perturb the edge weights and $w_{X, Y}(\pi)$ is the bound of the entry.
At each iteration of the method, we consider the list entry with the minimum bound and try to find an out-of-kilter vertex. If we find one, we apply one iteration of the ascent method using the simplified unit vector as the direction of ascent. Here we can take advantage of integral weights if they exist. Perhaps the documentation for the Asadpour implementation in NetworkX should state that integral edge weights will perform better but that claim will have to be supported by our testing.
If there is not an out-of-kilter vertex, we still need to find the direction of ascent in order to determine if we are at the maximum of $w(\pi)$. If the direction of ascent exists, we branch. If there is no direction of ascent, we search for a tour among $K_{X, Y}(\pi)$ and if none is found, we also branch.
The branching process is as follows. From entry $[X, Y, \pi, w_{X, Y}(\pi)]$ an edge $e \not\in X \cup Y$ is chosen (Held and Karp do not give any criteria for which edge to branch on, so I believe the choice can be arbitrary) and the parent entry is replaced with two other entries of the forms
$$ [X \cup \{e\}, Y^*, \pi, w_{X \cup \{e\}, Y^*}(\pi)] \quad \text{and} \quad [X^*, Y \cup \{e\}, \pi, w_{X^*, Y \cup \{e\}}(\pi)] $$
An example of the branch and bound method is given on pages 1153 through 1156 in the Held and Karp paper.
In order to implement this method, we need to be able to determine whether a vertex is out-of-kilter and to search $K_{X, Y}(\pi)$ for a tour, in addition to modifying some of the details of the ascent method.
The Held and Karp paper states that in order to find an out-of-kilter vertex, all we need to do is test the unit vectors: if, for an arbitrary member $k$ of $K(\pi, u_i)$, $v_{i k} \geq 1$, then vertex $i$ is out-of-kilter high, and the appropriate inverse holds for out-of-kilter low. From this process we can find out-of-kilter vertices by sequentially checking the $u_i$'s in an $O(n^2)$ procedure.
Searching $K_{X, Y}(\pi)$ for a tour would be easy if we could enumerate that set of minimum 1-Trees. While I know how to find one of the minimum 1-Trees, or a member of $K(\pi)$, I am not sure how to find elements of $K(\pi, d)$ or even all of the members of $K(\pi)$. Using the properties in the Held and Karp paper, I do know how to refine $K(\pi)$ into $K(\pi, d)$ and $K(\pi)$ into $K_{X, Y}(\pi)$. This will have to be a blog post for another time.
The most promising research paper I have been able to find on this problem is this 2005 paper by Sörensen and Janssens titled An Algorithm to Generate all Spanning Trees of a Graph in Order of Increasing Cost. From here we generate spanning trees or arborescences until the cost moves upward at which point we have found all elements of $K(\pi)$.
Held and Karp did not program this method. We have some reason to believe that its performance will be the best, as it is designed to be an improvement over the ascent method, which was tested (somewhat) up to $n = 25$, itself better than the column generation technique, which was only consistently able to solve problems up to $n = 12$.
A. Asadpour, M. X. Goemans, A. Madry, S. O. Gharan, and A. Saberi, An O(log n / log log n)-approximation algorithm for the asymmetric traveling salesman problem, Operations Research, 65 (2017), pp. 1043-1061, https://homes.cs.washington.edu/~shayan/atsp.pdf.
M. Held and R. M. Karp, The traveling-salesman problem and minimum spanning trees, Operations Research, 18 (1970), pp. 1138-1162. https://www.jstor.org/stable/169411
Now that my proposal was accepted by NetworkX for the 2021 Google Summer of Code (GSoC), I can get more into the technical details of how I plan to implement the Asadpour algorithm within NetworkX.
In this post I am going to outline my thought process for the control scheme of my implementation and create function stubs according to my GSoC proposal.
Most of the work for this project will happen in networkx.algorithms.approximation.traveling_salesman.py
, where I will finish the last algorithm for the Traveling Salesman Problem so it can be merged into the project. The main function in traveling_salesman.py
is
def traveling_salesman_problem(G, weight="weight", nodes=None, cycle=True, method=None):
"""
...
Parameters
----------
G : NetworkX graph
Undirected possibly weighted graph
nodes : collection of nodes (default=G.nodes)
collection (list, set, etc.) of nodes to visit
weight : string, optional (default="weight")
Edge data key corresponding to the edge weight.
If any edge does not have this attribute the weight is set to 1.
cycle : bool (default: True)
Indicates whether a cycle should be returned, or a path.
Note: the cycle is the approximate minimal cycle.
The path simply removes the biggest edge in that cycle.
method : function (default: None)
A function that returns a cycle on all nodes and approximates
the solution to the traveling salesman problem on a complete
graph. The returned cycle is then used to find a corresponding
solution on `G`. `method` should be callable; take inputs
`G`, and `weight`; and return a list of nodes along the cycle.
Provided options include :func:`christofides`, :func:`greedy_tsp`,
:func:`simulated_annealing_tsp` and :func:`threshold_accepting_tsp`.
If `method is None`: use :func:`christofides` for undirected `G` and
:func:`threshold_accepting_tsp` for directed `G`.
To specify parameters for these provided functions, construct lambda
functions that state the specific value. `method` must have 2 inputs.
(See examples).
...
"""
All user calls to find an approximation to the traveling salesman problem will go through this function.
My implementation of the Asadpour algorithm will also need to be compatible with this function.
traveling_salesman_problem
will handle creating a new, complete graph using the weight of the shortest path between nodes $u$ and $v$ as the weight of that arc, so we know that by the time the graph is passed to the Asadpour algorithm it is a complete digraph which satisfies the triangle inequality.
The main function also handles the nodes
and cycles
parameters by only copying the necessary nodes into the complete digraph before calling the requested method and afterwards searching for and removing the largest arc within the returned cycle.
Thus, the parent function for the Asadpour algorithm only needs to deal with the graph itself and the weights or costs of the arcs in the graph.
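A sketch of that completion step (the helper name is mine; the real logic lives inside `traveling_salesman_problem`):

```python
import networkx as nx

def complete_by_shortest_paths(G, weight="weight"):
    """Build a complete digraph whose arc weights are shortest-path
    distances in the original graph, so the triangle inequality holds
    by construction."""
    dist = dict(nx.all_pairs_dijkstra_path_length(G, weight=weight))
    H = nx.DiGraph()
    H.add_nodes_from(G)
    for u in G:
        for v in G:
            if u != v:
                H.add_edge(u, v, **{weight: dist[u][v]})
    return H

# Example: a directed triangle; the arc 0 -> 2 in H gets the length of
# the two-arc path 0 -> 1 -> 2.
G = nx.DiGraph()
G.add_weighted_edges_from([(0, 1, 1), (1, 2, 1), (2, 0, 1)])
H = complete_by_shortest_paths(G)
print(H[0][2]["weight"])
```
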
My controlling function will have the following signature and I have included a draft of the docstring as well.
def asadpour_tsp(G, weight="weight"):
"""
Returns an O( log n / log log n ) approximate solution to the traveling
salesman problem.
This approximate solution is one of the best known approximations for
the asymmetric traveling salesman problem developed by Asadpour et al,
[1]_. The algorithm first solves the Held-Karp relaxation to find a
lower bound for the weight of the cycle. Next, it constructs an
exponential distribution of undirected spanning trees where the
probability of an edge being in the tree corresponds to the weight of
that edge using a maximum entropy rounding scheme. Next we sample that
distribution $2 \\log n$ times and save the minimum sampled tree once
the direction of the arcs is added back to the edges. Finally,
we augment then short circuit that graph to find the approximate tour
for the salesman.
Parameters
----------
G : nx.DiGraph
The graph should be a complete weighted directed graph.
The distance between all pairs of nodes should be included.
weight : string, optional (default="weight")
Edge data key corresponding to the edge weight.
If any edge does not have this attribute the weight is set to 1.
Returns
-------
cycle : list of nodes
Returns the cycle (list of nodes) that a salesman can follow to minimize
the total weight of the trip.
Raises
------
NetworkXError
If `G` is not complete, the algorithm raises an exception.
References
----------
.. [1] A. Asadpour, M. X. Goemans, A. Madry, S. O. Gharan, and A. Saberi,
An o(log n/log log n)-approximation algorithm for the asymmetric
traveling salesman problem, Operations research, 65 (2017),
pp. 1043–1061
"""
pass
Following my GSoC proposal, the next function is held_karp
, which will solve the Held-Karp relaxation on the complete digraph using the ellipsoid method (See my last two posts here and here for my thoughts on why and how to accomplish this).
Solving the Held-Karp relaxation is the first step in the algorithm.
Recall that the Held-Karp relaxation is defined as the following linear program:
$$ \begin{array}{c l l} \text{min} & \sum_{a} c(a)x_a \\\ \text{s.t.} & x(\delta^+(U)) \geqslant 1 & \forall\ U \subset V \text{ and } U \not= \emptyset \\\ & x(\delta^+(v)) = x(\delta^-(v)) = 1 & \forall\ v \in V \\\ & x_a \geqslant 0 & \forall\ a \end{array} $$
and that it is a semi-infinite program so it is too large to be solved in conventional forms. The algorithm uses the solution to the Held-Karp relaxation to create a vector $z^*$ which is a symmetrized and slightly scaled down version of the true Held-Karp solution $x^*$. $z^*$ is defined as
$$ z^*_{\{u, v\}} = \frac{n - 1}{n} \left(x^*_{uv} + x^*_{vu}\right) $$
and since this is what the algorithm uses to build the rest of the approximation, this should be one of the return values from held_karp
.
I will also return the value of the cost of $x^*$, which is denoted as $c(x^*)$ or $OPT_{HK}$ in the Asadpour paper [1].
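Computing $z^*$ from $x^*$ is a one-liner if $x^*$ is held as an $n \times n$ matrix of arc values; here the symmetric matrix entry $(u, v)$ stands in for $z^*_{\{u, v\}}$. A sketch with an invented example (the helper name is mine):

```python
import numpy as np

def symmetrize(x_star):
    """Symmetrize and scale the Held-Karp solution x_star into z_star."""
    n = x_star.shape[0]
    return (n - 1) / n * (x_star + x_star.T)

# A 3-cycle solution: x[0,1] = x[1,2] = x[2,0] = 1.
x_star = np.zeros((3, 3))
x_star[0, 1] = x_star[1, 2] = x_star[2, 0] = 1.0
z_star = symmetrize(x_star)
print(z_star[0, 1])  # (2/3) * (1 + 0)
```
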
Additionally, the separation oracle will be defined as an inner function within held_karp
.
At the present moment I am not sure what the exact parameters for the separation oracle, sep_oracle
, will be, but it should take the point the algorithm wishes to test and will need to access the graph the algorithm is relaxing.
In particular, I’m not sure yet how I will represent the hyperplane which is returned by the separation oracle.
def _held_karp(G, weight="weight"):
"""
Solves the Held-Karp relaxation of the input complete digraph and scales
the output solution for use in the Asadpour [1]_ ATSP algorithm.
The Held-Karp relaxation defines the lower bound for solutions to the
ATSP, although it does return a fractional solution. This is used in the
Asadpour algorithm as an initial solution which is later rounded to an
integral tree within the spanning tree polytope. This function solves
the relaxation with the ellipsoid method for linear programs.
Parameters
----------
G : nx.DiGraph
The graph should be a complete weighted directed graph.
The distance between all pairs of nodes should be included.
weight : string, optional (default="weight")
Edge data key corresponding to the edge weight.
If any edge does not have this attribute the weight is set to 1.
Returns
-------
OPT : float
The cost for the optimal solution to the Held-Karp relaxation
z_star : numpy array
A symmetrized and scaled version of the optimal solution to the
Held-Karp relaxation for use in the Asadpour algorithm
References
----------
.. [1] A. Asadpour, M. X. Goemans, A. Madry, S. O. Gharan, and A. Saberi,
An o(log n/log log n)-approximation algorithm for the asymmetric
traveling salesman problem, Operations research, 65 (2017),
pp. 1043–1061
"""
def sep_oracle(point):
"""
The separation oracle used in the ellipsoid algorithm to solve the
Held-Karp relaxation.
This 'black-box' takes a point and checks to see if it violates any
of the Held-Karp constraints, which are defined as
- The out-degree of all non-empty subsets of $V$ is at least one.
- The in-degree and out-degree of each vertex in $V$ is equal to
one. Note that if a vertex has more than one incoming or
outgoing arcs the values of each could be less than one so long
as they sum to one.
- The current value for each arc is greater
than zero.
Parameters
----------
point : numpy array
The point in n-dimensional space we wish to test to see if it
violates any of the Held-Karp constraints.
Returns
-------
numpy array
The hyperplane which was the most violated by `point`, i.e. the
hyperplane defining the polytope of spanning trees which `point`
was farthest from, or None if no constraints are violated.
"""
pass
pass
Next the algorithm uses the symmetrized and scaled version of the Held-Karp solution to construct an exponential distribution of undirected spanning trees which preserves the marginal probabilities.
def _spanning_tree_distribution(z_star):
"""
Solves the Maximum Entropy Convex Program in the Asadpour algorithm [1]_
using the approach in section 7 to build an exponential distribution of
undirected spanning trees.
This algorithm ensures that the probability of any edge in a spanning
tree is proportional to the sum of the probabilities of the trees
containing that edge over the sum of the probabilities of all spanning
trees of the graph.
Parameters
----------
z_star : numpy array
The output of `_held_karp()`, a scaled version of the Held-Karp
solution.
Returns
-------
gamma : numpy array
The probability distribution which approximately preserves the marginal
probabilities of `z_star`.
"""
pass
Now that the algorithm has the distribution of spanning trees, we need to sample them. Each sampled tree is a $\lambda$-random tree and can be sampled using algorithm A8 in [2].
def _sample_spanning_tree(G, gamma):
"""
Sample one spanning tree from the distribution defined by `gamma`,
roughly using algorithm A8 in [1]_ .
We 'shuffle' the edges in the graph, and then probabilistically
determine whether to add the edge conditioned on all of the previous
edges which were added to the tree. Probabilities are calculated using
Kirchhoff's Matrix Tree Theorem and a weighted Laplacian matrix.
Parameters
----------
G : nx.Graph
An undirected version of the original graph.
gamma : numpy array
The probabilities associated with each of the edges in the undirected
graph `G`.
Returns
-------
nx.Graph
A spanning tree using the distribution defined by `gamma`.
References
----------
.. [1] V. Kulkarni, Generating random combinatorial objects, Journal of
algorithms, 11 (1990), pp. 185–207
"""
pass
At this point there is only one function left to discuss, laplacian_matrix
.
This function already exists within NetworkX at networkx.linalg.laplacianmatrix.laplacian_matrix
, and even though this is relatively simple to implement, I’d rather use an existing version than create duplicate code within the project.
A deeper look at the function signature reveals
@not_implemented_for("directed")
def laplacian_matrix(G, nodelist=None, weight="weight"):
"""Returns the Laplacian matrix of G.
The graph Laplacian is the matrix L = D - A, where
A is the adjacency matrix and D is the diagonal matrix of node degrees.
Parameters
----------
G : graph
A NetworkX graph
nodelist : list, optional
The rows and columns are ordered according to the nodes in nodelist.
If nodelist is None, then the ordering is produced by G.nodes().
weight : string or None, optional (default='weight')
The edge data key used to compute each value in the matrix.
If None, then each edge has weight 1.
Returns
-------
L : SciPy sparse matrix
The Laplacian matrix of G.
Notes
-----
For MultiGraph/MultiDiGraph, the edges weights are summed.
See Also
--------
to_numpy_array
normalized_laplacian_matrix
laplacian_spectrum
"""
This is exactly what I need, except that the decorator states it does not support directed graphs, and this algorithm deals with directed graphs. Fortunately, our distribution of spanning trees is over the trees of the graph once the direction of the arcs is disregarded, so we can actually use the existing function. The definition given in the Asadpour paper [1] is
$$ L_{i,j} = \left\{ \begin{array}{l l} -\lambda_e & e = (i, j) \in E \\\ \sum_{e \in \delta(\{i\})} \lambda_e & i = j \\\ 0 & \text{otherwise} \end{array} \right. $$
where $E$ is defined as “Let $E$ be the support graph of $z^*$ when the direction of the arcs are disregarded” on page 5 of the Asadpour paper. Thus, I can use the existing method without having to create a new one, which will save time and effort on this GSoC project.
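As a quick check of that claim, we can store the $\lambda_e$ values as an edge attribute on the undirected support graph and let the existing function build $L = D - A$ (the attribute name `lam` is arbitrary):

```python
import networkx as nx
import numpy as np

# Undirected support graph with lambda_e stored per edge; laplacian_matrix
# then produces exactly the L = D - A matrix in the definition above.
G = nx.Graph()
G.add_edge(0, 1, lam=2.0)
G.add_edge(1, 2, lam=3.0)
L = nx.laplacian_matrix(G, weight="lam").toarray()
print(L)
```

Each diagonal entry is the sum of the $\lambda_e$ over edges incident to that vertex, and each off-diagonal entry is $-\lambda_e$ for the corresponding edge, matching the Asadpour definition.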
In addition to being discussed here, these function stubs have been added to my fork of NetworkX
on the bothTSP
branch.
The commit, Added function stubs and draft docstrings for the Asadpour algorithm
is visible on my GitHub using that link.
[1] A. Asadpour, M. X. Goemans, A. Madry, S. O. Gharan, and A. Saberi, An O(log n / log log n)-approximation algorithm for the asymmetric traveling salesman problem, Operations Research, 65 (2017), pp. 1043-1061, https://homes.cs.washington.edu/~shayan/atsp.pdf.
[2] V. Kulkarni, Generating random combinatorial objects, Journal of algorithms, 11 (1990), pp. 185–207
The day of the result was a very, very long day.
With this small writeup, I intend to talk about everything before that day, my experiences, my journey, and the role of Matplotlib throughout!
I am a third-year undergraduate student currently pursuing a Dual Degree (B.Tech + M.Tech) in Information Technology at Indian Institute of Information Technology, Gwalior.
During my sophomore year, my interests started expanding in the domain of Machine Learning, where I learnt about various amazing open-source libraries like NumPy, SciPy, pandas, and Matplotlib! Gradually, in my third year, I explored the field of Computer Vision during my internship at a startup, where a big chunk of my work was to integrate their native C++ codebase to Android via JNI calls.
To actuate my learnings from the internship, I worked upon my own research along with a friend from my university. The paper was accepted in CoDS-COMAD’21 and is published at ACM Digital Library. (Link, if anyone’s interested)
During this period, I also picked up a knack for open source and started looking at various issues (and pull requests) in libraries, including OpenCV [contributions] and NumPy [contributions].
I quickly got involved in Matplotlib’s community; it was very welcoming and beginner-friendly.
Fun fact: Its dev call was the very first I attended with people from all around the world!
We all mess up; my very first PR to an organisation like OpenCV went horribly. To this date, it looks like this:
In all honesty, I added a single commit with only a few lines of diff.
However, I pulled all the changes from upstream master to my working branch, whereas the PR was to be made against the 3.4 branch.
I’m sure I could’ve done tons of things to solve it, but at that time I couldn’t do anything - imagine the anxiety!
At this point when I look back at those fumbled PRs, I feel like they were important for my learning process.
Fun Fact: Because of one of these initial contributions, I got a shiny little badge [Mars 2020 Helicopter Contributor] on GitHub!
It was around the initial weeks of November last year, while I was scanning through the Good First Issue and New Feature labels, that I realised a pattern - most Mathtext-related issues were unattended.
To make it simple, Mathtext is a part of Matplotlib which parses mathematical expressions and provides TeX-like outputs, for example:
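As a minimal illustration (the expression and output filename here are my own, not from any Matplotlib issue), any string wrapped in `$...$` is handed to Mathtext's parser, with no external TeX installation required:

```python
import matplotlib
matplotlib.use("Agg")  # render off-screen, no GUI needed
import matplotlib.pyplot as plt

fig = plt.figure(figsize=(4, 1))
# The $...$ delimiters route this string through the Mathtext parser
fig.text(0.5, 0.5, r"$\sum_{n=0}^{\infty} \frac{x^n}{n!} = e^x$",
         ha="center", va="center", fontsize=16)
fig.savefig("mathtext_example.png")
```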
I scanned the related source code to try to figure out how to solve those Mathtext issues. Eventually, with the help of maintainers reviewing the PRs and a lot of verbose discussions on GitHub issues/pull requests and on the Gitter channel, I was able to get my initial PRs merged!
Most of us use libraries without understanding their underlying structure, which sometimes can cause downstream bugs!
While I was studying Matplotlib’s architecture, I figured that I could use the same ideology for one of my own projects!
Matplotlib uses a global dictionary-like object named rcParams. I used a smaller, similar interface in swi-ml - a small Python library I wrote, implementing a subset of ML algorithms, with a switchable backend.
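For a sense of the ideology: rcParams behaves like one global, dict-like configuration object - set a key once and every artist created afterwards picks up the new default (a tiny hypothetical snippet, not from swi-ml):

```python
import matplotlib as mpl

# rcParams can be read and written like an ordinary dict;
# these keys become the new defaults for later plots.
mpl.rcParams["lines.linewidth"] = 3.0
mpl.rcParams["font.size"] = 11
print(mpl.rcParams["lines.linewidth"])
```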
It was around January when I had a conversation with one of the maintainers (hey Antony!) about the long list of issues with the current ways of handling text/fonts in the library.
After compiling them into an ordered list, and after a few tweaks from the maintainers, the GSoC Idea List for Matplotlib was born. And so began my journey of building a strong proposal!
The aim of the project is divided into 3 subgoals:
Font-Fallback: A redesigned text-first font interface - essentially trying every font in the specified family before rendering a “tofu” (the box shown for a missing glyph).
(similar to specifying font-family in CSS!)
Font Subsetting: Every exported PS/PDF would contain embedded glyphs subsetted from the whole font.
(imagine a plot with just a single letter “a” - would you want the PDF you export from Matplotlib to embed the whole font file within it?)
Most mpl backends would use the unified TeX exporting mechanism
Mentors: Thomas A Caswell, Antony Lee, and Hannah.
Thanks a lot for spending time reading the blog! I’ll be back with my progress in subsequent posts.
Continuing the theme of my last post, we know that the Held-Karp relaxation in the Asadpour Asymmetric Traveling Salesman Problem cannot be practically written into the standard matrix form of a linear program. Thus, we need a different method to solve the relaxation, which is where the ellipsoid method comes into play. The ellipsoid method can be used to solve semi-infinite linear programs, which is what the Held-Karp relaxation is.
One of the keys to the ellipsoid method is the separation oracle. From the perspective of the algorithm itself, the oracle is a black-box program which takes a vector and determines whether it lies in the feasible region of the linear program and, if it does not, returns a hyperplane separating the vector from the feasible region.
In its most basic form, the ellipsoid method is a decision algorithm rather than an optimization algorithm, so it terminates once a single, but almost certainly nonoptimal, vector within the feasible region is found. However, we can convert the ellipsoid method into a true optimization algorithm. What this means for us is that we can assume that the separation oracle will return a hyperplane.
The hyperplane that the oracle returns is then used to construct the next ellipsoid in the algorithm, which is of smaller volume and contains a half-ellipsoid from the originating ellipsoid. This is, however, a topic for another post. Right now I want to focus on this ‘black-box’ separation oracle.
The reason that the Held-Karp relaxation is semi-infinite is that for a graph with $n$ vertices, there are $2^n + 2n$ constraints in the program. A naive approach to the separation oracle would be to check each constraint individually for the input vector, creating a program with $O(2^n)$ running time. While it would terminate eventually, it would certainly take a long time to do so.
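A direct transcription of that naive oracle makes the exponential blow-up concrete. This is only a hypothetical sketch - the dict-of-arcs input format is my own choice, not anything from NetworkX:

```python
from itertools import combinations

def naive_subtour_oracle(nodes, x):
    """Naive O(2^n) separation oracle for the subtour constraints.

    x is a dict mapping arcs (u, v) to their fractional values
    (a hypothetical input format). Returns a violated subset U,
    or None if every nonempty proper subset of the vertices
    satisfies x(delta^+(U)) >= 1.
    """
    for r in range(1, len(nodes)):            # every nonempty proper subset
        for U in combinations(nodes, r):
            U_set = set(U)
            out_flow = sum(val for (u, v), val in x.items()
                           if u in U_set and v not in U_set)
            if out_flow < 1:
                return U_set                  # this U yields the hyperplane
    return None
```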
So, we look for a more efficient way to do this. Recall from the Asadpour paper [1] that the Held-Karp relaxation is the following linear program.
$$ \begin{array}{c l l} \text{min} & \sum_{a} c(a)x_a \\\ \text{s.t.} & x(\delta^+(U)) \geqslant 1 & \forall\ U \subset V \text{ and } U \not= \emptyset \\\ & x(\delta^+(v)) = x(\delta^-(v)) = 1 & \forall\ v \in V \\\ & x_a \geqslant 0 & \forall\ a \end{array} $$
The first set of constraints ensures that the output of the relaxation is connected. This is called subtour elimination, and it prevents a solution with multiple disconnected clusters by ensuring that every set of vertices has at least one total outgoing arc (we are currently dealing with fractional arcs). From the perspective of the separation oracle, we do not care about all of the sets of vertices for which $x(\delta^+(U)) \geqslant 1$; we only need to find one subset of the vertices where $x(\delta^+(U)) < 1$.
In order to find such a set of vertices $U \subset V$ where $x(\delta^+(U)) < 1$, we can find the subset $U$ with the smallest value of $x(\delta^+(U))$ over all $U \subset V$. That is, find the global minimum cut in the complete digraph using the edge capacities given by the input vector to the separation oracle. Using lecture notes by Michel X. Goemans (who is also one of the authors of the Asadpour algorithm this project seeks to implement) [2], we can find such a minimum cut with $2(n - 1)$ maximum flow calculations.
The algorithm described in section 6.4 of the lecture notes [2] is fairly simple. Let $S$ and $T$ be subsets of $V$ such that the $s-t$ cut is the global minimum cut for the graph. First, we pick an arbitrary vertex $s$ in the graph. By definition, $s$ is either in $S$ or in $T$. We now iterate through every other vertex $t$ in the graph and compute the $s-t$ and $t-s$ minimum cuts. If $s \in S$, then one of the choices of $t$ will produce the global minimum cut, and the case where $s \not\in S$ (that is, $s \in T$) is covered by the $t-s$ cuts.
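A minimal pure-Python sketch of that $2(n - 1)$ max-flow scheme might look like the following. Edmonds-Karp is used here purely for illustration; it is not the efficient max-flow routine the complexity bound assumes, and the capacity-matrix format is my own choice:

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp max flow on a capacity matrix (a list of lists).
    A plain sketch, not a tuned implementation."""
    n = len(cap)
    residual = [row[:] for row in cap]
    total = 0.0
    while True:
        # Breadth-first search for an augmenting path in the residual graph
        parent = [-1] * n
        parent[s] = s
        queue = deque([s])
        while queue:
            u = queue.popleft()
            for v in range(n):
                if parent[v] == -1 and residual[u][v] > 1e-12:
                    parent[v] = u
                    queue.append(v)
        if parent[t] == -1:
            return total  # no augmenting path left: the flow is maximum
        # Find the bottleneck capacity along the path, then augment
        bottleneck = float("inf")
        v = t
        while v != s:
            bottleneck = min(bottleneck, residual[parent[v]][v])
            v = parent[v]
        v = t
        while v != s:
            residual[parent[v]][v] -= bottleneck
            residual[v][parent[v]] += bottleneck
            v = parent[v]
        total += bottleneck

def global_min_cut(cap):
    """Global min cut of a digraph via 2(n - 1) max-flow computations:
    fix an arbitrary s and take the cheapest s-t or t-s cut over all t."""
    n = len(cap)
    s = 0
    return min(min(max_flow(cap, s, t), max_flow(cap, t, s))
               for t in range(n) if t != s)
```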
According to Goemans [2], the complexity of finding the global min cut in a weighted digraph, using an efficient max-flow algorithm, is $O(mn^2\log(n^2/m))$.
The second constraint can be checked in $O(n)$ time with a simple loop. It makes sense to check this one first, as it is computationally simpler, and thus if one of these conditions is violated we will be able to return the violated hyperplane faster.
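A sketch of that check, using a hypothetical dict-of-arcs input format: one pass accumulates the in- and out-flow of every vertex, then a simple loop over the vertices tests the constraint:

```python
def check_degree_constraints(nodes, x, tol=1e-9):
    """Check x(delta^+(v)) = x(delta^-(v)) = 1 for every vertex v.

    x maps arcs (u, v) to fractional values (a hypothetical input
    format). Returns a violated vertex, or None if all pass.
    """
    out_flow = {v: 0.0 for v in nodes}
    in_flow = {v: 0.0 for v in nodes}
    for (u, v), val in x.items():   # one pass over the arcs
        out_flow[u] += val
        in_flow[v] += val
    for v in nodes:                 # one simple loop over the vertices
        if abs(out_flow[v] - 1) > tol or abs(in_flow[v] - 1) > tol:
            return v
    return None
```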
Now we have reduced the complexity of the oracle from $O(2^n)$ to the same as finding the global min cut, $O(mn^2\log(n^2/m))$, which is substantially better. For example, let us consider an initial graph with 100 vertices. Using the $O(2^n)$ method, there are $1.2677 \times 10^{30}$ subsets $U$ that we need to check, times whatever the complexity of actually determining whether the constraint $x(\delta^+(U)) \geqslant 1$ is violated. For that same complete digraph on 100 vertices, we know that $n = 100$ and $m = \binom{100}{2} = 4950$. Using the global min cut approach, the complexity, which includes finding the max flow as well as the number of times it needs to be found, is $15117042$ or about $1.5117 \times 10^7$, which is faster by a factor of $10^{23}$.
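These figures are easy to reproduce (note the base-10 logarithm, which is what matches the quoted $1.5117 \times 10^7$):

```python
import math

# Reproducing the comparison for a complete digraph on n = 100 vertices,
# with m = C(100, 2) = 4950 as used in the text.
n = 100
m = math.comb(n, 2)
subsets = 2 ** n                               # ~1.2677e30 subsets to check
min_cut_cost = m * n**2 * math.log10(n**2 / m)  # ~1.5117e7
print(f"naive: {subsets:.4e}, min-cut based: {min_cut_cost:.4e}")
```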
[1] A. Asadpour, M. X. Goemans, A. Madry, S. Oveis Gharan, and A. Saberi, An O(log n / log log n)-approximation algorithm for the asymmetric traveling salesman problem, Operations Research, 65 (2017), pp. 1043-1061, https://homes.cs.washington.edu/~shayan/atsp.pdf.
[2] M. X. Goemans, Lecture notes on flows and cuts, Handout 18, Massachusetts Institute of Technology, Cambridge, MA, 2009 http://www-math.mit.edu/~goemans/18433S09/flowscuts.pdf.
]]>In linear programming, we sometimes need to take what would be an integer program and ‘relax’ it, or unbound the values of the variables so that they are continuous. One particular application of this process is the Held-Karp relaxation used in the first part of the Asadpour algorithm for the Asymmetric Traveling Salesman Problem, where we find the lower bound of the approximation. Normally the relaxation is written as follows.
$$ \begin{array}{c l l} \text{min} & \sum_{a} c(a)x_a \\\ \text{s.t.} & x(\delta^+(U)) \geqslant 1 & \forall\ U \subset V \text{ and } U \not= \emptyset \\\ & x(\delta^+(v)) = x(\delta^-(v)) = 1 & \forall\ v \in V \\\ & x_a \geqslant 0 & \forall\ a \end{array} $$
This is a convenient way to write the program, but if we want to solve it, and we definitely do, we need it written in standard form for a linear program. Standard form is represented using a matrix for the set of constraints and vectors for the objective function. It is shown below.
$$ \begin{array}{c l} \text{min} & Z = c^TX \\\ \text{s.t.} & AX = b \\\ & X \geqslant 0 \end{array} $$
Where $c$ is the coefficient vector for the objective function, $X$ is the vector of the values of all of the variables, $A$ is the coefficient matrix for the constraints, and $b$ is a vector of what the constraints are equal to. Once a linear program is in this form, there are efficient algorithms which can solve it.
In the Held-Karp relaxation, the objective function is a summation, so we can expand it. If there are $n$ edges then it becomes
$$ \sum_{a} c(a)x_a = c(1)x_1 + c(2)x_2 + c(3)x_3 + \dots + c(n)x_n $$
Where $c(a)$ is the weight of that edge in the graph. From here it is easy to convert the objective function into two vectors which satisfy the standard form.
$$ \begin{array}{rCl} c &=& \begin{bmatrix} c_1 & c_2 & c_3 & \dots & c_n \end{bmatrix}^T \\\ X &=& \begin{bmatrix} x_1 & x_2 & x_3 & \dots & x_n \end{bmatrix}^T \end{array} $$
Now we have to convert the constraints to be in standard form. First and foremost, notice that the Held-Karp relaxation contains $x_a \geqslant 0\ \forall\ a$ and the standard form uses $X \geqslant 0$, so these constraints match already and no work is needed. As for the others… well, they do need some work.
Starting with the first constraint in the Held-Karp relaxation, $x(\delta^+(U)) \geqslant 1\ \forall\ U \subset V$ and $U \not= \emptyset$. This constraint specifies that every subset of the vertex set $V$ must have at least one arc with its tail in $U$ and its head not in $U$. For any given $\delta^+(U)$, which the paper defines as $\delta^+(U) = \{a = (u, v) \in A: u \in U, v \not\in U\}$ where $A$ in this set is the set of all arcs in the graph, the coefficients on arcs not in $\delta^+(U)$ are zero. Arcs in $\delta^+(U)$ have a coefficient of $1$ as their full weight is counted as part of $\delta^+(U)$. We know that there are about $2^{|V|}$ subsets of the vertex set $V$, so this constraint adds that many rows to the constraint matrix $A$.
Moving to the next constraint, $x(\delta^+(v)) = x(\delta^-(v)) = 1$, we first need to split it in two.
$$ \begin{array}{rCl} x(\delta^+(v)) &=& 1 \\\ x(\delta^-(v)) &=& 1 \end{array} $$
Similar to the last constraint, each of these says that the number of arcs entering and leaving a vertex in the graph needs to equal one. For each vertex $v$, we find all the arcs which start at $v$; those are the members of $\delta^+(v)$, so they have a coefficient of 1 and all others have a coefficient of zero. The opposite is true for $\delta^-(v)$: every arc whose head is at $v$ has a coefficient of 1 while the rest have a coefficient of zero. This adds $2 \times |V|$ rows to the coefficient matrix $A$, which brings the total to $2^{|V|} + 2|V|$ rows.
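As a sketch, here is how those $2|V|$ degree-constraint rows could be assembled for a hypothetical 3-vertex digraph, with the arc list fixing the column order of $A$ (a toy example, not part of any NetworkX code):

```python
# Each arc is a column of A; each vertex contributes one row for
# x(delta^+(v)) = 1 and one row for x(delta^-(v)) = 1.
arcs = [(0, 1), (1, 0), (1, 2), (2, 0)]  # hypothetical arc list
n_vertices = 3

rows = []
for v in range(n_vertices):
    out_row = [1 if u == v else 0 for (u, w) in arcs]  # tail at v
    in_row = [1 if w == v else 0 for (u, w) in arcs]   # head at v
    rows.append(out_row)
    rows.append(in_row)
```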
We already know that $A$ will have $2^{|V|} + 2|V|$ rows. But how many columns will $A$ have? We know that each arc is a variable, so at least $|E|$ columns, but in a traditional matrix form of a linear program we have to introduce slack and surplus variables so that $AX = b$ and not $AX \geqslant b$ or any other inequality. The $2|V|$ rows already comply with this requirement, but the rows created from every subset of $V$ do not; those rows only require that $x(\delta^+(U)) \geqslant 1$, so we introduce a surplus variable for each of these rows, bringing the column count to $|E| + 2^{|V|}$.
Now, the Held-Karp relaxation performed in the Asadpour algorithm is done on the complete bi-directed graph. For a graph with $n$ vertices, there will be $2 \times \binom{n}{2}$ arcs in the graph. $A$ is then a
$$ \left(2^n + 2n \right)\times \left(2\binom{n}{2} + 2^n\right) $$
matrix. This is very large. For $n = 100$ there are $1.606 \times 10^{60}$ elements in the matrix. Allocating a measly 8 bits (one byte) per entry still consumes over $1.6 \times 10^{51}$ gigabytes of memory.
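A quick sanity check of those figures:

```python
import math

# Back-of-the-envelope size of A for n = 100, allocating one byte
# (8 bits) per matrix entry.
n = 100
rows = 2**n + 2 * n
cols = 2 * math.comb(n, 2) + 2**n
entries = rows * cols
gigabytes = entries / 1e9  # one byte per entry
print(f"{entries:.3e} entries, {gigabytes:.2e} GB")
```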
This is an impossible amount of memory for any computer that we could run NetworkX on.
The Held-Karp relaxation must be solved in the Asadpour Asymmetric Traveling Salesman Problem algorithm, but clearly putting it into standard form is not possible. This means that we will not be able to use SciPy’s linprog method, which I was hoping to use. I will instead have to research and write an ellipsoid method solver, which hopefully will be able to solve the Held-Karp relaxation in both polynomial time and a practical amount of memory.
]]>In May 2020, Alexandre Morin-Chassé published a blog post about the stellar chart. This type of chart is an (approximately) direct alternative to the radar chart (also known as web, spider, star, or cobweb chart) — you can read more about this chart here.
In this tutorial, we will see how we can create a quick-and-dirty stellar chart. First of all, let’s get the necessary modules/libraries, as well as prepare a dummy dataset (with just a single record).
from itertools import chain, zip_longest
from math import ceil, pi
import matplotlib.pyplot as plt
data = [
("V1", 8),
("V2", 10),
("V3", 9),
("V4", 12),
("V5", 6),
("V6", 14),
("V7", 15),
("V8", 25),
]
We will also need some helper functions, namely a function to round up to the nearest 10 (round_up()
) and a function to join two sequences (even_odd_merge()
). In the latter, the values of the first sequence (a list or a tuple, basically) will fill the even positions and the values of the second the odd ones.
def round_up(value):
"""
>>> round_up(25)
30
"""
return int(ceil(value / 10.0)) * 10
def even_odd_merge(even, odd, filter_none=True):
"""
>>> list(even_odd_merge([1,3], [2,4]))
[1, 2, 3, 4]
"""
if filter_none:
return filter(None.__ne__, chain.from_iterable(zip_longest(even, odd)))
return chain.from_iterable(zip_longest(even, odd))
That said, to plot data on a stellar chart, we need to apply some transformations, as well as calculate some auxiliary values. So, let’s start by creating a function (prepare_angles()) to calculate the angle of each axis on the chart (N corresponds to the number of variables to be plotted).
def prepare_angles(N):
angles = [n / N * 2 * pi for n in range(N)]
# Repeat the first angle to close the circle
angles += angles[:1]
return angles
Next, we need a function (prepare_data()
) responsible for adjusting the original data (data
) and separating it into several easy-to-use objects.
def prepare_data(data):
labels = [d[0] for d in data] # Variable names
values = [d[1] for d in data]
# Repeat the first value to close the circle
values += values[:1]
N = len(labels)
angles = prepare_angles(N)
return labels, values, angles, N
Lastly, for this specific type of chart, we require a function (prepare_stellar_aux_data()
) that, from the previously calculated angles, prepares two lists of auxiliary values: a list of intermediate angles for each pair of angles (stellar_angles
) and a list of small constant values (stellar_values
), which will act as the values of the variables to be plotted in order to achieve the star-like shape intended for the stellar chart.
def prepare_stellar_aux_data(angles, ymax, N):
angle_midpoint = pi / N
stellar_angles = [angle + angle_midpoint for angle in angles[:-1]]
stellar_values = [0.05 * ymax] * N
return stellar_angles, stellar_values
At this point, we already have all the necessary ingredients for the stellar chart, so let’s move on to the Matplotlib side of this tutorial. In terms of aesthetics, we can rely on a function (draw_peripherals()
) designed for this specific purpose (feel free to customize it!).
def draw_peripherals(ax, labels, angles, ymax, outer_color, inner_color):
# X-axis
ax.set_xticks(angles[:-1])
ax.set_xticklabels(labels, color=outer_color, size=8)
# Y-axis
ax.set_yticks(range(10, ymax, 10))
ax.set_yticklabels(range(10, ymax, 10), color=inner_color, size=7)
ax.set_ylim(0, ymax)
ax.set_rlabel_position(0)
# Both axes
ax.set_axisbelow(True)
# Boundary line
ax.spines["polar"].set_color(outer_color)
# Grid lines
ax.xaxis.grid(True, color=inner_color, linestyle="-")
ax.yaxis.grid(True, color=inner_color, linestyle="-")
To plot the data and orchestrate (almost) all the steps necessary to have a stellar chart, we just need one last function: draw_stellar()
.
def draw_stellar(
ax,
labels,
values,
angles,
N,
shape_color="tab:blue",
outer_color="slategrey",
inner_color="lightgrey",
):
# Limit the Y-axis according to the data to be plotted
ymax = round_up(max(values))
# Get the lists of angles and variable values
# with the necessary auxiliary values injected
stellar_angles, stellar_values = prepare_stellar_aux_data(angles, ymax, N)
all_angles = list(even_odd_merge(angles, stellar_angles))
all_values = list(even_odd_merge(values, stellar_values))
# Apply the desired style to the figure elements
draw_peripherals(ax, labels, angles, ymax, outer_color, inner_color)
# Draw (and fill) the star-shaped outer line/area
ax.plot(
all_angles,
all_values,
linewidth=1,
linestyle="solid",
solid_joinstyle="round",
color=shape_color,
)
ax.fill(all_angles, all_values, shape_color)
# Add a small hole in the center of the chart
ax.plot(0, 0, marker="o", color="white", markersize=3)
Finally, let’s get our chart on a blank canvas (figure).
fig = plt.figure(dpi=100)
ax = fig.add_subplot(111, polar=True) # Don't forget the projection!
draw_stellar(ax, *prepare_data(data))
plt.show()
It’s done! Right now, you have an example of a stellar chart and the boilerplate code to add this type of chart to your repertoire. If you end up creating your own stellar charts, feel free to share them with the world (and me!). I hope this tutorial was useful and interesting for you!
]]>The IPCC’s Special Report on Global Warming of 1.5°C (SR15), published in October 2018, presented the latest research on anthropogenic climate change. It was written in response to the 2015 UNFCCC’s “Paris Agreement” of
“holding the increase in the global average temperature to well below 2 °C above pre-industrial levels and to pursue efforts to limit the temperature increase to 1.5 °C […]”.
cf. Article 2.1.a of the Paris Agreement
As part of the SR15 assessment, an ensemble of quantitative, model-based scenarios was compiled to underpin the scientific analysis. Many of the headline statements widely reported by media are based on this scenario ensemble, including the finding that
global net anthropogenic CO2 emissions decline by ~45% from 2010 levels by 2030
in all pathways limiting global warming to 1.5°C (cf. statement C.1 in the Summary For Policymakers).
When preparing the SR15, the authors wanted to go beyond previous reports not just in the scientific rigor and scope of the analysis, but also to establish new standards in terms of openness, transparency and reproducibility.
The scenario ensemble was made accessible via an interactive IAMC 1.5°C Scenario Explorer (link) in line with the FAIR principles for scientific data management and stewardship. The process for compiling, validating and analyzing the scenario ensemble was described in an open-access manuscript published in Nature Climate Change (doi: 10.1038/s41558-018-0317-4).
In addition, the Jupyter notebooks generating many of the headline statements, tables and figures (using Matplotlib) were released under an open-source license to facilitate a better understanding of the analysis and enable reuse for subsequent research. The notebooks are available in rendered format and on GitHub.
To facilitate reusability of the scripts and plotting utilities developed for the SR15 analysis, we started the open-source Python package pyam as a toolbox for working with scenarios from integrated-assessment and energy system models.
The package is a wrapper for pandas and Matplotlib geared for several data formats commonly used in energy modelling. Read the docs!
]]>This year’s Google Season of Docs (GSoD) provided me the opportunity to work with the open source organization, Matplotlib. In early summer, I submitted my proposal of Developing Matplotlib Entry Paths with the goal of improving the documentation with an alternative approach to writing.
I had set out to identify with users more by providing real world contexts to examples and programming. My purpose was to lower the barrier of entry for others to begin using the Python library with an expository approach. I focused on aligning with users based on consistent derived purposes and a foundation of task-based empathy.
The project began during the community bonding phase with learning the fundamentals of building documentation and working with open source code. I later distributed usability testing surveys to the community and consolidated the findings. From these results, I developed two new documents for merging into the Matplotlib repository: a Getting Started introductory tutorial and a lean Style Guide for the documentation.
Throughout this year’s Season of Docs with Matplotlib, I learned a great deal about working on open source projects, provided contributions of surveying communities and interviewing subject matter experts in documentation usability testing, and produced a comprehensive introductory guide for improving entry-level content with an initiative style guide section.
As a new user to Git and GitHub, I had a learning curve in getting started with building documentation locally on my machine. Working with cloning repositories and familiarizing myself with commits and pull requests took the bulk of the first few weeks on this project. However, with experiencing errors and troubleshooting broken branches, it was excellent to be able to lean on my mentors for resolving these issues. Platforms like Gitter, Zoom, and HackMD were key in keeping communication timely and concise. I was fortunate to be able to get in touch with the team to help me as soon as I had problems.
With programming, I was not a completely fresh face to Python and Matplotlib. However, installing the library from the source and breaking down functionality to core essentials helped me grow in my understanding of not only the fundamentals, but also the terminology. Tackling everything through my own experience of using Python and then also having suggestions and advice from the development team accelerated the ideas and implementations I aimed to work towards.
New formats and standards with reStructuredText files and Sphinx compatibility were unfamiliar avenues to me at first. In building documentation and reading through already written content, I adapted to making the most of the features available with the ideas I had for writing material suited for users new to Matplotlib. Making use of tables and code examples embedded allowed me to be more flexible in visual layout and navigation.
During the beginning stages of the project, I was able to incorporate usability testing for the current documentation. By reaching out to communities on Twitter, Reddit, and various Slack channels, I compiled and consolidated findings that helped shape the language and focus of new content to create. I summarized and shared the community’s responses in addition to separate informational interviews conducted with subject matter experts in my location. These data points helped in justifying and supporting decisions for the scope and direction of the language and content.
At the end of the project, I completed our agreed upon expectations for the documentation. The focused goal consisted of a Getting Started tutorial to introduce and give context to Matplotlib for new users. In addition, through the documentation as well as the meetings with the community, we acknowledged a missing element of a Style Guide. Though a comprehensive document for the entire library was out of the scope of the project, I put together, in conjunction with the featured task, a lean version that serves as a foundational resource for writing Matplotlib documentation.
The two sections are part of a current pull request to merge into Matplotlib’s repository. I have already worked through smaller changes to the content and am working with the community in moving forward with the process.
This Season of Docs proposal began as a vision of ideals I hoped to share and work towards with an organization and has become a technical writing experience full of growth and camaraderie. I am pleased with the progress I had made and cannot thank the team enough for the leadership and mentorship they provided. It is fulfilling and rewarding to both appreciate and be appreciated within a team.
In addition, the opportunity put together by the team at Google to foster collaboration among skilled contributors cannot be overstated.
Special thanks to Emily Hsu, Joe McEwen, and Smriti Singh for their time and responses, fellow Matplotlib Season of Docs writer Bruno Beltran for his insight and guidance, and the Matplotlib development team mentors Tim, Tom, and Hannah for their patience, support, and approachability for helping a new technical writer like me with my own Getting Started.
My name is Jerome Villegas and I’m a technical writer based in Seattle. I was in education and education-adjacent fields for several years before transitioning to the industry of technical communication. My career has taken me to Taiwan to teach English and work in publishing, then to New York City to work in higher education, and back to Seattle where I worked at a private school.
Since leaving my job, I’ve taken to supporting my family while studying technical writing at the University of Washington and supplementing the knowledge with learning programming on the side. Along with a former classmate, the two of us have worked with the UX writing community in the Pacific Northwest. We host interview sessions, moderate sessions at conferences, and generate content analyzing trends and patterns in UX/tech writing.
In telling people what I’ve got going on in my life, you can find work I’ve done at my personal site and see what we’re up to at shift J. Thanks for reading!
]]>Code-switching is the practice of alternating between two or more languages in the context of a single conversation, either consciously or unconsciously. As someone who grew up bilingual and is currently learning other languages, I find code-switching a fascinating facet of communication from not only a purely linguistic perspective, but also a social one. In particular, I’ve personally found that code-switching often helps build a sense of community and familiarity in a group and that the unique ways in which speakers code-switch with each other greatly contribute to shaping group dynamics.
This is something that’s evident in seven-member pop boy group WayV. Aside from their discography, artistry, and group chemistry, WayV is well-known among fans and many non-fans alike for their multilingualism and code-switching, which many fans have affectionately coined as “WayV language.” Every member in the group is fluent in both Mandarin and Korean, and at least one member in the group is fluent in one or more of the following: English, Cantonese, Thai, Wenzhounese, and German. It’s an impressive trait that’s become a trademark of WayV as they’ve quickly drawn a global audience since their debut in January 2019. Their multilingualism is reflected in their music as well. On top of their regular album releases in Mandarin, WayV has also released singles in Korean and English, with their latest single “Bad Alive (English Ver.)” being a mix of English, Korean, and Mandarin.
As an independent translator who translates WayV content into English, I’ve become keenly aware of the true extent and rate of WayV’s code-switching when communicating with each other. In a lot of their content, WayV frequently switches between three or more languages every couple of seconds, a phenomenon that can make translating quite challenging at times, but also extremely rewarding and fun. I wanted to be able to present this aspect of WayV in a way that would both highlight their linguistic skills and present this dimension of their group dynamic in a more concrete, quantitative, and visually intuitive manner, beyond just stating that “they code-switch a lot.” This prompted me to make step charts - perfect for displaying data that changes at irregular intervals but remains constant between the changes - in hopes of enriching the viewer’s experience and helping make a potentially abstract concept more understandable and readily consumable. With a step chart, the extent of a group’s code-switching becomes more apparent to the viewer, and cross-sections of the graph allow a rudimentary look into how multilinguals influence each other in code-switching.
This tutorial on creating step charts uses one of WayV’s livestreams as an example. There were four members in this livestream and a total of eight languages/dialects spoken. I will go through the basic steps of creating a step chart that depicts the frequency of code-switching for just one member. A full code chunk that shows how to layer two or more step chart lines in one graph to depict code-switching for multiple members can be found near the end.
First, we import the required libraries and load the data into a Pandas dataframe.
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
This dataset includes the timestamp of every switch (in seconds) and the language of switch for one speaker.
df_h = pd.read_csv("WayVHendery.csv")
HENDERY = df_h.reset_index()
HENDERY.head()
index | time | lang |
---|---|---|
0 | 2 | ENG |
1 | 3 | KOR |
2 | 10 | ENG |
3 | 13 | MAND |
4 | 15 | ENG |
With the dataset loaded, we can now set up our graph in terms of determining the size of the figure, dpi, font size, and axes limits. We can also play around with the aesthetics, such as modifying the colors of our plot. These few simple steps easily transform the default all-white graph into a more visually appealing one.
sns.set(rc={'axes.facecolor':'aliceblue', 'figure.facecolor':'c'})
fig, ax = plt.subplots(figsize = (20,12), dpi = 300)
plt.xlabel("Duration of Instagram Live (seconds)", fontsize = 18)
plt.ylabel("Cumulative Number of Times of Code-Switching", fontsize = 18)
plt.xlim(0, 570)
plt.ylim(0, 85)
Following this, we can make our step chart line easily with matplotlib.pyplot.step, in which we plot the x and y values and determine the text of the legend, color of the step chart line, and width of the step chart line.
ax.step(HENDERY.time, HENDERY.index, label = "HENDERY", color = "palevioletred", linewidth = 4)
Of course, we want to know not only how many switches there were and when they occurred, but also to what language the member switched. For this, we can write a for loop that labels each switch with its respective language as recorded in our dataset.
for x,y,z in zip(HENDERY["time"], HENDERY["index"], HENDERY["lang"]):
    label = z
    ax.annotate(label,                        # text
                (x,y),                        # label coordinate
                textcoords = "offset points", # how to position text
                xytext = (15,-5),             # distance from text to coordinate (x,y)
                ha = "center",                # alignment
                fontsize = 8.5)               # font size of text
Now add a title, save the graph, and there you have it!
plt.title("WayV Livestream Code-Switching", fontsize = 35)
fig.savefig("wayv_codeswitching.png", bbox_inches = "tight", facecolor = fig.get_facecolor())
Below is the complete code for layering step chart lines for multiple speakers in one graph. You can see how easy it is to adapt the code for visualizing one speaker's code-switching to visualize that of multiple speakers. I've intentionally left the title blank so that I could make external graphic adjustments after creating the chart in Matplotlib, such as adding my social media handle and using a specific font, which you can see in the final graph. Since visualization is all about communicating information, I believe using Matplotlib in conjunction with simple elements of graphic design can make whatever you're presenting that little bit more effective and personal, especially on social media platforms.
# Initialize graph color and size
sns.set(rc={'axes.facecolor':'aliceblue', 'figure.facecolor':'c'})
fig, ax = plt.subplots(figsize = (20,12), dpi = 120)
# Set up axes and labels
plt.xlabel("Duration of Instagram Live (seconds)", fontsize = 18)
plt.ylabel("Cumulative Number of Times of Code-Switching", fontsize = 18)
plt.xlim(0, 570)
plt.ylim(0, 85)
# Layer step charts for each speaker
ax.step(YANGYANG["time"], YANGYANG["index"], label = "YANGYANG", color = "firebrick", linewidth = 4)
ax.step(HENDERY["time"], HENDERY["index"], label = "HENDERY", color = "palevioletred", linewidth = 4)
ax.step(TEN["time"], TEN["index"], label = "TEN", color = "mediumpurple", linewidth = 4)
ax.step(KUN["time"], KUN["index"], label = "KUN", color = "mediumblue", linewidth = 4)
# Add legend
ax.legend(fontsize = 17)
# Label each data point with the language switch
for i in (KUN, TEN, HENDERY, YANGYANG):                  # for each dataset
    for x,y,z in zip(i["time"], i["index"], i["lang"]):  # looping within the dataset
        label = z
        ax.annotate(label,                        # text
                    (x,y),                        # label coordinate
                    textcoords = "offset points", # how to position text
                    xytext = (15,-5),             # distance from text to coordinate (x,y)
                    ha = "center",                # alignment
                    fontsize = 8.5)               # font size of text
# Add title (blank to leave room for external graphics)
plt.title("\n\n", fontsize = 35)
# Save figure
fig.savefig("wayv_codeswitching.png", bbox_inches = "tight", facecolor = fig.get_facecolor())
Languages/dialects: Korean (KOR), English (ENG), Mandarin (MAND), German (GER), Cantonese (CANT), Hokkien (HOKK), Teochew (TEO), Thai (THAI)
186 total switches! That’s approximately one code-switch in the group every 2.95 seconds.
And voilà! There you have it: a brief guide on how to make step charts. While I utilized step charts here to visualize code-switching, you can use them to visualize whatever data you would like. Please feel free to contact me here if you have any questions or comments. I hope you enjoyed this tutorial, and thank you so much for reading!
Google Summer of Code 2020 is completed. Hurray!! This post discusses the progress made during the three months of the coding period, from 1 June to 24 August 2020, on the Baseline Images Problem project under the matplotlib organisation, under the umbrella of the NumFOCUS organization.
This project addresses the difficulty of adding or modifying tests that require a baseline image. Baseline images are problematic for several reasons, so the idea is to stop storing them in the repository and instead generate them from the existing tests.
We created the matplotlib_baseline_images package. It lives in the sub-wheels directory so that more packages can be added there if needed in the future. The package contains the baseline images for both matplotlib and mpl_toolkits, and can be installed with python3 -mpip install matplotlib_baseline_images.
In the previous months we successfully created the generate_missing command line flag, which generates the baseline images for matplotlib and mpl_toolkits. Initially it generated all of the matplotlib and mpl_toolkits baseline images. We have now also modified the flow to generate any missing baseline images, such as those introduced from the master branch by a git pull or git checkout -b feature_branch.
Baseline image generation on a fresh install of matplotlib, as well as generation of missing baseline images, now works with python3 -mpytest lib/matplotlib --matplotlib_baseline_image_generation for the lib/matplotlib folder and python3 -mpytest lib/mpl_toolkits --matplotlib_baseline_image_generation for the lib/mpl_toolkits folder.
We have written documentation explaining, among other scenarios, how the matplotlib_baseline_images package is to be used for testing by the developer.
I am grateful to be part of such a great community. The project is really interesting and challenging :)
Thanks Thomas, Antony and Hannah for helping me to complete this project.
Google Summer of Code 2020’s second evaluation is completed. I passed!!! Hurray! We are now midway through the last evaluation. This post discusses the progress made in the first two weeks of the third coding period, from 26 July to 9 August 2020.
In the previous months we successfully created the matplotlib_baseline_image_generation command line flag, which generates the baseline images for matplotlib and mpl_toolkits. It generated the matplotlib and matplotlib toolkit baseline images successfully. We have now modified the flow to also generate any missing baseline images, such as those introduced from the master branch by a git pull or git checkout -b feature_branch.
We initially thought of creating a command line flag generate_baseline_images_for_test "test_a,test_b", but on analyzing the approach we concluded that the developer would not know which test names to pass with the flag. So we instead generated the missing images with generate_missing, without any test names, which worked successfully.
Later, we refactored the matplotlib_baseline_image_generation and generate_missing command line flags into the single flag matplotlib_baseline_image_generation, as the logic was similar for both. Baseline image generation on a fresh install of matplotlib, as well as generation of missing baseline images, now works with python3 -mpytest lib/matplotlib --matplotlib_baseline_image_generation for the lib/matplotlib folder and python3 -mpytest lib/mpl_toolkits --matplotlib_baseline_image_generation for the lib/mpl_toolkits folder.
We have written documentation explaining, among other scenarios, how the matplotlib_baseline_images package is to be used for testing by the developer.
Right now, we are refactoring the code and keeping the git history clean. The current PR is under review, and I am working on the suggested changes. We are trying to get it merged :)
Monday to Thursday meetings are held at 11:00pm IST via Zoom. Meeting notes are available on HackMD.
I am grateful to be part of such a great community. The project is really interesting and challenging :) Thanks Thomas, Antony and Hannah for helping me so far.
Google Summer of Code 2020’s second evaluation is almost complete, and we are about to start the final coding phase. This post discusses the progress made in the last two weeks of the second coding period, from 13 July to 26 July 2020.
We have divided the work into two parts, as discussed in the previous blog. The first part is the generation of the baseline images, discussed below. The second part is the modification of the baseline images, which will be implemented in the last phase of Google Summer of Code 2020.
We have now started removing the use of the matplotlib_baseline_images package. After the changes proposed in the previous PR, the developer will have no baseline images on a fresh install of matplotlib, and will need to generate them locally to get started with testing mpl. The images can be generated by the image comparison tests using the matplotlib_baseline_image_generation command line flag. Once generated for the first time, they serve as the baseline images for comparison in later runs; this is the main principle adopted.
We created the matplotlib_baseline_image_generation flag at the beginning of the second evaluation, but the images were created in the result_images directory rather than in the baseline_images directories inside matplotlib and mpl_toolkits. We have now implemented this functionality: the baseline image generation step, run with the python3 -mpytest lib/matplotlib --matplotlib_baseline_image_generation command, creates the images directly in the lib/matplotlib/tests/baseline_images directory. Afterwards, running the pytests with python3 -mpytest lib/matplotlib starts the image comparison.
Right now, the matplotlib_baseline_image_generation flag works for the matplotlib directory; we are working to achieve the same functionality for the mpl_toolkits directory.
Once the generation of the baseline images for the mpl_toolkits directory is completed in the current PR, we will move on to the modification of the baseline images in the third coding phase. Modification will be divided into two subtasks, both to be implemented in the last phase of GSoC: the addition of new baseline images and the deletion of old ones.
Monday to Thursday meetings are held at 11:00pm IST via Zoom. Meeting notes are available on HackMD.
I am grateful to be part of such a great community. The project is really interesting and challenging :) Thanks Thomas, Antony and Hannah for helping me so far.
Cellular automata are discrete models, typically defined on a grid, which evolve in time. Each grid cell has a finite state, such as 0 or 1, which is updated based on a certain set of rules. A specific cell uses information from the surrounding cells, called its neighborhood, to determine what changes should be made. In general, cellular automata can be defined in any number of dimensions. A famous two-dimensional example is Conway’s Game of Life, in which cells “live” and “die”, sometimes producing beautiful patterns.
In this post we will be looking at a one-dimensional example, known as an elementary cellular automaton, popularized by Stephen Wolfram in the 1980s.
Imagine a row of cells, arranged side by side, each of which is colored black or white. We label black cells 1 and white cells 0, resulting in an array of bits. As an example, let's consider a random array of 20 bits.
import numpy as np
rng = np.random.RandomState(42)
data = rng.randint(0, 2, 20)
print(data)
[0 1 0 0 0 1 0 0 0 1 0 0 0 0 1 0 1 1 1 0]
To update the state of our cellular automaton we will need to define a set of rules. A given cell \(C\) only knows the state of its left and right neighbors, labeled \(L\) and \(R\) respectively. We can define a function, or rule, \(f(L, C, R)\), which maps the cell state to either 0 or 1.
Since our input cells are binary values there are \(2^3=8\) possible inputs into the function.
for i in range(8):
    print(np.binary_repr(i, 3))
000
001
010
011
100
101
110
111
For each input triplet, we can assign 0 or 1 to the output. The output of \(f\) is the value which will replace the current cell \(C\) in the next time step. In total there are \(2^{2^3} = 2^8 = 256\) possible rules for updating a cell. Stephen Wolfram introduced a naming convention, now known as the Wolfram Code, for the update rules in which each rule is represented by an 8 bit binary number.
For example “Rule 30” could be constructed by first converting to binary and then building an array for each bit
rule_number = 30
rule_string = np.binary_repr(rule_number, 8)
rule = np.array([int(bit) for bit in rule_string])
print(rule)
[0 0 0 1 1 1 1 0]
By convention the Wolfram code associates the leading bit with ‘111’ and the final bit with ‘000’. For rule 30 the relationship between the input, rule index and output is as follows:
for i in range(8):
    triplet = np.binary_repr(i, 3)
    print(f"input:{triplet}, index:{7-i}, output {rule[7-i]}")
input:000, index:7, output 0
input:001, index:6, output 1
input:010, index:5, output 1
input:011, index:4, output 1
input:100, index:3, output 1
input:101, index:2, output 0
input:110, index:1, output 0
input:111, index:0, output 0
We can define a function which maps the input cell information to the associated rule index. Essentially, we are converting the binary input to decimal and adjusting the index range.
def rule_index(triplet):
    L, C, R = triplet
    index = 7 - (4 * L + 2 * C + R)
    return int(index)
Now we can take in any input and look up the output based on our rule, for example:
rule[rule_index((1, 0, 1))]
0
Finally, we can use NumPy to create a data structure containing all the triplets of our state array, and apply the function along the appropriate axis to determine our new state.
all_triplets = np.stack([np.roll(data, 1), data, np.roll(data, -1)])
new_data = rule[np.apply_along_axis(rule_index, 0, all_triplets)]
print(new_data)
[1 1 1 0 1 1 1 0 1 1 1 0 0 1 1 0 1 0 0 1]
That is the process for a single update of our cellular automata.
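As an aside, the same single-step update can be written without apply_along_axis: since the rule index is just 7 minus the base-2 value of the (L, C, R) triplet, the lookup can be vectorized directly. This sketch is not part of the original walkthrough, but it reproduces the result above:

```python
import numpy as np

rule = np.array([0, 0, 0, 1, 1, 1, 1, 0])  # Rule 30, as built above
data = np.array([0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 1, 0])

# 7 - (4L + 2C + R), computed for every cell at once via rolled copies
indices = 7 - (4 * np.roll(data, 1) + 2 * data + np.roll(data, -1))
new_data = rule[indices]  # fancy indexing does the rule lookup in one shot
print(new_data)
# [1 1 1 0 1 1 1 0 1 1 1 0 0 1 1 0 1 0 0 1]
```

For long rows or many steps this avoids the Python-level function calls that apply_along_axis makes per column.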
To do many updates and record the state over time, we will create a function.
def CA_run(initial_state, n_steps, rule_number):
    rule_string = np.binary_repr(rule_number, 8)
    rule = np.array([int(bit) for bit in rule_string])
    m_cells = len(initial_state)
    CA_run = np.zeros((n_steps, m_cells))
    CA_run[0, :] = initial_state
    for step in range(1, n_steps):
        all_triplets = np.stack(
            [
                np.roll(CA_run[step - 1, :], 1),
                CA_run[step - 1, :],
                np.roll(CA_run[step - 1, :], -1),
            ]
        )
        CA_run[step, :] = rule[np.apply_along_axis(rule_index, 0, all_triplets)]
    return CA_run
initial = np.array([0, 1, 0, 0, 0, 1, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 1, 1, 1, 0])
data = CA_run(initial, 10, 30)
print(data)
[[0. 1. 0. 0. 0. 1. 0. 0. 0. 1. 0. 0. 0. 0. 1. 0. 1. 1. 1. 0.]
[1. 1. 1. 0. 1. 1. 1. 0. 1. 1. 1. 0. 0. 1. 1. 0. 1. 0. 0. 1.]
[0. 0. 0. 0. 1. 0. 0. 0. 1. 0. 0. 1. 1. 1. 0. 0. 1. 1. 1. 1.]
[1. 0. 0. 1. 1. 1. 0. 1. 1. 1. 1. 1. 0. 0. 1. 1. 1. 0. 0. 0.]
[1. 1. 1. 1. 0. 0. 0. 1. 0. 0. 0. 0. 1. 1. 1. 0. 0. 1. 0. 1.]
[0. 0. 0. 0. 1. 0. 1. 1. 1. 0. 0. 1. 1. 0. 0. 1. 1. 1. 0. 1.]
[1. 0. 0. 1. 1. 0. 1. 0. 0. 1. 1. 1. 0. 1. 1. 1. 0. 0. 0. 1.]
[0. 1. 1. 1. 0. 0. 1. 1. 1. 1. 0. 0. 0. 1. 0. 0. 1. 0. 1. 1.]
[0. 1. 0. 0. 1. 1. 1. 0. 0. 0. 1. 0. 1. 1. 1. 1. 1. 0. 1. 0.]
[1. 1. 1. 1. 1. 0. 0. 1. 0. 1. 1. 0. 1. 0. 0. 0. 0. 0. 1. 1.]]
For larger simulations, interesting patterns start to emerge. To visualize our simulation results we will use the ax.matshow function.
import matplotlib.pyplot as plt
plt.rcParams["image.cmap"] = "binary"
rng = np.random.RandomState(0)
data = CA_run(rng.randint(0, 2, 300), 150, 30)
fig, ax = plt.subplots(figsize=(16, 9))
ax.matshow(data)
ax.axis(False)
With the code set up to produce the simulation, we can now start to explore the properties of these different rules. Wolfram separated the rules into four classes which are outlined below.
def plot_CA_class(rule_list, class_label):
    rng = np.random.RandomState(seed=0)
    fig, axs = plt.subplots(
        1, len(rule_list), figsize=(10, 3.5), constrained_layout=True
    )
    initial = rng.randint(0, 2, 100)
    for i, ax in enumerate(axs.ravel()):
        data = CA_run(initial, 100, rule_list[i])
        ax.set_title(f"Rule {rule_list[i]}")
        ax.matshow(data)
        ax.axis(False)
    fig.suptitle(class_label, fontsize=16)
    return fig, ax
Cellular automata which rapidly converge to a uniform state
_ = plot_CA_class([4, 32, 172], "Class One")
Cellular automata which rapidly converge to a repetitive or stable state
_ = plot_CA_class([50, 108, 173], "Class Two")
Cellular automata which appear to remain in a random state
_ = plot_CA_class([60, 106, 150], "Class Three")
Cellular automata which form areas of repetitive or stable states, but also form structures that interact with each other in complicated ways.
_ = plot_CA_class([54, 62, 110], "Class Four")
Amazingly, the interacting structures which emerge from rule 110 have been shown to be capable of universal computation.
In all the examples above a random initial state was used, but another interesting case is when a single 1 is initialized with all other values set to zero.
initial = np.zeros(300)
initial[300 // 2] = 1
data = CA_run(initial, 150, 30)
fig, ax = plt.subplots(figsize=(10, 5))
ax.matshow(data)
ax.axis(False)
For certain rules, the emergent structures interact in chaotic and interesting ways.
I hope you enjoyed this brief look into the world of elementary cellular automata, and are inspired to make some pretty pictures of your own.
Google Summer of Code 2020’s first evaluation is completed. I passed!!! Hurray! We are now midway through the second evaluation. This post discusses the progress made in the first two weeks of the second coding period, from 30 June to 12 July 2020.
We successfully created the matplotlib_baseline_images package, which contains the matplotlib and matplotlib toolkit baseline images. Symlinking is done for the baseline images; the related changes for Travis, AppVeyor, Azure Pipelines, etc. are functional; and tests/test_data is created as discussed in the previous blog. The PR has been reviewed and the suggested work is done.
We have divided the work into two parts. The first part is the generation of the baseline images, discussed below. The second part is the modification of the baseline images, which happens when some baseline images get modified due to a git push or git merge. Modification of baseline images will be further divided into two subtasks: the addition of new baseline images and the deletion of the previous ones. This will be discussed in the second half of the second phase of Google Summer of Code 2020.
After the changes proposed in the previous PR, the developer will have no baseline images on a fresh install of matplotlib and would need to install the sub-wheel matplotlib_baseline_images package to get started with testing mpl. We have now started removing the use of the matplotlib_baseline_images package, which requires the two steps discussed above.
The images can be generated by the image comparison tests. Once these images are generated for the first time, they can be used as the baseline images for comparison in later runs; this is the main principle adopted. The images are first created in the result_images directory, then moved to the lib/matplotlib/tests/baseline_images directory. Afterwards, running the pytests will start the image comparison.
I learned about pytest hooks and fixtures, and built a command line flag, matplotlib_baseline_image_generation, which creates the baseline images in the result_images directory. The full command is python3 -mpytest --matplotlib_baseline_image_generation. To do this, we made changes in conftest.py and also added markers to the image_comparison decorator.
I came to know about git worktree and the scenarios in which it can be used. I also learned more about virtual environments and their need in different scenarios.
Once the generation of the baseline images is completed in the current PR, we will move on to the modification of the baseline images in the second half of the second coding phase.
Monday to Thursday meetings are held at 11:00pm IST via Zoom. Meeting notes are available on HackMD.
I am grateful to be part of such a great community. The project is really interesting and challenging :) Thanks Thomas, Antony and Hannah for helping me so far.
Imagine zooming into an image over and over without ever running out of finer detail. It may sound bizarre, but the mathematical concept of fractals opens the realm toward this intriguing infinity. This strange geometry exhibits the same or similar patterns regardless of the scale. We can see one example of a fractal in the image above.
Fractals may seem difficult to understand due to their peculiarity, but that's not the case. As Benoit Mandelbrot, one of the founding fathers of fractal geometry, said in his legendary TED Talk:
A surprising aspect is that the rules of this geometry are extremely short. You crank the formulas several times and at the end, you get things like this (pointing to a stunning plot)
– Benoit Mandelbrot
In this tutorial blog post, we will see how to construct fractals in Python and animate them using Matplotlib's amazing Animation API. First, we will demonstrate the convergence of the Mandelbrot Set with an enticing animation. In the second part, we will analyze one interesting property of the Julia Set. Stay tuned!
We all have a common sense of the concept of similarity. We say two objects are similar to each other if they share some common patterns.
This notion is not only limited to a comparison of two different objects. We can also compare different parts of the same object. For instance, a leaf. We know very well that the left side matches exactly the right side, i.e. the leaf is symmetrical.
In mathematics, this phenomenon is known as self-similarity. It means a given object is similar (completely or to some extent) to some smaller part of itself. One remarkable example is the Koch Snowflake, shown in the image below: it has 6 bulges, which themselves have 3 sub-bulges, and these sub-bulges have another 3 sub-sub-bulges.
We can infinitely magnify some part of it and the same pattern will repeat over and over again. This is how fractal geometry is defined.
The Mandelbrot Set is defined over the set of complex numbers. It consists of all complex numbers c such that the sequence zᵢ₊₁ = zᵢ² + c, z₀ = 0 is bounded. This means that, after a certain number of iterations, the absolute value must not exceed a given limit. At first sight it might seem odd and simple, but in fact it has some mind-blowing properties.
The Python implementation is quite straightforward, as given in the code snippet below:
def mandelbrot(x, y, threshold):
    """Calculates whether the number c = x + i*y belongs to the
    Mandelbrot set. In order to belong, the sequence z[i + 1] = z[i]**2 + c
    must not diverge after 'threshold' number of steps. The sequence diverges
    if the absolute value of z[i+1] is greater than 4.

    :param float x: the x component of the initial complex number
    :param float y: the y component of the initial complex number
    :param int threshold: the number of iterations to consider it converged
    """
    # initial conditions
    c = complex(x, y)
    z = complex(0, 0)

    for i in range(threshold):
        z = z**2 + c
        if abs(z) > 4.0:  # it diverged
            return i

    return threshold - 1  # it didn't diverge
As we can see, we set the maximum number of iterations, encoded in the variable threshold. If the magnitude of the sequence at some iteration exceeds 4, we consider it diverged (c does not belong to the set) and return the iteration number at which this occurred. If this never happens (c belongs to the set), we return the maximum number of iterations.
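A quick sanity check of this behavior (the example values here are my own, not from the original post): c = −1 is famously in the set, since its sequence just oscillates 0, −1, 0, −1, …, while c = 1 escapes quickly through 0, 1, 2, 5, …:

```python
def mandelbrot(x, y, threshold):
    """Return the iteration at which z diverges, or threshold - 1 if it never does."""
    c = complex(x, y)
    z = complex(0, 0)
    for i in range(threshold):
        z = z**2 + c
        if abs(z) > 4.0:  # it diverged
            return i
    return threshold - 1  # it didn't diverge

print(mandelbrot(-1, 0, 50))  # 49: never diverges, so c = -1 is in the set
print(mandelbrot(1, 0, 50))   # 2: |z| = 5 > 4 on the third iteration
```

Points in the set always return threshold - 1, which is why the color scale below saturates there.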
We can use the information about the number of iterations before the sequence diverges. All we have to do is associate this number with a color relative to the maximum number of iterations. Thus, for all complex numbers c in some lattice of the complex plane, we can make a nice animation of the convergence process as a function of the maximum allowed iterations.
One particularly interesting area is the 3x3 lattice starting at position -2 and -1.5 for the real and imaginary axes respectively. We can observe the process of convergence as the number of allowed iterations increases. This is easily achieved using Matplotlib's Animation API, as shown in the following code:
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.animation as animation
x_start, y_start = -2, -1.5 # an interesting region starts here
width, height = 3, 3 # for 3 units up and right
density_per_unit = 250 # how many pixels per unit
# real and imaginary axis
re = np.linspace(x_start, x_start + width, width * density_per_unit)
im = np.linspace(y_start, y_start + height, height * density_per_unit)
fig = plt.figure(figsize=(10, 10)) # instantiate a figure to draw
ax = plt.axes() # create an axes object
def animate(i):
    ax.clear()  # clear axes object
    ax.set_xticks([], [])  # clear x-axis ticks
    ax.set_yticks([], [])  # clear y-axis ticks

    X = np.empty((len(re), len(im)))  # re-initialize the array-like image
    threshold = round(1.15 ** (i + 1))  # calculate the current threshold

    # compute the iterations for the current threshold
    # (m, n index the lattice so they don't shadow the frame number i)
    for m in range(len(re)):
        for n in range(len(im)):
            X[m, n] = mandelbrot(re[m], im[n], threshold)

    # associate colors to the iterations with an interpolation
    img = ax.imshow(X.T, interpolation="bicubic", cmap="magma")
    return [img]
anim = animation.FuncAnimation(fig, animate, frames=45, interval=120, blit=True)
anim.save("mandelbrot.gif", writer="imagemagick")
We make animations in Matplotlib using the FuncAnimation function from the Animation API. We need to specify the figure on which we draw a predefined number of consecutive frames. A predetermined interval, expressed in milliseconds, defines the delay between the frames.
In this context, the animate function plays a central role; its input argument is the frame number, starting from 0. This means that, in order to animate, we always have to think in terms of frames. Hence, we use the frame number to calculate the variable threshold, which is the maximum number of allowed iterations.
To represent our lattice we instantiate two arrays, re and im: the former for the values on the real axis and the latter for the values on the imaginary axis. The number of elements in these two arrays is defined by the variable density_per_unit, which defines the number of samples per unit step. The higher it is, the better quality we get, but at the cost of heavier computation.
Now, depending on the current threshold, for every complex number c in our lattice we calculate the number of iterations before the sequence zᵢ₊₁ = zᵢ² + c, z₀ = 0 diverges. We save them in an initially empty matrix called X. In the end, we interpolate the values in X and assign them a color drawn from a prearranged colormap.
After cranking the animate function multiple times we get a stunning animation, as depicted below:
The Julia Set is quite similar to the Mandelbrot Set. Instead of setting z₀ = 0 and testing whether, for some complex number c = x + i*y, the sequence zᵢ₊₁ = zᵢ² + c is bounded, we switch the roles a bit: we fix the value of c, set an arbitrary initial condition z₀ = x + i*y, and observe the convergence of the sequence. The Python implementation is given below:
def julia_quadratic(zx, zy, cx, cy, threshold):
    """Calculates whether the number z[0] = zx + i*zy with a constant c = cx + i*cy
    belongs to the Julia set. In order to belong, the sequence
    z[i + 1] = z[i]**2 + c must not diverge after 'threshold' number of steps.
    The sequence diverges if the absolute value of z[i+1] is greater than 4.

    :param float zx: the x component of z[0]
    :param float zy: the y component of z[0]
    :param float cx: the x component of the constant c
    :param float cy: the y component of the constant c
    :param int threshold: the number of iterations to consider it converged
    """
    # initial conditions
    z = complex(zx, zy)
    c = complex(cx, cy)

    for i in range(threshold):
        z = z**2 + c
        if abs(z) > 4.0:  # it diverged
            return i

    return threshold - 1  # it didn't diverge
Obviously, the setup is quite similar to the Mandelbrot Set implementation. The maximum number of iterations is denoted by threshold. If the magnitude of the sequence is never greater than 4, the number z₀ belongs to the Julia Set, and vice versa.
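To sanity-check julia_quadratic, the simplest case is c = 0 (an example of my own, not from the post): the sequence is then just repeated squaring, so starting points with |z₀| < 1 shrink toward 0 and stay bounded, while points with a large |z₀| blow up immediately:

```python
def julia_quadratic(zx, zy, cx, cy, threshold):
    """Return the iteration at which z diverges, or threshold - 1 if it never does."""
    z = complex(zx, zy)
    c = complex(cx, cy)
    for i in range(threshold):
        z = z**2 + c
        if abs(z) > 4.0:  # it diverged
            return i
    return threshold - 1  # it didn't diverge

print(julia_quadratic(0.5, 0, 0, 0, 30))  # 29: 0.5 squares toward 0, never diverges
print(julia_quadratic(3, 0, 0, 0, 30))    # 0: 3**2 = 9 > 4 on the first iteration
```

For c = 0 the Julia set boundary is exactly the unit circle, which makes it a handy test case before moving to more exotic values of c.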
The number c gives us the freedom to analyze its impact on the convergence of the sequence, given that the maximum number of iterations is fixed. One interesting range of values for c is c = r cos α + i × r sin α with r = 0.7885 and α ∈ [0, 2π].
The best way to make this analysis is to create an animated visualization as the number c changes. This improves our visual perception and understanding of such abstract phenomena in a captivating manner. To do so, we again use Matplotlib's Animation API, as demonstrated in the code below:
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.animation as animation
x_start, y_start = -2, -2 # an interesting region starts here
width, height = 4, 4 # for 4 units up and right
density_per_unit = 200 # how many pixels per unit
# real and imaginary axis
re = np.linspace(x_start, x_start + width, width * density_per_unit)
im = np.linspace(y_start, y_start + height, height * density_per_unit)
threshold = 20 # max allowed iterations
frames = 100 # number of frames in the animation
# we represent c as c = r*cos(a) + i*r*sin(a) = r*e^{i*a}
r = 0.7885
a = np.linspace(0, 2 * np.pi, frames)
fig = plt.figure(figsize=(10, 10)) # instantiate a figure to draw
ax = plt.axes() # create an axes object
def animate(i):
    ax.clear()  # clear axes object
    ax.set_xticks([], [])  # clear x-axis ticks
    ax.set_yticks([], [])  # clear y-axis ticks

    X = np.empty((len(re), len(im)))  # re-initialize the array-like image
    cx, cy = r * np.cos(a[i]), r * np.sin(a[i])  # the current value of c

    # compute the iterations for the given threshold
    # (m, n index the lattice so they don't shadow the frame number i)
    for m in range(len(re)):
        for n in range(len(im)):
            X[m, n] = julia_quadratic(re[m], im[n], cx, cy, threshold)

    img = ax.imshow(X.T, interpolation="bicubic", cmap="magma")
    return [img]
anim = animation.FuncAnimation(fig, animate, frames=frames, interval=50, blit=True)
anim.save("julia_set.gif", writer="imagemagick")
The logic in the animate function is very similar to the previous example: we update the number c as a function of the frame number, then estimate the convergence of all complex numbers in the defined lattice, given the fixed threshold of allowed iterations. As before, we save the results in an initially empty matrix X and associate them with a color relative to the maximum number of iterations. The resulting animation is illustrated below:
Fractals are really mind-boggling structures, as we saw in this post. First, we gave a general intuition for fractal geometry. Then we looked at two types of fractals: the Mandelbrot and Julia sets. We implemented them in Python and made interesting animated visualizations of their properties.
Google Summer of Code 2020’s first evaluation is almost complete. This post discusses the progress made in the last two weeks of the first coding period, from 15 June to 30 June 2020.
We successfully created the demo app and uploaded it to test.pypi. It contains the main and the secondary package; the main package is analogous to matplotlib and the secondary package is analogous to the matplotlib_baseline_images package, as discussed in the previous blog.
I came across another way to merge master into a branch to resolve conflicts: rebasing onto master. I also learned how to create modular commits inside a pull request for an easier review process and better understandability of the code.
We then implemented similar changes to create the matplotlib_baseline_images package, and were successful in uploading it to test.pypi. This package lives in the sub-wheels directory so that more packages can be added there if needed in the future. The matplotlib_baseline_images package contains the baseline images for both matplotlib and mpl_toolkits.
Some changes were required in the main matplotlib package's setup.py so that it does not pick up information from the packages present in the sub-wheels directory.
As the baseline images are moved out of the lib/matplotlib and lib/mpl_toolkits directories, we symlinked the locations where they are used, namely in lib/matplotlib/testing/decorator.py, tools/triage_tests.py, lib/matplotlib/tests/__init__.py and lib/mpl_toolkits/tests/__init__.py.
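As a toy sketch of that symlinking step (the directory names here are illustrative, not matplotlib's exact layout):

```python
# Toy sketch of symlinking a moved baseline_images directory back into
# the old in-tree location; names are hypothetical, not matplotlib's.
import tempfile
from pathlib import Path

root = Path(tempfile.mkdtemp())
old = root / "lib" / "matplotlib" / "tests"                     # where the code looks
new = root / "matplotlib_baseline_images" / "baseline_images"   # where images moved
old.mkdir(parents=True)
new.mkdir(parents=True)
(new / "plot.png").write_text("fake png")

# Point the old in-tree location at the relocated images
(old / "baseline_images").symlink_to(new, target_is_directory=True)
print((old / "baseline_images" / "plot.png").read_text())  # fake png
```

Code that resolves the old path keeps working, while the images themselves live in the separate package.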
There is some test data present in baseline_images that doesn’t need to be moved to the matplotlib_baseline_images package, so it is stored under the lib/matplotlib/tests/test_data folder.
I came across the Continuous Integration tools used at mpl. We tried to install matplotlib followed by the matplotlib_baseline_images package on all three CI services: Travis, AppVeyor and Azure Pipelines.
Once the current PR is merged, we will move on to the proposal for the baseline images problem.
A daily meeting is initiated at 11:00 pm IST via Zoom. Meeting notes are present at HackMD.
I am grateful to be part of such a great community. The project is really interesting and challenging :) Thanks Antony and Hannah for helping me so far.
The ocean is a key component of the Earth’s climate system. It thus needs continuous real-time monitoring to help scientists better understand its dynamics and predict its evolution. All around the world, oceanographers have managed to join their efforts and set up a Global Ocean Observing System, of which Argo is a key component. Argo is a global network of nearly 4000 autonomous probes or floats measuring pressure, temperature and salinity from the surface down to 2000m depth every 10 days. The localisation of these floats is nearly random between the 60th parallels (see live coverage here). All data are collected by satellite in real-time, processed by several data centers and finally merged into a single dataset (collecting more than 2 million vertical profiles) made freely available to anyone.
In this particular case, we want to plot temperature data (at the surface and at 1000m depth) measured by those floats, for the period 2010-2020 and for the Mediterranean Sea. We want this plot to be circular and animated; now you start to get the title of this post: Animated polar plot.
First we need some data to work with. To retrieve our temperature values from Argo, we use Argopy, which is a Python library that aims to ease Argo data access, manipulation and visualization for standard users, as well as Argo experts and operators. Argopy returns xarray dataset objects, which make our analysis much easier.
import pandas as pd
import numpy as np
from argopy import DataFetcher as ArgoDataFetcher
argo_loader = ArgoDataFetcher(cache=True)
# Query surface and 1000m temp in Med sea with argopy
df1 = argo_loader.region(
    [-1.2, 29.0, 28.0, 46.0, 0, 10.0, "2009-12", "2020-01"]
).to_xarray()
df2 = argo_loader.region(
    [-1.2, 29.0, 28.0, 46.0, 975.0, 1025.0, "2009-12", "2020-01"]
).to_xarray()
Here we create some arrays we’ll use for plotting: we set up a date array and extract the day of the year and the year itself, which will be useful later. To build our temperature arrays, we use xarray’s very useful where() and mean() methods. Then we build a pandas DataFrame, because it’s prettier!
# Weekly date array
daterange = np.arange("2010-01-01", "2020-01-03", dtype="datetime64[7D]")
dayoftheyear = pd.DatetimeIndex(
    np.array(daterange, dtype="datetime64[D]") + 3
).dayofyear  # middle of the week
activeyear = pd.DatetimeIndex(
    np.array(daterange, dtype="datetime64[D]") + 3
).year  # extract year

# Init final arrays
tsurf = np.zeros(len(daterange))
t1000 = np.zeros(len(daterange))

# Filling arrays
for i in range(len(daterange)):
    i1 = (df1["TIME"] >= daterange[i]) & (df1["TIME"] < daterange[i] + 7)
    i2 = (df2["TIME"] >= daterange[i]) & (df2["TIME"] < daterange[i] + 7)
    tsurf[i] = df1.where(i1, drop=True)["TEMP"].mean().values
    t1000[i] = df2.where(i2, drop=True)["TEMP"].mean().values

# Creating dataframe
d = {"date": np.array(daterange, dtype="datetime64[D]"), "tsurf": tsurf, "t1000": t1000}
ndf = pd.DataFrame(data=d)
ndf.head()
ndf.head()
This produces:
date tsurf t1000
0 2009-12-31 0.0 0.0
1 2010-01-07 0.0 0.0
2 2010-01-14 0.0 0.0
3 2010-01-21 0.0 0.0
4 2010-01-28 0.0 0.0
Then it’s time to plot. For that, we first need to import what we need and set some useful variables.
import matplotlib.pyplot as plt
import matplotlib
plt.rcParams["xtick.major.pad"] = "17"
plt.rcParams["axes.axisbelow"] = False
matplotlib.rc("axes", edgecolor="w")
from matplotlib.lines import Line2D
from matplotlib.animation import FuncAnimation
from IPython.display import HTML
big_angle = 360 / 12 # How we split our polar space
date_angle = (
    ((360 / 365) * dayoftheyear) * np.pi / 180
)  # For a day, a corresponding angle
# inner and outer ring limit values
inner = 10
outer = 30
# setting our color values
ocean_color = ["#ff7f50", "#004752"]
Now we want to style our axes the way we like, so we build a function dress_axes that will be called during the animation process. Here we plot some bars with an offset (a combination of bottom and ylim later on). Those bars are actually our background, and the offset allows us to plot a legend in the middle of the plot.
def dress_axes(ax):
    ax.set_facecolor("w")
    ax.set_theta_zero_location("N")
    ax.set_theta_direction(-1)
    # Here is how we position the months labels
    middles = np.arange(big_angle / 2, 360, big_angle) * np.pi / 180
    ax.set_xticks(middles)
    ax.set_xticklabels(
        [
            "January",
            "February",
            "March",
            "April",
            "May",
            "June",
            "July",
            "August",
            "September",
            "October",
            "November",
            "December",
        ]
    )
    ax.set_yticks([15, 20, 25])
    ax.set_yticklabels(["15°C", "20°C", "25°C"])
    # Changing radial ticks angle
    ax.set_rlabel_position(359)
    ax.tick_params(axis="both", color="w")
    plt.grid(None, axis="x")
    plt.grid(axis="y", color="w", linestyle=":", linewidth=1)
    # Here is the bar plot that we use as background
    bars = ax.bar(
        middles,
        outer,
        width=big_angle * np.pi / 180,
        bottom=inner,
        color="lightgray",
        edgecolor="w",
        zorder=0,
    )
    plt.ylim([2, outer])
    # Custom legend
    legend_elements = [
        Line2D(
            [0],
            [0],
            marker="o",
            color="w",
            label="Surface",
            markerfacecolor=ocean_color[0],
            markersize=15,
        ),
        Line2D(
            [0],
            [0],
            marker="o",
            color="w",
            label="1000m",
            markerfacecolor=ocean_color[1],
            markersize=15,
        ),
    ]
    ax.legend(handles=legend_elements, loc="center", fontsize=13, frameon=False)
    # Main title for the figure
    plt.suptitle(
        "Mediterranean temperature from Argo profiles",
        fontsize=16,
        horizontalalignment="center",
    )
From there we can draw the empty frame of our plot.
fig = plt.figure(figsize=(10, 10))
ax = fig.add_subplot(111, polar=True)
dress_axes(ax)
plt.show()
Then it’s finally time to plot our data. Since we want to animate the plot, we’ll build a function that will be called by FuncAnimation later on. Since the state of the plot changes at every time stamp, we have to redress the axes for each frame, which is easy with our dress_axes function. Then we plot our temperature data using basic plot() calls: thin lines for historical measurements, thicker lines for the current year.
def draw_data(i):
    # Clear
    ax.cla()
    # Redressing axes
    dress_axes(ax)
    # Limit between thin lines and thick line; this is basically the current date minus 51 weeks.
    # Why 51 and not 52? That creates a small gap before the current date, which is prettier.
    i0 = np.max([i - 51, 0])
    ax.plot(
        date_angle[i0 : i + 1],
        ndf["tsurf"][i0 : i + 1],
        "-",
        color=ocean_color[0],
        alpha=1.0,
        linewidth=5,
    )
    ax.plot(
        date_angle[0 : i + 1],
        ndf["tsurf"][0 : i + 1],
        "-",
        color=ocean_color[0],
        linewidth=0.7,
    )
    ax.plot(
        date_angle[i0 : i + 1],
        ndf["t1000"][i0 : i + 1],
        "-",
        color=ocean_color[1],
        alpha=1.0,
        linewidth=5,
    )
    ax.plot(
        date_angle[0 : i + 1],
        ndf["t1000"][0 : i + 1],
        "-",
        color=ocean_color[1],
        linewidth=0.7,
    )
    # Plotting a line to spot the current date easily
    ax.plot([date_angle[i], date_angle[i]], [inner, outer], "k-", linewidth=0.5)
    # Display the current year as a title, just beneath the suptitle
    plt.title(str(activeyear[i]), fontsize=16, horizontalalignment="center")
# Test it
draw_data(322)
plt.show()
Finally, it’s time to animate, using FuncAnimation. We can then save the result as an mp4 file, or display it in our notebook with HTML(anim.to_html5_video()).
anim = FuncAnimation(
    fig, draw_data, interval=40, frames=len(daterange) - 1, repeat=False
)
# anim.save('ArgopyUseCase_MedTempAnimation.mp4')
HTML(anim.to_html5_video())
I, Sidharth Bansal, had been waiting since the end of March for the coding period to start, so that I could get my hands dirty with the code. Finally, the coding period has started. Two weeks have passed. This blog contains information about the progress so far, from 1 June to 14 June 2020.
Initially, we thought of creating mpl-test and mpl packages. The mpl-test package would contain the test suite and baseline images, while the other package would contain the parts of the repository other than the test- and baseline-images-related files and folders. We changed our decision to creating mpl and mpl-baseline-images packages, as we don’t need a separate package for the entire test suite. Our main aim was to eliminate baseline_images from the repository. The mpl-baseline-images package will contain the data (baseline images) and related information. The other package will contain the files and folders other than the baseline images. We are now trying to create the following structure for the repository:
mpl/
    setup.py
    lib/mpl/...
    lib/mpl/tests/...  [contains the test .py files]
baseline_images/
    setup.py
    data/...  [contains the image files]
It will involve installing the baseline images as a separate package (pip install mpl-baseline-images).
I am creating a prototype first with two packages - a main package and a sub-wheel package. Once the demo app works well on Test PyPI, we can make similar changes to the main mpl repository. The structure of the demo app is analogous to the work needed to separate the baseline images into a new mpl-baseline-images package, as given below:
testrepo/
    setup.py
    lib/testpkg/__init__.py
    baseline_images/setup.py
    baseline_images/testdata.txt
This will also include the related MANIFEST files and setup.cfg.template files. The setup.py will also contain logic for the exclusion of the baseline-images folder from the main mpl package.
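A minimal sketch of that exclusion logic, assuming setuptools' find_packages and hypothetical package names (this is not the actual mpl setup.py):

```python
import tempfile
from pathlib import Path
from setuptools import find_packages

# Build a toy source tree mirroring the layout above (names are hypothetical)
root = Path(tempfile.mkdtemp())
for pkg in ["mainpkg", "mainpkg/tests", "sub_wheels", "sub_wheels/baseline_images"]:
    (root / pkg).mkdir(parents=True, exist_ok=True)
    (root / pkg / "__init__.py").touch()

# Excluding "sub_wheels*" keeps the main setup.py from picking up
# packages that live in the sub-wheels directory
pkgs = find_packages(where=str(root), exclude=["sub_wheels", "sub_wheels.*"])
print(sorted(pkgs))  # ['mainpkg', 'mainpkg.tests']
```

In a real setup.py, the resulting list would simply be passed as the `packages` argument to `setup()`.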
After the current PR is merged, we will focus on moving the baseline images out of the main repository and into the mpl-baseline-images package. Then we will make similar changes for the Travis CI.
Every Tuesday and Friday, a meeting is initiated at 8:30 pm IST via Zoom. Meeting notes are present at HackMD.
I am grateful to be part of such a great community. The project is really interesting and challenging :) Thanks Antony and Hannah for helping me so far.
To get acquainted with the basics of plotting with matplotlib, let’s try plotting how much distance an object in free fall travels with respect to time, and also its velocity at each time step.
If you have ever studied physics, you can tell that this is a classic case of Newton’s equations of motion, where
$$ v = a \times t $$
$$ S = 0.5 \times a \times t^{2} $$
We will assume an initial velocity of zero.
import numpy as np
time = np.arange(0.0, 10.0, 0.2)
velocity = np.zeros_like(time, dtype=float)
distance = np.zeros_like(time, dtype=float)
We know that under free-fall, all objects move with the constant acceleration of $$g = 9.8~m/s^2$$
g = 9.8 # m/s^2
velocity = g * time
distance = 0.5 * g * np.power(time, 2)
The above code gives us two numpy arrays populated with the distance and velocity data points.
When using matplotlib we have two approaches:

1. The pyplot interface / functional interface.
2. The object-oriented interface.

matplotlib on the surface is made to imitate MATLAB’s method of generating plots, which is called pyplot. All the pyplot commands make changes to and modify the same figure. This is a state-based interface, where the state (i.e., the figure) is preserved through various function calls (i.e., the methods that modify the figure). This interface allows us to quickly and easily generate plots. The state-based nature of the interface allows us to add elements and/or modify the plot as we need, when we need it.
This interface shares a lot of similarities in syntax and methodology with MATLAB. For example, if we want to plot a blue line where each data point is marked with a circle, we can use the format string 'bo-'.
import matplotlib.pyplot as plt
plt.figure(figsize=(9, 7), dpi=100)
plt.plot(time, distance, "bo-")
plt.xlabel("Time")
plt.ylabel("Distance")
plt.legend(["Distance"])
plt.grid(True)
The plot shows how much distance was covered by the free-falling object with each passing second.
plt.figure(figsize=(9, 7), dpi=100)
plt.plot(time, velocity, "go-")
plt.xlabel("Time")
plt.ylabel("Velocity")
plt.legend(["Velocity"])
plt.grid(True)
The plot below shows us how the velocity is increasing.
Let’s try to see what kind of plot we get when we plot both distance and velocity in the same plot.
plt.figure(figsize=(9, 7), dpi=100)
plt.plot(time, velocity, "g-")
plt.plot(time, distance, "b-")
plt.ylabel("Distance and Velocity")
plt.xlabel("Time")
plt.legend(["Distance", "Velocity"])
plt.grid(True)
Here, we run into some obvious and serious issues. We can see that since both quantities share the same axis but have very different magnitudes, the graph looks disproportionate. What we need to do is separate the two quantities onto two different axes. This is where the second approach to making plots comes into play.
Also, the pyplot approach doesn’t really scale when we are required to make multiple plots, or when we have to make intricate plots that require a lot of customisation. However, internally matplotlib has an object-oriented interface that can be accessed just as easily, and which allows us to reuse objects.
When using the OO interface, it helps to know how matplotlib structures its plots. The final plot that we see as the output is a Figure object. The Figure object is the top-level container for all the other elements that make up the graphic image. These “other” elements are called Artists. The Figure object can be thought of as a canvas, upon which different artists act to create the final graphic image. This Figure can contain any number of various artists.
Things to note about the anatomy of a figure are:

- Artists. Artists are basically all the elements that are rendered onto the figure. This can include text, patches (like arrows and shapes), etc. Thus, all the following Figure, Axes and Axis objects are also Artists.
- The Axes object. The Axes object holds the actual data that we are going to display. It will also contain the X- and Y-axis labels and a title. Each Axes object will contain two or more Axis objects.
- The Axis objects. The Axis objects set the data limits. They also contain the ticks and the tick labels; ticks are the marks that we see on an axis.

Understanding this hierarchy of Figure, Artist, Axes and Axis is immensely important, because it plays a crucial role in how we make an animation in matplotlib.
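This hierarchy can also be inspected directly. A small sketch (using the non-interactive Agg backend so it runs anywhere):

```python
import matplotlib
matplotlib.use("Agg")  # headless backend; no display needed
import matplotlib.pyplot as plt
from matplotlib.artist import Artist

fig, ax = plt.subplots()
(line,) = ax.plot([0, 1], [0, 1])

print(ax in fig.axes)                # True: the Figure holds its Axes
print(ax.xaxis.__class__.__name__)   # XAxis: each Axes owns Axis objects
print(isinstance(ax.xaxis, Artist))  # True: Axis objects are Artists too
print(line in ax.lines)              # True: plotted lines are Artists on the Axes
```

Every element you see on a plot can be reached this way, which is exactly what makes per-frame updates in an animation possible.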
Now that we understand how plots are generated, we can easily solve the problem we faced earlier. To make the velocity and distance plot make more sense, we need to plot each data item against a separate axis with a different scale. Thus, we will need one parent Figure object and two Axes objects.
fig, ax1 = plt.subplots()
ax1.set_ylabel("distance (m)")
ax1.set_xlabel("time")
ax1.plot(time, distance, "blue")
ax2 = ax1.twinx() # create another y-axis sharing a common x-axis
ax2.set_ylabel("velocity (m/s)")
ax2.set_xlabel("time")
ax2.plot(time, velocity, "green")
fig.set_size_inches(7, 5)
fig.set_dpi(100)
plt.show()
This plot is still not very intuitive. We should add a grid and a legend. Perhaps we can also change the color of the axis labels and tick labels to match the color of the lines.
But something very weird happens when we try to turn on the grid, which you can see at Cell 8: the grid lines don’t align with the tick labels on both Y-axes. We can see that the tick values matplotlib calculates on its own are not suitable to our needs, and thus we will have to calculate them ourselves.
fig, ax1 = plt.subplots()
ax1.set_ylabel("distance (m)", color="blue")
ax1.set_xlabel("time")
ax1.plot(time, distance, "blue")
ax1.set_yticks(np.linspace(*ax1.get_ybound(), 10))
ax1.tick_params(axis="y", labelcolor="blue")
ax1.xaxis.grid()
ax1.yaxis.grid()
ax2 = ax1.twinx() # create another y-axis sharing a common x-axis
ax2.set_ylabel("velocity (m/s)", color="green")
ax2.set_xlabel("time")
ax2.tick_params(axis="y", labelcolor="green")
ax2.plot(time, velocity, "green")
ax2.set_yticks(np.linspace(*ax2.get_ybound(), 10))
fig.set_size_inches(7, 5)
fig.set_dpi(100)