-
Contributing to a Github PR
Jul 6, 2022
As an open source maintainer, I spend a good amount of time mentoring and providing feedback on pull requests (PRs). In some cases, providing comments to a PR may not be enough and I’ve had the occasional need to step in and make code contributions back to a contributor’s PR. How to accomplish this was a mystery to me but these steps seemed to work well:
- install the Github CLI (e.g., `conda install -c conda-forge gh`)
- clone the main (parent) repo (owned by me) (e.g., `git clone https://github.com/TDAmeritrade/stumpy stumpy.git`)
- navigate to the PR page and copy the Github CLI command provided in the `< > Code` button in the upper right (e.g., `gh pr checkout 595`)
- navigate to the cloned repo in your terminal and execute the CLI command from Step 3
- finally, change the code (it will already be on the right branch), commit, and push (see the sketch below)
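Putting those steps together, a minimal end-to-end sketch might look like the following (the repo URL and PR number are just the examples from above, and the commit message is a placeholder):

```bash
# install the Github CLI into the current conda environment
conda install -c conda-forge gh

# clone the parent repo and check out the contributor's PR branch
git clone https://github.com/TDAmeritrade/stumpy stumpy.git
cd stumpy.git
gh pr checkout 595

# make your changes, then commit and push back to the contributor's branch
git commit -am "Fix failing unit test"
git push
```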
And that’s it! Thank you to all of those who chimed in with useful suggestions.
-
Handling Special Characters In Your Proxy Password
Apr 24, 2022
In the past, I’ve discussed how not to fetch conda packages behind a firewall and, instead, how to leverage your corporate proxy. However, in any modern tech environment, you will likely be required to include special characters in your password, which might cause your script to fail since special characters need to be “URL encoded”. This post builds upon our previous knowledge and provides a solution to this problem:
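The original snippet isn’t preserved in this archive; a minimal sketch of the idea, assuming a bash `proxy()` helper and using Python to URL-encode the password (the host, port, and credentials are placeholders), might look like this:

```bash
# define proxy environment variables with a URL-encoded password
proxy() {
    local user="my_username"
    local pass_raw='p@ss:word/with#specials'   # placeholder password
    # URL-encode the password so that special characters survive the proxy URL
    local pass_enc
    pass_enc=$(python -c "import urllib.parse, sys; print(urllib.parse.quote(sys.argv[1], safe=''))" "$pass_raw")
    export http_proxy="http://${user}:${pass_enc}@proxy.mycompany.com:8080"
    export https_proxy="$http_proxy"
}
```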
With the proxy function above, you can now update conda and install other packages with:
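For example (again, a sketch rather than the original commands):

```bash
proxy                      # set the proxy environment variables defined above
conda update -y conda      # conda honors the http_proxy/https_proxy variables
conda install -y numpy scipy pandas
```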
Now, you should be able to use the proxy to install the software that you need to be productive!
-
Installing And Updating Conda With A Proxy
Nov 7, 2021
In the past, I’ve discussed how not to fetch conda packages behind a firewall. In this post, let’s assume that you are working behind a firewall in some corporate environment but your network administrator has given you access to a username/password-protected proxy:port that allows you to reach the “outside world”:
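The proxy settings themselves aren’t preserved here; a minimal sketch of a bash helper (host, port, and credentials are placeholders) might be:

```bash
# export the proxy settings so that wget/curl/conda route through the proxy
proxy() {
    export http_proxy="http://my_username:my_password@proxy.mycompany.com:8080"
    export https_proxy="$http_proxy"
}
```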
Now, how do we download, install, and update `conda` but make sure that we direct our requests through our proxy? First, let’s download `miniconda` through our proxy and then install it:
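Something along these lines (the installer URL is the standard Miniconda Linux installer; adjust for your platform):

```bash
proxy   # set the proxy environment variables defined above
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
bash Miniconda3-latest-Linux-x86_64.sh -b -p "$HOME/miniconda3"
```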
With the proxy function above, you can now update conda and install other packages with:
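For example (a sketch, not the original commands):

```bash
proxy
conda update -y conda
conda install -y numpy scipy pandas
```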
Now, you should be able to use the proxy to install the software that you need to be productive!
-
Installing line_profiler For the Apple M1 ARM-based Chips
Apr 19, 2021
Recently, I found the need to install `line_profiler` on my Apple M1 machine in order to profile some code in Jupyter. While `pip` appeared to successfully install the package without any errors, I encountered segmentation faults when I loaded the package. Instead, the solution was to install directly from source:
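A sketch of a from-source install, assuming the upstream repo at github.com/pyutils/line_profiler (the exact steps in the original post may have differed):

```bash
git clone https://github.com/pyutils/line_profiler.git
cd line_profiler
python -m pip install .
```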
Now, you should be able to use `line_profiler`!
-
Adding LaTeX Equations to Github Issues
Mar 31, 2021
Recently, I found the need to embed some LaTeX equations in a Github issue and discovered that you can do this by using an HTML image tag along with the desired equation:
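The original tag isn’t preserved here; at the time, one common approach was to point an HTML `<img>` tag at GitHub’s math-rendering endpoint with a URL-encoded equation, for example:

```html
<!-- render y = mx + b as an image inside a Github issue -->
<img src="https://render.githubusercontent.com/render/math?math=y%20%3D%20mx%20%2B%20b">
```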
which should produce the rendered equation as an inline image.
Now, you should be able to embed this anywhere! Check out this thread to add additional formatting like superscript or fractions.
-
Installing Miniconda
Feb 19, 2021
When developing on a new VM, this is the basic setup that I like to build on top of:
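The original setup commands aren’t preserved; a minimal sketch of the kind of bootstrap described (assuming an apt-based Linux VM and the official Miniconda installer) might be:

```bash
sudo apt-get update && sudo apt-get install -y wget git vim
wget https://repo.anaconda.com/miniconda/Miniconda3-latest-Linux-x86_64.sh
bash Miniconda3-latest-Linux-x86_64.sh -b -p "$HOME/miniconda3"
"$HOME/miniconda3/bin/conda" init bash
```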
And then, after installing `miniconda`, I just need to exit and re-enter the VM.
-
Installing A Conda Environment
Jul 24, 2020
When developing STUMPY, I frequently and purposely delete my entire Python (conda) environment and start from scratch. This serves two purposes:
- It ensures that I continue developing and testing using the latest dependencies
- It helps to ensure that no new local dependencies have crept into STUMPY
However, every time I wipe out my environment (i.e., `rm -rf /path/to/miniconda3/`), I have to try and remember what I need to re-install. Instead, it’s far more efficient to document all of this in an `environment.yml` file:
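The original file isn’t preserved here; a plausible sketch for STUMPY development (the exact package list is an assumption) might be:

```yaml
name: base
channels:
  - conda-forge
dependencies:
  - numpy
  - scipy
  - numba
  - pandas
  - dask
  - pytest
  - flake8
  - black
```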
And then, after reinstalling `miniconda`, I just need to execute the following commands in the same directory as the `environment.yml` file:
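Likely something like (a sketch):

```bash
conda update -y conda
conda env update --name base --file environment.yml
```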
Note that this installs the desired packages in the base conda environment. To install this in a named environment, you’ll need:
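For example (the environment name here is just illustrative):

```bash
conda env create --name stumpy-dev --file environment.yml
conda activate stumpy-dev
```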
-
Finding the Top or Bottom K Elements from a NumPy Array
Jan 3, 2020
If you have a NumPy array, you can use `np.partition` to find the top k elements or the bottom k without having to sort the entire array first. To find the top k:
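A sketch (the array and k are placeholders):

```python
import numpy as np

a = np.random.rand(10_000)
k = 5
top_k = np.partition(a, -k)[-k:]   # top k values (not sorted)
```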
To find the bottom k:
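Similarly, reusing `a` and `k` from above:

```python
bottom_k = np.partition(a, k)[:k]  # bottom k values (not sorted)
```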
Alternatively, you can use `np.argpartition` to find the indices of either the top k elements or the bottom k without having to sort the entire array first. To find the top k:
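For example:

```python
top_k_idx = np.argpartition(a, -k)[-k:]    # indices of the top k values
top_k = a[top_k_idx]
```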
To find the bottom k:
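And:

```python
bottom_k_idx = np.argpartition(a, k)[:k]   # indices of the bottom k values
bottom_k = a[bottom_k_idx]
```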
-
Write and Read a NumPy Array
Dec 23, 2019
The documentation for writing/reading a NumPy array to/from a file without pickling is a tad hard to follow. One may need to do this when dealing with large NumPy arrays. Below is some simple code for writing and reading a NumPy array to a temporary file:
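A sketch of what that looks like (the array and its size are placeholders):

```python
import os
import tempfile

import numpy as np

arr = np.random.rand(1_000, 1_000)

# create a named temporary file with a .npy suffix so np.save doesn't append one
with tempfile.NamedTemporaryFile(suffix=".npy", delete=False) as tmp:
    fname = tmp.name

np.save(fname, arr, allow_pickle=False)        # write without pickling
loaded = np.load(fname, allow_pickle=False)    # read it back

assert np.array_equal(arr, loaded)
os.remove(fname)                               # clean up once we're done
```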
Here, we leverage the `tempfile` module for dynamically creating our intermediate files and then deleting them only after the file is no longer needed. This is rather useful if you are using the `multiprocessing` module and your files are created within a subprocess. The key part to all of this is the necessity to append the `.npy` extension to the file name that is created by `tempfile` and also to turn off pickling (this will allow us to create files that are larger than 4 GiB in size).
-
Ignoring Requests SSL Verification in Third Party Apps
Dec 3, 2019
WARNING: This is unsafe!
One of the most annoying things is not being able to perform an https request using a third party application (e.g., twine) if you’re behind a network firewall that requires SSL verification. Luckily, you can turn this off by temporarily modifying the requests package. Open `~/miniconda3/lib/python3.7/site-packages/requests/sessions.py` and, in the `Session` class, set:
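That is, inside `Session.__init__` (a sketch of the one-line change):

```python
# in requests/sessions.py, inside the Session class
self.verify = False   # originally: self.verify = True
```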
Alternatively, directly from the command line, you might be able to tell the `ssl` Python module to temporarily disable SSL verification before running the Python program:
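The exact one-liner isn’t preserved here; one common (equally hacky) approach is to monkeypatch the `ssl` module before anything else runs, for example:

```python
# e.g., drop this into a sitecustomize.py or at the very top of the program
import ssl

ssl._create_default_https_context = ssl._create_unverified_context
```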
Note that this is a complete hack and not usually advised, especially if the site that you are visiting is not trusted. This is only here for demonstration purposes. Do not use this code!
-
Fetching Code Repositories Without SSL
Nov 6, 2019
One of the most annoying things is not being able to clone a code repository from trusted sources such as Github or Bitbucket if you’re behind a network firewall that requires SSL verification. Luckily, you can turn this off in Git via:
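The original command isn’t preserved here; the standard way to do this is:

```bash
git config --global http.sslVerify false
```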
If code (or a file) is retrieved via `wget`, then you can turn off certificate checking globally with:
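For instance, by adding this setting to your `~/.wgetrc` (or by passing `--no-check-certificate` per command):

```bash
echo "check_certificate = off" >> ~/.wgetrc
```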
-
Creating a Time-Dependent Sleep Function
Oct 23, 2019
I’ve been retrieving data from a database that is being accessed by many people and it is especially busy during the mornings. So, to avoid this congestion, I set a simple timer in my Python code that detects the (Eastern) time and sleeps for a minute until it is outside of the congested hours. Here’s what it looks like:
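The original function isn’t preserved in this archive; a minimal sketch of the idea (the congested window of 6 am to 10 am Eastern is a placeholder) might be:

```python
import time
from datetime import datetime

import pytz

def sleep_through_congestion(start_hour=6, end_hour=10):
    """Sleep in one-minute increments while it is "rush hour" in Eastern time."""
    eastern = pytz.timezone("US/Eastern")
    while start_hour <= datetime.now(eastern).hour < end_hour:
        time.sleep(60)

sleep_through_congestion()
# ... now it's safe to hit the database
```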
-
Special Cron Job
Oct 16, 2019
Today, I encountered a really nasty scripting “bug” that involves running a cron job. I have a simple script called `run_monitor.sh` (sketched below) that:
- Curls a website to get its response status
- Checks the response status
- Updates the cron job depending on the status
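The script itself isn’t preserved here; a rough sketch of what such a script might look like (the URL, the crontab files, and the `$APPDIR` variable are placeholders, and note the `pwd`-based `APPDIR` that causes the trouble described next):

```bash
#!/bin/bash
APPDIR=$(pwd)   # <-- this is the subtle culprit

# curl the website and capture only the HTTP response status
STATUS=$(curl -s -o /dev/null -w "%{http_code}" https://example.com)

if [ "$STATUS" -ne 200 ]; then
    # site is down: switch to a more frequent monitoring schedule
    crontab "$APPDIR/cron_frequent.txt"
else
    crontab "$APPDIR/cron_normal.txt"
fi
```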
Things seemed fine when I ran the script locally and it would update the cron job as I had expected. So, I let it go on its merry way. A couple of weeks later, the site that I was monitoring went down but I didn’t get any notifications. Again, running the above code directly from the command line worked perfectly but the cron job was not doing anything. Upon further inspection, the cron job was somehow looking for the `run_monitor.sh` script in the home directory. Then it dawned on me that `pwd` was the problem. Since the cron job is being executed relative to the home directory, the `pwd` command within the script actually returns the home directory and not the location of the `run_monitor.sh` script. So, instead, we need the `$APPDIR` to point to the location of the `run_monitor.sh` script like this:
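That is, derive `APPDIR` from the script’s own path rather than from the current working directory (a sketch):

```bash
# resolve the directory that run_monitor.sh actually lives in
APPDIR="$(cd "$(dirname "$0")" && pwd)"
```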
Problem solved! It’s a subtle bug that was hard to detect/debug but became obvious only after I spent some time thinking about it. I hope to never encounter this again!
-
Detecting File Encodings
Oct 16, 2019
There have been numerous times where I’ve tried to read a CSV file into a Pandas DataFrame and it fails due to the file encoding. The best thing to do is to detect the file encoding by reading a few lines from the file and then passing that encoding to Pandas. The file encoding detection part can be done with the `chardet` package and below is a convenience function for grabbing the encoding for the first `n_lines`:
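The original function isn’t preserved; a sketch with the same shape might be:

```python
import chardet

def detect_encoding(fname, n_lines=20):
    """Detect the file encoding from the first `n_lines` of the file."""
    with open(fname, "rb") as f:
        raw = b"".join([f.readline() for _ in range(n_lines)])
    return chardet.detect(raw)["encoding"]
```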
Then you can call this function with a file name:
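For example (the file name is hypothetical):

```python
import pandas as pd

encoding = detect_encoding("data.csv")
df = pd.read_csv("data.csv", encoding=encoding)
```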
I hope this helps you too!
-
Checking if a Local Dask Cluster is Running
Oct 8, 2019
One thing that I find myself doing fairly often is spinning up a local Dask cluster, forgetting that it is running, and then starting it up again on a different port. This is here as a reminder that you can first `try` to connect to the cluster and, if it times out, then create the `LocalCluster` on the fly:
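A sketch (the scheduler address and the short timeout are placeholders):

```python
from dask.distributed import Client, LocalCluster

try:
    # attempt to connect to an already-running local cluster
    client = Client("tcp://localhost:8786", timeout="2s")
except OSError:
    # nothing is running, so spin up a LocalCluster on the fly
    cluster = LocalCluster()
    client = Client(cluster)

print(client)
```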
Since I always forget how to do this, I hope that others will find this helpful!
-
Conda Base Ignores Linux .Profile
Aug 5, 2019
Recently, I reinstalled miniconda and allowed it to set up `conda init` on Mac OSX. From what I could tell, it:
1. Updates `.bash_profile` and checks to see if `conda.sh` exists and executes it
2. Otherwise, if 1. is False, then pre-pends the `miniconda3/bin` directory to `$PATH`
3. Adds `. /Users/law978/miniconda3/etc/profile.d/conda.sh` to the `.profile` and removes references to pre-pending `miniconda3/bin` to `$PATH`
Now, this seemed great but I noticed that all of the aliases that had been set in `.profile` were being ignored. In fact, it looks like `.profile` wasn’t being sourced at all. When I looked inside of `.bash_profile`, I realized that the usual command to source the contents of `.profile` was missing:
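The missing snippet is the standard three-liner (a sketch):

```bash
if [ -f ~/.profile ]; then
    . ~/.profile
fi
```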
Simply adding these three lines back to `.bash_profile` did the trick. So, now, all of my aliases are loaded when a new terminal is opened and `conda` is initialized according to plan.
-
Leveraging Sparse Arrays for Large-ish (Feature) Data Sets
Jul 26, 2019
When building a machine learning model, you’ll often have your feature data stored in one or more database tables (e.g., with a couple of million rows and maybe a thousand columns) where each row represents, say, an individual user and each column represents some feature that you’ve engineered (i.e., number of logins to your site, number of daily purchases, number of ads clicked on, 52 consecutive columns storing a weekly moving average over a year, etc). Since storage is cheap and you can’t afford to pay for a compute cluster, your instinct might be to download the full table onto your laptop as a quick and dirty CSV file or, possibly, in a more portable and compact parquet format. However, depending on your local computer hardware, trying to load that full data set into a Pandas DataFrame might consume all of your memory and cause your system to grind to a halt!
Luckily, there is one approach that a lot of people often overlook but that might work wonderfully for you. The secret is to exploit the inherent sparsity that likely exists within your large data set!
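As a rough illustration of the idea (not from the original post), a mostly-zero feature matrix can be handed to SciPy’s sparse formats and shrink dramatically in memory:

```python
from scipy import sparse

# fake feature matrix: 10,000 users x 1,000 features with ~1% nonzero entries
X = sparse.random(10_000, 1_000, density=0.01, format="csr", random_state=0)

dense_mb = X.shape[0] * X.shape[1] * 8 / 1e6                      # as a float64 ndarray
sparse_mb = (X.data.nbytes + X.indices.nbytes + X.indptr.nbytes) / 1e6

print(f"dense equivalent: ~{dense_mb:.0f} MB, sparse: ~{sparse_mb:.1f} MB")
```

The design point is simply that CSR only stores the nonzero values plus their indices, so the more sparse your engineered features are, the bigger the savings.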
-
A Simple Restart Script
Jul 20, 2019
Let’s say that you have a Python script (or other command), called run.py, that you’d like to run in the background on a Linux OS and then end the session:
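That is, something like:

```bash
python run.py
```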
To prevent the command from being killed when the session ends or times out, one common approach is to use:
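Probably something like (a sketch; `LOG` is just a file name):

```bash
nohup python run.py > LOG 2>&1 &
```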
This will execute the command in the background and also write any output to a LOG file. However, if I come back months later and forget that it is still running and I execute this command again then I might end up having multiple processes running the same command. To avoid this, we can do a little better and ensure that we kill the existing process. This typically requires recording the PID of the original process:
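A sketch of a small restart script (the `run.pid` PID file name is an assumption):

```bash
#!/bin/bash
# kill the previous instance (if any), then start a fresh one
if [ -f run.pid ]; then
    kill "$(cat run.pid)" 2>/dev/null
fi

nohup python run.py > LOG 2>&1 &
echo $! > run.pid
```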
This simple little trick has come in handy many times and I hope you find it useful too!
-
STUMPY - A Re-newed Approach to Time Series Analysis
May 13, 2019
Thanks to the support of TD Ameritrade, I recently open sourced (BSD-3-Clause) a new, powerful, and scalable Python library called STUMPY that can be used for a variety of time series data mining tasks. At the heart of it, this library takes any time series or sequential data and efficiently computes something called the matrix profile, which, with only a few extra lines of code, enables you to perform:
- pattern/motif (approximately repeated subsequences within a longer time series) discovery
- anomaly/novelty (discord) discovery
- shapelet discovery
- semantic segmentation
- density estimation
- time series chains (temporally ordered set of subsequence patterns)
- and more…
First, let’s install stumpy with Conda (preferred):
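STUMPY is published on conda-forge:

```bash
conda install -c conda-forge stumpy
```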
or, alternatively, you can install stumpy with Pip:
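Or via PyPI:

```bash
python -m pip install stumpy
```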
Once stumpy is installed, typical usage would be to take your time series and compute the matrix profile:
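A minimal sketch of typical usage (random data standing in for a real time series; `m` is the subsequence window size):

```python
import numpy as np
import stumpy

ts = np.random.rand(10_000)           # your time series
m = 50                                # subsequence (window) length
matrix_profile = stumpy.stump(ts, m)  # column 0 holds the matrix profile values
```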
For a more detailed example, check out our tutorials and documentation or feel free to file a Github issue. We welcome contributions in any form!
I’d love to hear from you so let me know what you think!
-
Discussing Data Science R&D on the DataFramed Podcast
Apr 1, 2019
Due to DataCamp’s internal mishandling of the sexual assault, I will no longer be promoting this podcast recording and, instead, encourage you to read more about what happened here.
I was recently invited to sit down and chat with Hugo Bowne-Anderson on the DataFramed podcast to talk Data Science R&D. Have a listen and leave your comments below!
-
Setting Values of a Sparse Matrix
Feb 27, 2019
Let’s say that you have a sparse matrix:
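For concreteness, here is a small sparse matrix to play with (this example matrix is mine, not necessarily the one from the original post):

```python
import numpy as np
from scipy import sparse

dense = np.array([[0, 1, 0, 4],
                  [2, 0, 3, 0],
                  [0, 5, 0, 6]])
matrix = sparse.csr_matrix(dense)
```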
One of the most common things that you might want to do is to make a conditional selection from the matrix and then set those particular elements of the matrix to, say, zero. For example, we can take our matrix from above and set all elements that have a value that are less than three to zero. Naively, one could do:
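Something like:

```python
# conditional selection + assignment directly on the sparse matrix
matrix[matrix < 3] = 0
```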
This works and is fine for small matrices. However, you’ll likely encounter a warning message such as the following:
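The warning is SciPy’s `SparseEfficiencyWarning`, which reads roughly as follows:

```
SparseEfficiencyWarning: Comparing a sparse matrix with a scalar greater than zero
using < is inefficient, try using >= instead.
```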
The problem here is that, for large sparse matrices, the majority of the matrix is full of zeros and so the `<` comparison becomes highly inefficient. Instead, you really only want to perform your comparison with the nonzero elements of the matrix. However, this takes a little more work and a few more lines of code to accomplish the same thing. Additionally, we want to avoid converting our sparse matrices into costly dense arrays.
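A sketch of the nonzero-only approach (operating directly on the underlying `data` array and then pruning the explicit zeros):

```python
# only compare against the stored (nonzero) values
matrix.data[matrix.data < 3] = 0

# remove the entries that we just zeroed out from the sparse structure
matrix.eliminate_zeros()
```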
-
Pip Installing Wheels with Conda GCC/G++
Jan 17, 2019
I was trying to `pip install` a simple package that contained wheels that needed to be compiled with both GCC and G++. Of course, not being able to use sudo (i.e., `yum install gcc`) meant that I needed to rely on my good friend, Conda:
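Probably something along the lines of (a sketch; these are conda’s Linux compiler metapackages):

```bash
conda install gcc_linux-64 gxx_linux-64
```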
Now, we aren’t done yet! According to the conda documentation, the compilers are found in `/path/to/anaconda/bin` but the `gcc` and `g++` executables are prefixed with something like `x86_64-conda-cos6-linux-gnu-gcc`. So, we’ll need to create some symbolic links to these executables:
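For example (paths are illustrative):

```bash
cd /path/to/anaconda/bin
ln -s x86_64-conda-cos6-linux-gnu-gcc gcc
ln -s x86_64-conda-cos6-linux-gnu-g++ g++
```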
Alternatively, you can avoid the symbolic links by setting your environment variables accordingly:
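For instance:

```bash
export CC=/path/to/anaconda/bin/x86_64-conda-cos6-linux-gnu-gcc
export CXX=/path/to/anaconda/bin/x86_64-conda-cos6-linux-gnu-g++
```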
Now, my pip install command is able to compile the wheel successfully!
-
Compiling Facebook's StarSpace with Conda Boost
Jun 17, 2018
Recently, I was playing around with Facebook’s StarSpace, a general-purpose neural model for efficient learning of entity embeddings for solving a wide variety of problems. According to the installation instructions, you need a C++11 compiler and the Boost library. I already had GCC installed and Boost was only a quick conda command away:
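Likely just (a sketch):

```bash
conda install boost
```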
The StarSpace Makefile is hardcoded to look for the Boost library in /usr/local/bin/boost_1_63_0/, which is a problem. But how should I modify the StarSpace Makefile so that it knew where to find the Boost library? After a little digging, I found the Boost files in `/path/to/anaconda/include`. So, all I had to do was modify the following line in the StarSpace Makefile:
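That is, pointing the Boost include directory at the conda installation (the variable name follows the StarSpace Makefile of that era; a sketch):

```makefile
BOOST_DIR = /path/to/anaconda/include/
```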
I executed `make` on the command line and everything compiled nicely! Yay!
-
I'm Melting! From Wide to Long Format and Quarterly Groupby
Dec 27, 2017
Recently, a colleague of mine asked me how one might go about taking a dataset that is in wide format and converting it into long format so that you could then perform some groupby operations by quarter.
Here’s a quick example to illustrate one way to go about this using the Pandas `melt` function.
Getting Started
Let’s import the Pandas package:
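Presumably just:

```python
import pandas as pd
```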
Load Some Data
First, we’ll create a fake dataframe that contains the name of a state and city along with some data for each month in the year 2000. For simplicity, imagine that the data are the number of Canadians spotted eating poutine.
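A sketch of such a wide-format dataframe and the melt/groupby-by-quarter step (the values and city names are made up):

```python
import numpy as np

months = [f"2000-{m:02d}" for m in range(1, 13)]
df_wide = pd.DataFrame(
    np.random.randint(0, 100, size=(4, 12)),
    columns=months,
)
df_wide.insert(0, "state", ["Ontario", "Ontario", "Quebec", "Quebec"])
df_wide.insert(1, "city", ["Toronto", "Ottawa", "Montreal", "Quebec City"])

# wide -> long, then group by quarter and sum
df_long = df_wide.melt(id_vars=["state", "city"], var_name="month", value_name="poutine_count")
df_long["month"] = pd.to_datetime(df_long["month"])
quarterly = df_long.groupby(["state", "city", df_long["month"].dt.to_period("Q")])["poutine_count"].sum()
```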
-
Select Rows with Keys Matching Multiple Columns in Subquery
Oct 17, 2016
When you query a database table using SQL, you might find the need to:
- select rows from table A using a certain criteria (i.e., a WHERE clause)
- then, use one or more columns from the result set (coming from the above query) as a subquery to subselect from table B
You can do this quite easily in SQL
Suppose we have two Pandas dataframes, `df_vals` and `df_colors`:

df_vals:
  key1 key2  val
0    A    B    2
1    A    Z    3
2    C    D    4
3    E    F    5
4    G    H    6

df_colors:
    color key1 key2
0     red    A    B
1  orange    A    Z
2  yellow    C    D
3   green    E    F
4    blue    G    H
So, if we wanted to grab all rows from df_colors where the value in df_vals is inclusively between 2 and 6, then:
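A sketch of one way to do it in Pandas, mirroring the multi-column subquery with an inner merge on both key columns (the dataframe construction simply matches the tables shown above):

```python
import pandas as pd

df_vals = pd.DataFrame({"key1": list("AACEG"), "key2": list("BZDFH"), "val": [2, 3, 4, 5, 6]})
df_colors = pd.DataFrame({"color": ["red", "orange", "yellow", "green", "blue"],
                          "key1": list("AACEG"), "key2": list("BZDFH")})

# rows of df_vals whose val is inclusively between 2 and 6, keyed on both columns
keys = df_vals.loc[df_vals["val"].between(2, 6), ["key1", "key2"]]

# the Pandas analogue of the SQL subquery: an inner merge on both key columns
result = df_colors.merge(keys, on=["key1", "key2"], how="inner")
```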
-
Pandas Split-Apply-Combine Example
May 28, 2016
There are times when I want to use split-apply-combine to save the results of a groupby to a json file while preserving the resulting column values as a list. Before we start, let’s import Pandas and generate a dataframe with some example email data.
Import Pandas and Create an Email DataFrame
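A sketch of the setup and of the groupby-to-list step (the email data here is invented):

```python
import pandas as pd

df = pd.DataFrame({
    "sender": ["alice@example.com", "alice@example.com", "bob@example.com"],
    "recipient": ["bob@example.com", "carol@example.com", "alice@example.com"],
})

# split-apply-combine: collect each sender's recipients into a list
grouped = df.groupby("sender")["recipient"].apply(list).reset_index()

# write the result out as json
grouped.to_json("emails.json", orient="records")
```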
-
Pandas End-to-End Example
May 25, 2016
The indexing capabilities that come with Pandas are incredibly useful. However, I find myself forgetting the concepts beyond the basics when I haven’t touched Pandas in a while. This tutorial serves as my own personal reminder but I hope others will find it helpful as well.
To motivate this, we’ll explore a baseball dataset and plot batting averages for some of the greatest players of all time.
-
Fetching Conda Packages Behind a Firewall
Dec 23, 2015
WARNING: This is unsafe!
One of the most annoying things is not being able to update software if you’re behind a network firewall that requires SSL verification. You can turn this off in Anaconda via
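Presumably:

```bash
conda config --set ssl_verify false
```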
and for pip via
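For example (host names and the package are placeholders):

```bash
pip install --trusted-host pypi.org --trusted-host files.pythonhosted.org some_package
```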
Optionally, you can also specify the package version like this:
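For instance:

```bash
pip install --trusted-host pypi.org --trusted-host files.pythonhosted.org some_package==1.2.3
```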
Better yet, you can permanently set the trusted-host by adding the following to the $HOME/.pip/pip.conf file:
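A sketch of the pip.conf entry:

```ini
[global]
trusted-host = pypi.org
               files.pythonhosted.org
```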
Alternatively, you can also temporarily disable SSL verification from the command line with the following (but, again, do not use this code!):
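The original command isn’t preserved; for conda, the `-k/--insecure` flag does this on a per-command basis (a sketch, with a placeholder package name):

```bash
conda install --insecure some_package
```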
-
Convert a Pandas DataFrame to Numeric
Dec 15, 2015
Pandas has deprecated the use of `convert_objects` to convert a dataframe into, say, float or datetime. Instead, for a series, one should use:
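A sketch:

```python
import pandas as pd

s = pd.Series(["1.0", "2.5", "not_a_number"])
s_numeric = pd.to_numeric(s, errors="coerce")   # or pd.to_datetime(s) for dates
```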
or, for an entire dataframe:
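For example (assuming `df` is your dataframe):

```python
df_numeric = df.apply(pd.to_numeric, errors="coerce")
```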
-
Python 2 Unicode Problem
Dec 1, 2015
The following Python error is one of the most annoying ones I’ve ever encountered:
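The original traceback isn’t preserved; it’s the classic Python 2 encode/decode failure, which you can reproduce with something like:

```python
# Python 2
s = u'caf\xe9'
print(s.encode('ascii'))
# raises something like:
# UnicodeEncodeError: 'ascii' codec can't encode character u'\xe9' in position 3:
# ordinal not in range(128)
```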
Essentially, you can’t write unicode characters as a string unless you’ve converted the text to a string first before printing it. A detailed explanation can be found in Kumar McMillan’s wonderful talk titled ‘Unicode in Python, Completely Demystified’. To summarize, McMillan offers three useful yet simple rules: decode early, unicode everywhere, and encode late.
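In code form, a Python 2-style sketch of those three rules (`raw_bytes` and `output_file` are placeholders):

```python
# Python 2
# 1. decode early: turn bytes into unicode as soon as they enter your program
text = raw_bytes.decode('utf-8')

# 2. unicode everywhere: do all string manipulation on unicode objects
text = text.strip().lower()

# 3. encode late: only encode back to bytes at the boundary (file, socket, print)
output_file.write(text.encode('utf-8'))
```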
-
2015 Nobel Prize in Chemistry Awarded to DNA Repair
Oct 7, 2015
Today, the 2015 Nobel Prize in Chemistry was awarded to the field of DNA repair. I am especially excited by this news since I had spent six years researching the role that DNA base-flipping plays in DNA repair when I was a graduate student studying at Michigan State University under the mentorship of Dr. Michael Feig. Thus, my research sat at the crossroads between the exciting worlds of computational chemistry (which was awarded the Nobel Prize in Chemistry two years ago in 2013) and DNA repair which have ultimately shaped my appreciation for doing science.
Dr. Paul Modrich, one of the three Nobel Prize recipients this year, is a pioneer in the field of DNA mismatch repair and has spent decades trying to understand the mechanism by which humans (and other eukaryotes) maintain the efficacy and fidelity of their genome. As a computational biochemist/biophysicist, I am honored to have had the opportunity to make significant contributions to this field of research and am delighted to see this area be recognized!
Other scientists who have also made an impact in the area of DNA mismatch repair include (in no particular order) Drs. Richard Kolodner, Richard Fishel, Thomas Kunkel, Dorothy Erie, Manju Hingorani, Peggy Hsieh, Shayantani Mukherjee, Alexander Predeus, Meindert Lamers, Titia Sixma, et al.
Congratulations to all!
-
Installing Downloaded Anaconda Python Packages
Sep 22, 2015
If you work in a secure network at your job then conda may not be able to hit the Anaconda repositories directly even if it’s for accessing free packages. Additionally, it’s not recommended to use pip over conda when installing new packages. However, installing new packages can be done manually by:
- Downloading the package(s) (and its necessary dependencies) directly from the Continuum Repo
- And installing the tar.bz2 file using `conda install ./package_name.tar.bz2`
-
Using NumPy Argmin or Argmax Along with a Conditional
Sep 10, 2015
It’s no secret that I love me some Python! Yes, even more than Perl, my first love from my graduate school days.
I’ve always found NumPy to be great for manipulating, analyzing, or transforming arrays containing large numerical data sets. It is both fast and efficient and it comes with a tonne of great functions.
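The original example isn’t preserved in this archive; the trick the title refers to is typically something like the following, where you want the index of the max (or min) among only the elements that satisfy a condition:

```python
import numpy as np

a = np.array([3, 7, 1, 9, 4, 9, 2])
condition = a < 9

# index (into the original array) of the largest element that is < 9
valid_idx = np.where(condition)[0]
idx = valid_idx[np.argmax(a[condition])]   # -> 1 (the value 7)

# similarly, np.argmin(a[condition]) gives the smallest such element
```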
-
Modern Data Scientist
Aug 8, 2015
The qualities of a modern data scientist are summed up very nicely in this article/guide and image by Marketing Distillery. As they point out, the team should be composed of people with a “mixture of broad, multidisciplinary skills ranging from an intersection of mathematics, statistics, computer science, communication and business”. More importantly:
“Being a data scientist is not only about data crunching. It’s about understanding the business challenge, creating some valuable actionable insights to the data, and communicating their findings to the business”.
I couldn’t agree more!
-
Anaconda Environment
Jul 9, 2015
I’ve been using Continuum’s enterprise Python distribution package, Anaconda, for several months now and I love it. Recently, people have been asking about Python 2.7 vs Python 3.x and so I looked into how to switch between these environments using Anaconda.
In fact, it’s quite straightforward and painless with Anaconda.
To set up a new environment with Python 3.4:
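Presumably something like (the environment name is illustrative; `source activate` was the syntax of that era):

```bash
conda create --name py34 python=3.4 anaconda
source activate py34
```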
-
Stitch Fix Loves UNIX
May 27, 2015
The wonderful group of people at Stitch Fix has shared an informative list of useful UNIX commands. Go check it out now!
-
Drafts in Jekyll
Mar 14, 2015
The great thing about Jekyll is that you can start writing a draft without publishing it and still be able to see the post locally.
- Create a draft directory called `_drafts` in the root directory
- Create a new post in this directory but omit the date in the file name
- Serve up the page locally using `jekyll serve --drafts`
Then, Jekyll will automatically adjust the date of the post to today’s date and display the post as the most recent post. Note that this post won’t be displayed on your github pages since they aren’t using the `--drafts` option. So, you’ll be able to save all of your drafts without worrying about them showing up on your live site. Once the post is ready for prime time, then simply move it over to the `_posts` directory and prepend a date to the file name. That’s it!
-
3D Coordinates Represented on a 2D Triangle
Mar 6, 2015
I came across this interesting way of showing 3D coordinates on a 2D triangle published in the Journal of Physical Chemistry B. It takes a minute to orient yourself and figure out how best to interpret the results but the idea is pretty cool. I wonder what type of geometric transformation is needed to create this plot. Once I figure it out, I’ll be sure to blog about it and prototype it in Python!
-
MathJax
Feb 28, 2015
As I embark on the PyESL project, I’ll need to include math equations in future blog posts. The easiest way to accomplish this is to use MathJax so that I can incorporate Tex/LaTeX/MathML-based equations within HTML. In Jekyll, all you need to do is add the MathJax javascript to the header section of your default.html and add a new variable to your _config.yml file.
<head>
  ...
  <script type="text/javascript"
    src="http://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML">
  </script>
  ...
</head>
and add the following to your _config.yml file:
markdown: kramdown
For example, this markdown:
Inline equation \\( {y} = {m}{x} + {b} \\) and block equation \\[ {y} = {m}{x}+{b} \\]
produces:
Inline equation \( {y} = {m}{x}+{b} \) and block equation \[ {y} = {m}{x}+{b} \]
Here, the parentheses denote an inline equation while the square brackets denote a block equation.
And this is a multiline equation:
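The original equation isn’t preserved here; as an illustration, a multiline (aligned) block that MathJax can render might look like:

```latex
\begin{align}
  y &= m x + b \\
    &= 2 x + 1
\end{align}
```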
-
Tag Aware Previous/Next Links for Jekyll
Feb 22, 2015
Creating and maintaining a vanilla Jekyll-Bootstrap website is pretty straightforward. However, I couldn’t find an obvious way to customize the previous/next links below each blog post so that:
- The links were aware of the tags listed in the front matter
- The method did not depend on a plugin (since my site is being hosted on Github)
After tonnes of digging, I managed to piece together a Liquid-based solution (see my last post to see how I add Liquid code in Jekyll)!
-
Escaping Liquid Code in Jekyll
Feb 21, 2015
To document some of my challenges in customizing this site, I’ve had to delve into Liquid code. However, adding Liquid code tags in Jekyll can be quite tricky and painful. Luckily, some smart people have identified a couple of nice solutions. Below is the markdown code that I’ve adopted for use in future posts:
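The snippet itself didn’t survive in this archive; the usual approach (and likely what was shown) wraps the Liquid code in raw/endraw tags so that Jekyll renders it literally, e.g.:

```liquid
{% raw %}
{% for post in site.posts %}
  {{ post.title }}
{% endfor %}
{% endraw %}
```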
-
Elements of Statistical Learning
Feb 20, 2015
My new book purchase, Elements of Statistical Learning by Trevor Hastie, Robert Tibshirani, and Jerome Friedman, arrived in the mail last week and I’m excited to get reading! Springer was also kind enough to make this classic book freely available to download. Get your copy here! Python implementations of each chapter will be added in the PyESL section.
-
Github and Jekyll-Bootstrap, FTW!
Feb 17, 2015
My blog is finally up and running! Currently, it’s being hosted (for free) and backed up on Github pages using vanilla Jekyll-Bootstrap. Font awesome, which definitely lives up to its name, was used to produce the social icons along the navigation bar and Dropbox is being used for redundancy. More design customizations will follow but I’m loving how easy the process has been!