Code, Linux, python

Instead of eliminating the GIL, Python should work around it

Note: In the following text Python refers to CPython

Python is a great language. With everything it has going for it, it has one big hairy wart – the Global Interpreter Lock. The GIL is a mutex that prevents multiple threads from running Python code at the same time. Unless your program uses a C extension that releases the GIL for an extended period of time, your threads will do nothing but wait for the GIL to become unlocked until it is their turn to run. I wrote a quick test that shows that even though the same work is divided into 25 threads, it runs at the same speed as 1 thread.
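A quick reconstruction of that kind of test (a sketch, not the exact code I ran): the same pure-Python busy work done once on one thread, then split across 25 threads. The work size is arbitrary.

```python
import threading
import time

def count(n):
    # Pure-Python busy work that holds the GIL the whole time
    while n > 0:
        n -= 1

TOTAL = 2_000_000

# Single-threaded: all the work in one call
start = time.perf_counter()
count(TOTAL)
single = time.perf_counter() - start

# Multi-threaded: the same total work divided across 25 threads
threads = [threading.Thread(target=count, args=(TOTAL // 25,))
           for _ in range(25)]
start = time.perf_counter()
for t in threads:
    t.start()
for t in threads:
    t.join()
multi = time.perf_counter() - start

print(f"single: {single:.3f}s  threaded: {multi:.3f}s")
```

Because only one thread can hold the GIL at a time, the threaded version finishes no faster than the single-threaded one.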

This test was run with Python 3.6 in 2018. Each instance of the loop ran between .04 and .06 seconds, with the single-threaded and multithreaded code taking turns at being the fastest depending on the iteration.

Even though the threaded code runs in the same amount of time, it uses vastly more CPU time. When the OS tries to run the threads that don’t have the GIL they spin in a while-sleep loop waiting for their turn to hold the GIL.

Threads in Python are real system threads. They have all the overhead of threads and very few of the benefits. Before Asyncio, I/O was the only thing threads in Python were really good for. You could have a reader reading from one socket and a writer writing to another socket. The threads never ran at exactly the same time, but your program wasn’t blocked waiting for new data to come in or for the socket to time out. A core Python developer once wrote:

The GIL’s effect on the threads in your program is simple enough that you can write the principle on the back of your hand: “One thread runs Python, while N others sleep or await I/O.”

You could have 50 threads, and at any given time the operating system can schedule any of them, but only the one holding the GIL makes progress. 98% of the time, any given thread just sits in a while loop waiting for the GIL to become unlocked, wasting clock cycles and accomplishing nothing. The threads get scheduled separately, but they will never run at the same time unless you are using a C extension that releases the GIL.

With the advent of Asyncio, threads in Python are now the wrong tool for the job in many situations. Asyncio is able to run multiple tasks on the same thread. Because of this, the CPU caches stay fresh, context switching happens much less, there is no need for locking to maintain state (although it still happens), and tasks are not stuck in a sleep-while loop waiting for the GIL to become unlocked.

You get all the benefits of threading in Python without the performance hit of constant context switching.  During context switching CPUs invalidate their caches.  The L1 cache is about 100x faster than system memory and the L2 cache is about 25x faster.
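A minimal illustration of tasks sharing one thread, using only the standard library (the task names and delays are made up): three simulated I/O waits overlap on the event loop, so the whole batch takes about as long as one of them.

```python
import asyncio
import time

async def fetch(name, delay):
    # Simulate an I/O wait; await yields control back to the event loop
    await asyncio.sleep(delay)
    return f"{name} done"

async def main():
    start = time.perf_counter()
    # Three "I/O" tasks run concurrently on a single thread
    results = await asyncio.gather(
        fetch("a", 0.1), fetch("b", 0.1), fetch("c", 0.1))
    elapsed = time.perf_counter() - start
    return results, elapsed

results, elapsed = asyncio.run(main())
print(results, f"{elapsed:.2f}s")  # finishes in ~0.1s, not 0.3s
```

No locks, no context switches between system threads – the tasks just take turns whenever one of them is waiting.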

Because of this, it seems that Asyncio should make CPU-bound tasks faster as well, or at least as fast as their multithreaded counterparts. In reality, the same code I posted above runs 3x-5x slower when you throw it on an event loop with Asyncio. Keep in mind, though, that this code abuses async and await in ways that make no sense in the real world.

Asyncio will make I/O-bound tasks fast, but it won’t make your CPU-bound code any faster. Today, the multiprocessing module is what many programmers reach for to solve this problem. It uses processes to accomplish what many other languages do with threads. It forks the program, allowing multiple instances of the interpreter to run at once. The downside is that all data has to be pickled and you have to deal with the overhead of multiple processes. You can light up every CPU core on your system, but passing objects is significantly harder than it is with threads. Usually you end up passing dicts when you really wanted to pass a class.

There have been attempts to remove the GIL. The Gilectomy was probably the most well known and recent of these efforts. Its last commit was in 2016, and the project appears to be abandoned now. The creator of the project gave two talks on what was involved. The GIL had to be replaced with hundreds of locks all over the CPython code. The garbage collector needed major changes. They struggled to get performance to where it is with Python’s current threading implementation, and many if not most C extensions would break.

Python is incredibly dynamic. The Gilectomy was trying to make it possible for any thread to do any of the things Python is known for: monkey patching a module or changing global state from 5 threads at once. These are features most people would probably never use. In the real world, threading is usually done by taking input, doing work, and producing output. Global state is usually read-only, and the programs that treat it otherwise probably shouldn’t.

Most of the time you take input, whether from a key press, a web request, or command-line arguments, and it bubbles up the stack to run the program. I’ve never used the global keyword in a production program. Programs that write to global state are very hard to multithread even with the GIL. You can never predict what order the OS will allow your threads to run in. You can only lock other threads out of using resources.

I would love it if we could get a new threading primitive that has no GIL and no ability to change global state unless the GIL is explicitly requested. It could modify global state by using a decorator that seized the GIL, or maybe by using a keyword. It could do all of its number crunching on its own stack, and the 3 lines that need to modify the UI would be the only ones that were decorated. Nothing would need to be pickled. You could pass real objects, but any modifications to them would throw an exception unless they were done in the GIL decorator. This would be a lot like what you get from multiprocessing today, with the ability to modify global state when you need to, without the overhead of processes, and without the need to pickle everything.
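A rough sketch of how that might feel in practice. Everything here is hypothetical: no requires_gil decorator exists, so an ordinary lock stands in for the GIL, and the worker function is made up.

```python
import threading

_fake_gil = threading.Lock()  # stand-in for the real GIL

def requires_gil(func):
    # Hypothetical decorator: only code wrapped in it may
    # touch shared global state
    def wrapper(*args, **kwargs):
        with _fake_gil:
            return func(*args, **kwargs)
    return wrapper

results = []  # shared global state

@requires_gil
def publish(value):
    # The only place this worker mutates shared state
    results.append(value)

def worker(n):
    # Imagined GIL-free number crunching on the thread's own stack
    total = sum(range(n))
    publish(total)

threads = [threading.Thread(target=worker, args=(1000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(sorted(results))  # [499500, 499500, 499500, 499500]
```

The number crunching would run truly in parallel, and only the one decorated line would ever contend for the lock.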

Something like this appears to be coming with subinterpreters. This will allow developers to create a blank slate that won’t be bound by the GIL. Hopefully this evolves into something like what I’ve described above.

General Tech

God, I hate Lenovo

Almost a year ago I decided it was time for a new laptop. I found a great two-in-one on Lenovo’s site. Great reviews, nice 4k screen, a graphics card good enough for playing games, great processor – it had everything.

It came with a respectable 2Gb of graphics memory. People in other parts of the world said they were able to get this model with 4Gb of graphics memory. I really wanted to max it out. I got on the sales chat and was connected with Danny. He informed me that the 4Gb models were coming and they were right around the corner. I was happy, and prepared to wait as long as it took.


A few weeks later I went back on the sales chat to see if anyone had any idea when the models with 4Gb of graphics memory were coming out. The sales rep had no idea what I was talking about. Obviously, they weren’t very good at their job. A few days later I reconnected, told the person what Danny had said, but they too had no idea what I was talking about.

This went on for weeks. Eventually I did find someone who was able to pull up the data sheet that said this laptop was configurable with 4Gb of graphics memory. None were being sold in the US and there was no plan to sell them here. I was sure Danny must have known something. Why else would he say that?

As luck would have it, the next time I connected I got Danny again. I started as I always did: “some guy told me that the 4Gb models were coming out…” My next message was “hey wait, that was you”. He disconnected. He wouldn’t even talk to me. Now I could finally see that the 4Gb models were never coming. Even now, almost a year later, there are no Yoga 720s for sale in the US with 4Gb of graphics memory.

I ordered the Yoga 720. It came in the mail and I was happy. I beat Doom and Fallout 4 on it. It was great. I forgot about Danny. I was happy with my new computer. Until one day I noticed there was a black bar in the top right corner of the screen.

I found a post with over 50 pages on Lenovo’s website with people having the same issue. I didn’t save any pictures of mine, but here is someone else’s. The problem was identical.

black line

Not even that big of an issue, right? But when you drop $1,500 on a laptop you don’t want problems like these. In the forums I read how people who got their laptop repaired encountered a whole host of other issues. Techs would scrape the hinges taking the laptops apart. They would break and crack things. And when the screens came back, eventually they would all get the black line again.

I called up support. I was told all would be ok. My precious Yoga 720 would be good as new. They’ll send me a box, I’ll send her away, and she’ll come back with a beautiful line free screen.

A few weeks later the laptop came back. Not a scratch. I powered it on and was greeted by white clouds all around the black boot screen. It seems Lenovo had procured some new screens. They had no black bar. They found the lowest quality screens they could find of the same resolution. Screens with insane amounts of light bleed. I tried to live with it. A few weeks went by. In the Lenovo forums everyone was getting these low quality light bleeding screens back from repair. I decided if I just waited maybe Lenovo would get their act together and have a way to deal with this issue.

I waited. I figured maybe some of the screens are ok. Maybe the forum only attracts complainers. I called up support and got another box to ship it back. Before I packaged it up, I set this as the background:


The replacement came a few weeks later. The box contained a letter stating the issue was resolved by replacing the screen. This is a video showing what they sent me.

Here is a peek from that video.


The next Monday I got on a chat and was told that they would replace my screen (again). They told me I could not exchange it for a different device. I could not exchange it for a refurbished device. The only thing I could do is get it repaired.

Here is how that conversation ended:

Kyle Agronick: i just know this will end badly
Kyle Agronick: this is the third time
Sasha: here’s your case number *******
Kyle Agronick: and i always get assured everything is fine and i end up with the same problem everyone else who got one of these has
Sasha: We know that this is the third time.
Sasha: Nothing to worry as our technician are specialize to handle this kind of concer,
Kyle Agronick: ok
Kyle Agronick: thats what im talking about
Kyle Agronick: have they gotten new screens that don’t have this issue?
Sasha: I will have this endorsed to them
Kyle Agronick: ok
Kyle Agronick: what time will this be scheduled?
Sasha: We will be contacting you first for the best time.
Kyle Agronick: ok
Sasha: This has been _____ at your service. We appreciate you taking time contacting us today. Please feel free to browse our website for self-help options and other exciting items, it’s

Yes, she literally didn’t bother to copy her name in.

And that is where the story leaves off. This just happened today. In a few weeks or days someone will come over to my house and try to fix a screen when Lenovo hasn’t given them a means to do so. The screens are defective by design. Lenovo is content to screw everyone who dropped a paycheck or three on one of these laptops and didn’t have the foresight to return it within 30 days. No one at Lenovo cares. They have employees in the forums. They mark the posts about these issues as solved. Solved with these new terrible screens. We need a voice. We need screens without light bleed.

KDE, Linux, OpenSuse, Uncategorized

Compiz works great with KDE 5

Compiz was pretty much “the thing” that got me started with Linux. Whenever it comes up on online forums, there are usually a few people who say the same thing. When KDE 5 and Gnome 3 made their own compositors and window managers, a lot of the Compiz functionality was replaced. Most people seem to be content with this, and Compiz came to be regarded as a thing of the past. When KDE 5 and Gnome 3 first came out, Compiz was completely incompatible. Now it is possible to get Compiz running on both.

I’ve always felt that Compiz offered a lot more and was generally more useful than the KDE effects. Today I found out it really isn’t that hard to get Compiz working in KDE 5 with Plasma.

I’m using Open SUSE but you should be able to find these packages in whatever distribution you are using.

Compiz is the window manager, Emerald is the window decorator, Compizconfig Settings Manager is the configuration tool, and Fusion Icon sets everything up.

You’ll want to disable KDE’s desktop effects. Search “Compositor” in your application menu and disable “Enable compositor on startup”.

Search “Autostart” in your application menu. Add fusion-icon to run on startup. Run fusion-icon from a terminal or the applications menu, and you should be able to change your window manager to Compiz. Right click the Fusion icon and choose “Select Window Manager”. Once that is set, it will replace your window manager on startup. From the Fusion Icon, you can set your Emerald Theme and Compiz settings.

If you log back in and you don’t have a desktop, or your desktop is blank, this is easy to fix. The blank desktop issue seems to happen if Compiz loads too fast. Replacing the autostart fusion-icon command with a script with the contents sleep 5; fusion-icon seems to give KDE enough time to load the desktop before Compiz loads. Compiz still starts while KDE is loading, so you don’t see a hacky switch in window managers 5 seconds into your desktop session.

If you want to use the “Windows Previews” plugin in Compiz, you may see two window previews when you hover over your task manager if KDE’s window previews are turned on. To disable this, right click your task manager, click “Task Manager Settings” and uncheck “Show Previews”.

So why even use Compiz? One of the main features for me is, just by holding down shift while switching desktops, I can bring the window with me while moving to different sides of the cube. I’ve never been able to find a way to do this in KDE. There are also a lot more features, plugins, and themes.

A lot of it seems frozen in time. I remember a lot of the Emerald themes from 10 years ago, but they still work fine.

I think Compiz was good for the Linux community. It got a lot of people talking about Linux and a lot of people using Linux. It is kind of unfortunate that it was shut out by the big two desktop environments. When Compiz was popular, I remember seeing new plugins in the Compiz settings manager every few weeks. It has been years since KDE 5 was released and there are hardly any plugins for its “Desktop Effects”.

I know a lot of Linux users dismiss Compiz as pointless “bling”. Even if this was true, people were sharing Compiz videos and people were trying Linux just for Compiz. I think it would have been better if Gnome and KDE didn’t shut Compiz out.

Whatever the issue was with KDE 5, it seems to be fixed. With Gnome, even though most online posts say it is impossible to run Compiz, it has been reported to work if you start Gnome in “fallback mode”.

Code, Django, Uncategorized

Making django-sitetree’s Display Permissions Show Access Denied

I like django-sitetree for what it is. There aren’t any other modules with as many features. One thing that bothers me is that the permissions options just hide links. Anyone with a URL can still go to pages they aren’t allowed to see. If this is how you are doing security for your site, it can be a huge security risk.

Using custom middleware you can use django-sitetree’s own methods to check if you should show a page.

First, if you have multiple sitetrees, come up with some logic to decide which sitetree to look up based on the path.

For example:
alias = 'control_panel' if 'control_panel/action' in request.path else 'main_menu'

This uses the ‘control_panel’ sitetree when ‘control_panel/action’ is in the path, and the ‘main_menu’ sitetree otherwise.

Next, make a middleware class based off what’s below. Pay attention to the alias line you made earlier and replace it:

from django.core.exceptions import PermissionDenied
from sitetree.sitetreeapp import SiteTree, get_sitetree

class CheckAccessMiddleware(object):

    def process_request(self, request):
        tree = get_sitetree()
        context = SiteTree.get_global_context()
        context['user'] = request.user
        context['request'] = request
        # ------REPLACE THIS------
        alias = 'control_panel' if 'control_panel/action' in request.path else 'main_menu'
        # ------REPLACE THIS------
        tree.init_tree(alias, context)
        page = tree.get_tree_current_item(alias)
        if page:
            access = tree.check_access(page, context)
            if not access:
                # This should happen very rarely. A user will not
                # be shown a URL they don't have access to.
                raise PermissionDenied

Add this class to your MIDDLEWARE_CLASSES in your settings. That should be it. If a path is not in your sitetree, the middleware won’t do anything, so make sure everything sensitive is in the sitetree. Don’t have items in your sitetree without a trailing slash; Django will just redirect to the URL with the trailing slash, and this middleware will run on the URL that does not exist in your sitetree.
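Enabling it in settings looks something like this. A sketch only: ‘myproject.middleware’ is a hypothetical path, so point it at wherever you saved the class.

```python
# settings.py -- 'myproject.middleware' is an assumed module path;
# replace it with the location of your CheckAccessMiddleware class.
MIDDLEWARE_CLASSES = (
    'django.contrib.sessions.middleware.SessionMiddleware',
    'django.contrib.auth.middleware.AuthenticationMiddleware',
    'myproject.middleware.CheckAccessMiddleware',
)
```

Order matters here: the class needs request.user, so it has to come after the authentication middleware.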

One other thing of note: when looking through django-sitetree’s code, I noticed they put the request in a global variable and access it through a singleton. That seems like a definite no-no to me, as requests could bleed from one user to another. I’m not well versed enough in how Django splits up requests among processes to know. It just doesn’t feel particularly right.


Simple Django Module to Log Request Information to the Database

There are some solutions out there for logging analytics information to the database. I wanted something really simple and minimalistic. This set of scripts monkey patches each request to write some basic information out to the database after the request data is sent to the user. Because of this it should not impact the speed of your site. In a situation where you can’t or don’t want to use something like Google analytics this gets the job done. You can change it to capture any information that is useful to you.

You can find it on Github.

Elementary OS, Linux, Uncategorized

Updates to Relay

I pushed out some updates to Relay. You can find the changes on Github or Launchpad. Relay is an elegant and sleek IRC client designed for Elementary OS but will work on any Linux OS.

Relay will try to switch to a theme that looks good. You can now disable this by passing the -t option.

I also added better Unicode support and fixed an issue that was causing it to close prematurely.

Here is what Relay looks like. It’s one of the nicest looking IRC clients out there.

Screenshot from 2015-07-04 13:52:24

Elementary OS, Linux, Ubuntu

Create BTRFS Snapshots With Each apt-get Transaction

So I took it upon myself to fork apt-btrfs-snapshot. It is a program that takes BTRFS snapshots after each apt transaction. I wanted it to use Snapper because Snapper has a GUI. Snapper also abstracts all of the functionality of working with BTRFS snapshots.

Here are some of the things it provides:

  • Management via a GUI
  • Rollbacks without mounting anything
  • A list of what files were changed and their filesizes
  • Tracking of what packages were installed
  • Pre and post snapshots
  • Automatic clean up

You can check it out on GitHub:

Ubuntu 14.04: 32bit .deb | 64bit .deb
Ubuntu 14.10: 32bit .deb | 64bit .deb
Ubuntu 15.04: 32bit .deb | 64bit .deb

You can use a tool called gdebi to grab all the dependencies you need, which are really only Python and Snapper. If you want this done for you, run
gdebi apt-btrfs-snapper_0.4.1_all.deb

Management Via a GUI

You can check out this post on installing Snapper GUI on Ubuntu. As you can see below you get a list of all your snapshots and in the bottom you can see what packages were installed. If you hold down control you can select two snapshots and open up the changes view to see what files were changed.

Snapper-GUI on Ubuntu


To rollback to a previous version you just type:
sudo apt-btrfs-snapper --restore <ID>
Replace <ID> with the snapshot ID or the snapshot name. This will delete, create, and modify your files to get your machine back to the state that it was in when that snapshot was created. You can then roll forward in time just by using a newer ID. You don’t need to restart anything.


You can get a list of snapshots with:
sudo apt-btrfs-snapper list
You can then see what files were changed between two snapshots with:
sudo apt-btrfs-snapper diff <ID1> <ID2>

Here is a sample of that output:

c   391 B      /usr/share/doc/maya-calendar-plugin-caldav/changelog.gz
c   391 B      /usr/share/doc/maya-calendar-plugin-google/changelog.gz
c   391 B      /usr/share/doc/maya-calendar/changelog.gz
c   542 B      /usr/share/doc/pantheon-files/changelog.gz
c   246 B      /usr/share/doc/pantheon-photos-common/changelog.Debian.gz
c   246 B      /usr/share/doc/pantheon-photos/changelog.Debian.gz
c   854 B      /usr/share/doc/plank/changelog.Debian.gz 

You can use snapper itself to restore an individual file to a specific state.

Tracking of what packages were installed

apt-btrfs-snapper saves the names of all the packages that were installed in the user data of each snapshot. The best place to view this is in snapper-gui. It can be viewed in the snapper command line tools but it is hard to read. You can see this in the bottom pane in the screenshot above.

Pre and post snapshots

apt-btrfs-snapper takes a snapshot before and after each transaction. They are grouped together in snapper-gui. You can easily see what changes took place between the two snapshots.

Automatic clean up

One of the best parts about snapper is the set of cleanup algorithms built into it. apt-btrfs-snapper simply uses the configuration settings for the number cleanup algorithm, which is part of snapper.

So check it out. It’s stable, works great, and makes taking and manipulating BTRFS snapshots a lot easier.