Today I Moved to Silverblue

Today I took the plunge and replaced my aging Fedora 26 install with Fedora Silverblue. What is Silverblue, you ask?

Silverblue is the new face of Fedora Atomic Workstation from Project Atomic. With good support for container-focused workflows, this variant of Fedora Workstation targets developer communities. If you want to emphasize that it is part of the Fedora project, calling it Fedora Silverblue is fine, too.

Source: https://docs.teamsilverblue.org/


As a long-time member of Project Atomic I had always wanted to use Atomic Workstation on my work laptop. I was keenly aware of the advantages of using an ostree-based system on servers, and it only made sense to use the same technology locally. However, time generally kept me from making the change; switching to Atomic Workstation was always the third or fourth thing on my TODO list. Today there was time.

Step 1: Backing Up

Technically speaking, one should be able to install an operating system without needing to back everything up. However, if backups don’t happen and something goes wrong, you’re going to have a bad time. Today I spent multiple hours combing through my home directory looking for things that had to be backed up just in case. After finding everything required and placing it on a local encrypted removable disk, I moved on to step 2.
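
For anyone doing the same, something like rsync makes that final copy quick once you know what to keep. A sketch with placeholder paths (adjust both ends for your own machine):

$ rsync -avh --progress ~/Documents ~/Projects ~/.ssh /run/media/$USER/backup/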

Step 2: Creating the Media

I grabbed the ISO from the Silverblue site and created the media the old-school way. The instructions on the download page are great and I’m sure they work fine, but I’m so used to using dd for media copies that I just went with it:

$ sudo dd if=Fedora-AtomicWorkstation-ostree-x86_64-28-1.1.iso of=/dev/sdb

Warning: If you do end up using dd, make double sure your output file (of=) points at the disk you are expecting!
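
A quick sanity check before writing: list the block devices and confirm which one is actually the USB stick (/dev/sdb just happened to be it on my machine; yours may differ):

$ lsblk -o NAME,SIZE,MODEL,TRAN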

Step 3: Install

And here is where the excitement really began for me. The official installation guide is quite good. I ended up following the manual partitioning instructions, as that is similar to how I normally install vanilla Fedora. I set up my disks, keeping my old /home partition and mounting it at /var/home per the instructions. The only additional thing I did that wasn’t in the instructions was to go into Network & Host Name, where I connected to WiFi and changed my host name, before clicking Begin Installation.

Boom


This gave me some anxiety because at this point I had blown away my root partition, so I didn’t really have a workable system to fall back on if the installation kept failing. So I did what any sane person would do and rebooted to try again.

Second time’s the charm

This time I followed the same pattern, except I didn’t connect to WiFi. I did still set my host name though. Installation went without a hitch, and within about 7 minutes it was time to reboot into my new system.

Step 4: Initial Set Up

The machine rebooted, I unlocked my encrypted partition, and everything looked good. A greeter popped up to set up a user, and so on; everything you’d expect from a modern GNU/Linux system occurred. I figured this was the best time to update the system to the newest image, so I dropped to a terminal and executed:

# rpm-ostree upgrade

For whatever reason, pulling the update ended up being slow. Really slow. After 25 minutes the update exited with a failure. I tried adding --check just to make sure there literally was an update, and it failed as well. For the heck of it I rebooted and tried again, and this time the upgrade succeeded. Again, the network download was really slow; this time it took around 35-40 minutes to pull the update, but once it was pulled down the deployment happened perfectly.
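
If you want to see what actually got deployed, rpm-ostree will happily show the booted and pending deployments:

$ rpm-ostree status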

Most things on Silverblue should be installed via Flatpak or executed in a container via podman or docker. Still, there were a few tools I decided to overlay onto the deployment with a single rpm-ostree command (shown after the list). These packages were:

  • tmux – terminal multiplexer
  • weechat – console IRC client
  • rpmdevtools – mainly for rpmdev-extract
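
Overlaying them is one command, followed by a reboot into the new deployment:

$ rpm-ostree install tmux weechat rpmdevtools
$ systemctl reboot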

In fairness, these could all easily run in a container, but to get up and going quickly I decided to overlay them for the time being. The most important one (for me) was rpmdevtools, as there were a few packages which refused to overlay, and my best move was to extract their contents (which were config files) and move them to the right locations in /etc. Not ideal, but not Silverblue’s fault.
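
For those stubborn packages, rpmdev-extract unpacks an RPM into a directory without installing it; the package name here is just a stand-in for whatever refuses to overlay:

$ rpmdev-extract stubborn-package.rpm
$ sudo cp -r stubborn-package/etc/. /etc/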

Step 5: Install Editor

For this I decided to go with Flatpak. First I downloaded the flathub.flatpakrepo file from the Flathub site, then added it as a remote:

$ sudo flatpak remote-add flathub /etc/flatpak/remotes.d/flathub.flatpakrepo

Then install my editor:

$ sudo flatpak install flathub com.visualstudio.code
Required runtime for com.visualstudio.code/x86_64/stable (runtime/org.freedesktop.Sdk/x86_64/1.6) found in remote flathub
Do you want to install it? [y/n]: y
Installing in system:
org.freedesktop.Sdk/x86_64/1.6 flathub fd7d657c9a36
org.freedesktop.Platform.VAAPI.Intel/x86_64/1.6 flathub 82006efc71d3
org.freedesktop.Platform.ffmpeg/x86_64/1.6 flathub d757f762489e
org.freedesktop.Sdk.Locale/x86_64/1.6 flathub 346dd3511a8c
com.visualstudio.code/x86_64/stable flathub 6cba55350228
 permissions: ipc, network, pulseaudio, x11, dri
 file access: host
 dbus access: org.freedesktop.Notifications, org.freedesktop.secrets
Is this ok [y/n]: y
Installing: org.freedesktop.Sdk/x86_64/1.6 from flathub
[####################] 17 delta parts, 148 loose fetched; 324907 KiB transferred in 35 seconds
Now at fd7d657c9a36.
Installing: org.freedesktop.Platform.VAAPI.Intel/x86_64/1.6 from flathub
[####################] 1 delta parts, 2 loose fetched; 2623 KiB transferred in 1 seconds
Now at 82006efc71d3.
Installing: org.freedesktop.Platform.ffmpeg/x86_64/1.6 from flathub
[####################] 1 delta parts, 2 loose fetched; 2652 KiB transferred in 1 seconds
Now at d757f762489e.
Installing: org.freedesktop.Sdk.Locale/x86_64/1.6 from flathub
[####################] 4 metadata, 1 content objects fetched; 14 KiB transferred in 0 seconds
Now at 346dd3511a8c.
Installing: com.visualstudio.code/x86_64/stable from flathub
[####################] Downloading: 94.7 MB/94.4 MB (3.6 MB/s) 
Now at 6cba55350228.

Lastly, verify it installed as expected:

$ flatpak run com.visualstudio.code

For the heck of it I also checked to see if it would add the program to the GNOME menu, and it was there!

My next steps will be getting some dev containers going for the languages I use most frequently.
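
As a sketch of the idea, assuming nothing beyond stock podman and the Fedora image, a throwaway dev container with the current project directory mounted in looks something like:

$ podman run -it --rm -v "$PWD":/src:Z -w /src fedora:28 /bin/bash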

Step 6: Be Happy

The end result is that I’m sitting here using Silverblue, writing a blog post about setting up Silverblue. Was it 100% smooth with no issues? No, but it was smooth enough. The change should force me to start doing things I should already be doing in containers … in containers.

If you are interested in running Silverblue as well, I recommend giving it a shot. Take a gander at the main website and, if you’re ready to truly move your workflow to containers and flatpaks, join in!

flask-track-usage 2.0.0

flask-track-usage 2.0.0 has been released! Thanks to all who helped provide patches and testing. Note: 2.0.0 is the recommended upgrade from 1.1.0; 1.1.1 was released for those who are unable to make the changes needed to move to 2.x. You can check out the latest docs over at readthedocs.

The changes include:

  • MANIFEST.in: Add alembic to dist
  • CONTRIBUTORS: Add John Dupuy
  • py3: Fix import issue with summarization
  • .travis: Change mysql driver
  • test: Fix summerize tests for py3
  • travis: Add 3.6
  • docs: Quick fixes
  • README.md: Update docs to rtd
  • Use parens for multilines
  • Update versions to 2.0.0
  • sql: Increase ip_info from 128 to 1024
  • alembic: Upgrade ip_info from 128 to 1024
  • alembic: Support for upgrading SQL schema
  • sql: Create table if it is not present
  • couchdb: Add track_var and username
  • redis: Add track_var and username
  • Adding user_defined variable field to storage
  • Hooks: add new hooks system
  • test: Skip mongoengine if connection can not be made
  • storage: Rename to PrinterWriter
  • output: Add OutputWriter
  • storage: Create base class and Writer
  • requirements: Added six
  • Copyright now a range
  • Add CONTRIBUTORS
  • doc: Add note about py2 and 3
  • py3: Fix most obvious offenders
  • Move mongoengine ref in Travis CI config
  • Update Travis CI config to include mongoengine lib
  • pep8 fixes
  • MongoEngineStorage: updated docs; added get_usage
  • added testing
  • moved MongoEngineStorage to mongo.py
  • doc: Minor updates for a future release
  • Initial support for multiple storage backends
  • Update versions to denote moving towards 2.0.0
  • Added MongoEngineStorage code; adding test next.
  • docs: Update version to 1.1.1
  • release: v1.1.1
  • Updates for freegeoip function
  • test: Update sqlalchemy test for updated flask
  • test: Update mongo test for updated flask
  • test: test_data works with current Flask results
  • travis: Force pymongo version for travis
  • storage: Minor doc and structure updates for new backends.
  • Redis support
  • Added CouchDB integration. (#30)

etcdobj: A Minimal etcd Object Mapper for Python

I didn’t have a lot on my agenda Friday. I wanted to review and return emails, do some reading, get some minor hacking on etcdobj done (more on that…), eat more calories than normal in an attempt to screw with my metabolism (nailed it!), catch up with a few coworkers, play some video games, and, apparently, accidentally order an air purifier from Amazon. I succeeded in all of it. But on to this etcdobj thing…

While working on Commissaire I started to feel a bit dirty about storing JSON documents in keys. It’s not uncommon, but it felt like it would be so much better if a document were broken into three layers:

  • Python: Classes/Objects
  • Transport: For saving/retrieving objects
  • etcd: A single or series of keys

By splitting what is normally one JSON document into a series of keys, two clients can change different parts of an object without a collision, and without requiring a client to fail, fetch, update, then try saving again. I searched the Internet for a library that would provide this and came up wanting. It seems that either simple keys/values or shoving JSON into a key is what most people stick with.

etcdobj is truly minimal. Partly because it’s new, partly because being small should make it easier to build upon or even bundle (it’s got a very permissive license), and partly because I’ve never written an ORM-like library before and don’t want to build too much on what could be a shaky foundation. That’s why I’m hoping this post will encourage some more eyes and help with the code.

Current Example

To create a representation of data a class must subclass EtcdObj and follow a few rules.

  1. __name__ must be provided, as it will be the parent in the key path.
  2. Fields are class-level variables and must be set to an instance that subclasses etcdobj.Field.
  3. The name of a field is the next layer in the key path and does not need to be the same as the class-level variable.

from etcdobj import EtcdObj, fields

class Example(EtcdObj):
    __name__ = 'example' # The parent key
    # Fields all take a name that will be used as their key
    anint = fields.IntField('anint')
    astr = fields.StrField('astr')
    adict = fields.DictField('adict')

Creating a new object and saving it to etcd is pretty easy.

from etcdobj import Server

server = Server()

ex = Example(anint=1, astr="hello", adict={"hi": "there"})
ex.anint = 100  # update the value of anint
server.save(ex)
# Would save like so:
# /example/anint = "100"
# /example/astr = "hello"
# /example/adict/hi = "there"

As is retrieving the data.

new_ex = server.read(Example())
# new_ex.anint = 100
# new_ex.astr = "hello"
# new_ex.adict = {"hi": "there"}

Ideas

Some ideas for the future include:

  • Object watching (if data changes on the server it changes in the local instance)
  • Object to json structure
  • Deep DictField value casting/validation
  • Library level logging

Lend a Hand

The code base is currently around 416 lines of code, including documentation and the license header. If etcdobj sounds like something you’d use, come take a look and help make it something better than I can produce all by my lonesome.

From Gevent to CherryPy

I’ve been working on a project on GitHub for the last few months called Commissaire, along with some other smart folks. Without getting too deep into what the software is supposed to do, just know it’s a REST service which needs to handle some asynchronous tasks. When prototyping the web service I started out utilizing gevent for its WSGI server and coroutines but, as it turns out, it didn’t end up being the best fit. This is not a post about gevent sucking, because it doesn’t suck. gevent is pretty awesome, but it’s not for every use case.

The Problem

One of the asynchronous tasks we do in Commissaire utilizes Ansible. We use the Ansible Python API to handle part of bootstrapping a new host. Under the covers Ansible uses the multiprocessing module when executing its work; specifically, this occurs when the TaskQueueManager starts its run. Under normal circumstances this is no problem, but when gevent is in use its monkey patching ends up causing some problems. As has been noted elsewhere, using monkey.patch_all(thread=False, socket=False) can be a solution. What this ends up doing is patching everything except thread and socket. But even this wasn’t enough for us to get past the problems we were facing between multiprocessing, gevent, and Ansible. The closest fix we found was to also disable patching of os, subprocess, and a few other things, making most of gevent’s great features unavailable. At that point it seemed pretty obvious gevent was not going to be a good fit.
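
For reference, the selective patching mentioned above is only a couple of lines, though it has to run before anything else imports the modules gevent would otherwise patch:

# Must run before any other imports pull in the modules being patched.
from gevent import monkey

# Patch everything except the thread and socket modules.
monkey.patch_all(thread=False, socket=False)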

Looking Elsewhere

There is no lack of options when looking for a Python web application server. Here are the requirements I figured we would need:

Requirements

  • Importable as a library
  • Supports WSGI
  • Supports TLS
  • Active user base
  • Active development
  • Does not require a reverse proxy
  • Does not require greenlets
  • Supports Python 2 and 3

Based on the name of this post you already know we chose CherryPy. It hit all the requirements and came with a few added benefits. The plugin system, which allows calls to be published over an internal bus, lets us decouple our data-saving internals (though it couples us to CherryPy, as it is doing the abstraction). The server is also already available in many Linux distributions at new enough versions; that’s a big boon when hoping to have software easily installed via traditional means.
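
Since the service is plain WSGI, grafting it onto CherryPy’s server only takes a few lines. A minimal sketch, with a toy WSGI app standing in for the real REST endpoints:

import cherrypy


def application(environ, start_response):
    # A toy WSGI app standing in for the real service.
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'hello\n']


# Mount the WSGI app at the root and run CherryPy's built-in server.
cherrypy.tree.graft(application, '/')
cherrypy.config.update({
    'server.socket_host': '127.0.0.1',
    'server.socket_port': 8000,
})
cherrypy.engine.start()
cherrypy.engine.block()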

The runner-up was Waitress. Unlike CherryPy, which assumes you are developing within the CherryPy web framework, Waitress assumes WSGI. Unfortunately, Waitress requires a reverse proxy for TLS. If it had first-class support for TLS we probably would have picked it.

Going back to a more traditional threading server is definitely not as sexy as utilizing greenlets/coroutines, but it has provided consistent results when paired with a multiprocessing worker process, and that is what matters.

Porting Time

Porting to a different library can be an annoying task and can feel like busy work. It can be even worse when you liked the library in use in the first place, as I did (and still do!) with gevent.

Initial porting of the main functionality from gevent to CherryPy took roughly four hours. After that, it took about another six hours to iron out some rough edges, followed by updating the unit tests. Really, the unit test updates ended up being more work, in terms of time, than the actual functionality. A lot of that was our fault in how we use mock, but I digress. That’s really not much time!

So What

So far I’m happy with the results. The application functionality works as expected, the request/response speeds are more than acceptable, and CherryPy as a server has been fun to work with. Assuming no crazy corner cases crop up, I don’t see us moving off CherryPy anytime soon.

Flask-Track-Usage 1.1.0 Released

A few years ago the initial Flask-Track-Usage release was announced via my blog. At the time I thought I’d probably be the only user. I’m glad to say I was wrong! Today I’m happy to announce the release of Flask-Track-Usage 1.1.0, which sports a number of enhancements and bug fixes.

Unfortunately, some changes are not backwards compatible. However, I believe the backwards-incompatible changes make the overall experience better. If you would like to stick with the previous version of Flask-Track-Usage, make sure to pin the version in your requirements file/section:

flask_track_usage==1.0.1

Version 1.1.0 brings changes requested by the community as well as a few bug fixes. These include:

  • Addition of the X-Forwarded-For header as xforwardedfor in storage. Requested by jamylak.
  • Configurable GeoIP endpoint support. Requested by jamylak.
  • Migration from pymongo.Connection to pymongo.MongoClient.
  • Better SQLStorage metadata handling. Requested by gouthambs.
  • SQLStorage implementation redesign. Requested and implemented by gouthambs.
  • Updated documentation for 1.1.0.
  • Better unit testing.

I’d like to thank Gouthaman Balaraman who has been a huge help authoring the SQLStorage based on the SQLAlchemy ORM and providing feedback and support on Flask-Track-Usage design.

As always, please report bugs and feature requests on the GitHub Issues Page.

Why I Chose NewsBlur

Not all that long ago Google Reader closed its doors, pushing millions of users off the platform. Many users were frustrated to lose their long-time place to get their news, not all that different from someone in yesteryear losing their favorite newspaper. The whole thing was far from ideal, but it did teach users that you can’t expect cloud services to last forever (which is a good wake-up call). But in the fall of Google Reader came many possible replacements, each adding its own spin on how one reads news. Feedly, The Old Reader, and NetVibes were a few of the popular replacements, but I settled on NewsBlur and eventually became a paid user.

NewsBlur is mainly written by Samuel Clay (more on why I say mainly later). He seems like a friendly, hard-working fellow. He responds to bug reports and is active in his product’s community. While this may seem like common sense, just take a few minutes to look at random SaaS products on the Internet. You’ll find many of the developers are hidden behind customer service groups who, at worst, are outsourced and are more of a dead end than a way to get things fixed. Long story short, it seems like Samuel really cares about his product.

It is possible to have a free account on NewsBlur. While you are limited to a specific number of feeds, many people will find the limit is higher than the feed counts they had in Google Reader. At the time of writing the limit is 64 sites.

There are some social features provided by NewsBlur, yet these features are neither required nor forced into the general workflow. For instance, there is a concept of the BlurBlog, which looks like it could be fun. But I tend to read the news and share elsewhere. If I ever decide to use the BlurBlog functionality it’s there; otherwise I can just use NewsBlur as a fantastic reader.

NewsBlur is Open Source under the MIT license (also known as the Expat license). This gives me peace of mind: if Samuel ever decided he was done with NewsBlur, I could export my feeds, set up my own instance, and continue using the product on my own infrastructure. Yeah, it’s not trivial, but it’s possible, which is a huge advantage given that the last reader I used shut down.

No software is without its bugs, but Samuel does a good job of squashing them. And if you are a developer who wants to give a hand, you can patch the issue yourself and submit the fix (another win for Open Source). At the time of writing there are 43 development contributors to NewsBlur. This is a much better situation than waiting for a customer service representative to reinterpret your bug submission for a developer so that the fix may be done someday in the future.

If you are still looking for a replacement for Google Reader, give NewsBlur a chance, even if it’s a second chance, as the application seems to be enhanced weekly. If you like it, consider becoming a paid user. Besides, you can’t say no to Shiloh.

Introducing Flask-Track-Usage

A little while ago one of the guys on a project I work on was asking how many people were using the project’s public web service. My first thought was to go grepping through logs. After all, the requests are right there and pretty consumable with a bit of Unix command-line magic. But after a little discussion it became clear that would get old after a while. What about a week from now? How about a month or a year? Few people want to run commands and then manually correlate the results. This led us to look around for some common solutions. The most obvious one was Google Analytics. To be honest, I don’t much care for those systems. While that one may or may not be intrusive to users, I just don’t feel all that comfortable forcing people to be subjected to a third party of a third party unless there is no other good choice. Luckily, since the metrics are service related, the javascript/cookie/pixel-based approach wouldn’t have worked very well anyway.

So it was off to look at what others have made, with a heavy eye towards Flask-based solutions so it would match the framework we were already using. Flask-Analytics came up in a search. I liked its simple design, but the extension is aimed more at using cookies to track users through an application, while we wanted to track overall usage. I figured it was time to roll something ourselves and provide it back to the community in case others could use it as well.

Here it is in all its simplistic glory: Flask-Track-Usage. It doesn’t use cookies or javascript, and it can store the results in any system for which you provide a callable or Storage object. There is also FreeGeoIP integration for those who want to track where users are coming from. The code comes with a MongoDB Storage object for those who want to store the content back into their MongoDB. Want to know a bit more of the technical details? Check out the README or the project page. Patches welcome!
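
As a taste of the API, here is a minimal sketch wiring the extension to the bundled MongoDB storage. It uses today’s flask_track_usage package layout, and the database names are placeholders:

from flask import Flask
from pymongo import MongoClient
from flask_track_usage import TrackUsage
from flask_track_usage.storage.mongo import MongoPiggybackStorage

app = Flask(__name__)
app.config['TRACK_USAGE_USE_FREEGEOIP'] = False
app.config['TRACK_USAGE_INCLUDE_OR_EXCLUDE_VIEWS'] = 'include'

# Hand the storage an existing pymongo collection to write metrics into.
storage = MongoPiggybackStorage(collection=MongoClient()['mydb']['usage'])
t = TrackUsage(app, storage)

@t.include
@app.route('/')
def index():
    return 'Hello!'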

Raspberry Pi and Arduino: Good Friends

I have a Raspberry Pi and it’s pretty great. I have an Arduino Esplora and an Arduino Micro and they are fun. Though, as I’ve played more with the Arduinos, I’ve found one totally understandable drawback: they are more or less local only. I mean that the data coming back from sensors, or from the items being controlled, is only sent back over serial or USB/serial. It makes sense, but it also limits what can be done with an Arduino used all by itself. One of my early ideas for a project was to use a light sensor and a temperature sensor to keep an eye on aging homebrew beer. Nothing super fancy, just recording the information for viewing and alerting when the sensors see data outside the accepted norms. I could do this with some LEDs, a buzzer, or a display that would notify me when things were off, but that isn’t really the type of alerting I’d like to see. That type of alerting would require me to go look at the box for information; I might as well do the sensor gathering manually with my eyes and by feeling inside the box. It also means the data would be lost on every iteration: data from 10 minutes ago could only be gathered if I was present 10 minutes back. Enter the Raspberry Pi.

The Raspberry Pi is able to power the Arduino Esplora and, likely, the Arduino Micro. Since communication is over USB/serial, the Raspberry Pi can collect the data from the sensors and provide a networked view into it; for instance, a web interface showing temperature and light graphs. And, best yet, it’s simple to add a USB wireless adapter to the Pi to avoid running an ethernet cable back to the network.
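
On the Pi side, the collector can be a tiny pyserial loop. A sketch, where the device path and the line format the Arduino prints are both assumptions:

import serial  # pyserial

# The Esplora/Micro typically shows up as /dev/ttyACM0 on the Pi.
with serial.Serial('/dev/ttyACM0', 9600, timeout=2) as port:
    while True:
        line = port.readline().decode('ascii').strip()
        if not line:
            continue
        # Assumes the Arduino sketch prints "temperature,light" pairs.
        temperature, light = line.split(',')
        print(temperature, light)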

Now, from what I read, it’s possible to use the Raspberry Pi by itself, without an Arduino, to collect data and control devices, but it requires an ADC for analog input/output. Still, there is something that seems more proper about separating the physical logic (C on the Arduino) from the notification and reporting logic (Python on the Raspberry Pi). It feels almost MVC-like.

In any case, if you are looking at doing some analog and digital work with a Raspberry Pi, know that adding a small Arduino makes life easy, and if you later decide to change to another device for providing network views, it should be a simple switch-over.