
Flask, EVE, and no persistence

Jun 27, 2014

Recently, another EVE-related web-app idea popped into my mind, and due to the generally low-impact nature of the application I didn't require a backend data store. For a long time I've used Django for nearly anything and everything due to the batteries-included nature of the framework, but with this application I could throw it all away and start working with Flask; something I've been meaning to get my teeth into properly since I started my large Django-based projects.

My tool is already a solved problem, but as is the way of development and EVE, I've set about re-inventing the wheel for the sake of "security" and "counter-intelligence". Well, that's how I spin it, but really I just wanted to try and do it for myself. In the last few years EVE has had a small UI overhaul which now allows almost anything to be copied and pasted outside of the game; the bonus is that once-inaccessible scans, inventory lists, and channel member lists are now sources of information to be parsed and worked with. A common tool to come out of all this is a "D-Scan" tool that allows quick parsing and an overview of the results from your directional scanner, and over the last few years a good scan parser has become an essential tool for any FC and scout.

In my app I'm taking a new twist on the tool, trying out a few new views and consolidating some of the loved features from other tools into one that I can use. In the process of developing this I've set myself a goal of not having the tool depend on a database in any way, instead using Redis as a caching backend for the various APIs and data stores needed.

The first big problem you need to work with is the EVE SDE (Static Data Export) and its "Inventory Types" table; this table of around 50,000 rows is something the tool needs in order to categorize a scan correctly. The positive here is that the SDE doesn't update that often: CCP only updates it with content releases, and even then the world isn't going to end if you're not working with the latest and greatest SDE. So my solution was to have a package data file populated with a JSON extract of the data I need, loaded into memory when it's first used; the relative memory increase of 1-2MB of RAM is nothing in the overall scheme of the application.
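As a rough sketch of that idea (the file name, location, and JSON structure here are assumptions, not the real app's layout), the packaged extract can be lazily loaded into a module-level dictionary the first time it's needed:

```python
import json
import os

# Assumed path to the packaged JSON extract of the SDE "Inventory Types".
_INVTYPES_PATH = os.path.join(os.path.dirname(__file__), "data", "invtypes.json")
_invtypes = None


def get_invtypes():
    """Return the type-name lookup dict, loading it from disk only once."""
    global _invtypes
    if _invtypes is None:
        with open(_INVTYPES_PATH) as fh:
            # e.g. {"Drake": {"typeID": 24698, "groupID": 419}, ...}
            _invtypes = json.load(fh)
    return _invtypes
```

After the first call the data sits in memory for the lifetime of the process, so every later lookup is just a dictionary access.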

So what about the actual scans and results? Parsing the d-scan data is relatively quick, as it's essentially a tab-delimited file of a fixed format; combined with a few quick lookups of reference data, all held in a dictionary in memory, even a taxing Jita d-scan gets processed in a few milliseconds without any major optimization. Once the initial parse is done, the results are dumped to JSON, compressed with zlib, then stored under a unique key in Redis with an expiry of an hour. The view that shows the scan results does nothing more than take the key from the URL, attempt to grab the results from Redis, decompress them, and pass the resulting parsed JSON to the template.
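A minimal sketch of that store-and-fetch cycle might look like the following; the route names, key prefix, template name, and the `parse_dscan` helper are all assumptions standing in for the real app's code:

```python
import json
import uuid
import zlib

import redis
from flask import Flask, abort, redirect, render_template, request, url_for

app = Flask(__name__)
cache = redis.StrictRedis()

SCAN_TTL = 3600  # parsed results expire after an hour


@app.route("/scan", methods=["POST"])
def submit_scan():
    # parse_dscan() stands in for the tab-delimited parser described above;
    # it returns plain Python structures that serialise cleanly to JSON.
    results = parse_dscan(request.form["paste"])
    key = uuid.uuid4().hex
    payload = zlib.compress(json.dumps(results).encode("utf-8"))
    cache.setex("scan:" + key, SCAN_TTL, payload)
    return redirect(url_for("view_scan", key=key))


@app.route("/scan/<key>")
def view_scan(key):
    payload = cache.get("scan:" + key)
    if payload is None:
        abort(404)  # the scan has expired or never existed
    results = json.loads(zlib.decompress(payload).decode("utf-8"))
    return render_template("scan.html", results=results)
```

The view stays deliberately dumb: everything expensive happens once at submission time, and the result page is just a Redis fetch plus a template render.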

The deployment target is Heroku, and ideally the Heroku free tier, so this has dictated some of the design. For example, the zlib compression of the resulting scan is there to shave off as many bytes as possible and get the maximum use out of the 25MB Redis services available; with these requests we're CPU-rich but storage-poor, so the trade-off works quite well in this case. So how would this hold up against a DoS? If one person keeps spamming large d-scans into the system, would the Redis server fill up and stop working for everyone? Well, no, as the config will be set to expire the oldest keys when memory runs low, which works perfectly for our tool.
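For illustration, that eviction behaviour corresponds to Redis's maxmemory settings; the sketch below assumes you can issue CONFIG SET against your instance (managed add-ons such as Heroku's Redis services often expose these options through their own dashboard or CLI instead), and the exact policy choice is mine rather than the post's:

```python
import redis

cache = redis.StrictRedis()

# Cap memory at roughly the free-tier allowance and evict the keys closest
# to their expiry first, so the oldest scans fall out before new writes fail.
cache.config_set("maxmemory", "25mb")
cache.config_set("maxmemory-policy", "volatile-ttl")
```

Because every scan is written with the same one-hour TTL, evicting by remaining TTL effectively means the oldest scans go first, which is exactly the behaviour the tool wants under memory pressure.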