<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Posts on Andrew Williams</title><link>https://nikdoof.com/posts/</link><description>Recent content in Posts on Andrew Williams</description><generator>Hugo -- gohugo.io</generator><language>en-gb</language><lastBuildDate>Sun, 28 Jan 2024 09:49:23 +0000</lastBuildDate><atom:link href="https://nikdoof.com/posts/index.xml" rel="self" type="application/rss+xml"/><item><title>A Bad Start to the Year</title><link>https://nikdoof.com/posts/2024/a-bad-start-to-the-year/</link><pubDate>Sun, 28 Jan 2024 09:49:23 +0000</pubDate><guid>https://nikdoof.com/posts/2024/a-bad-start-to-the-year/</guid><description>On the 30th of December, I sat on the sofa finishing a few bits of coding on the Hackspace API for Leigh Hackspace. I moved to put myself in a better position because I felt uncomfortable and felt an ache in my arm, much like banging your funny bone. Within a few seconds, it felt like my entire back had locked up and I couldn&amp;rsquo;t do anything to get in a good position again.</description><content type="html"><![CDATA[<p>On the 30th of December, I sat on the sofa finishing a few bits of coding on the <a href="">Hackspace API</a> for <a href="https://leighhack.org">Leigh Hackspace</a>. I moved to put myself in a better position because I felt uncomfortable and felt an ache in my arm, much like banging your funny bone. Within a few seconds, it felt like my entire back had locked up and I couldn&rsquo;t do anything to get in a good position again. After grumpily stomping through the house for a few minutes Jo suggested we head up to bed for an early night, but after a minute or so the ache suddenly moved to my chest.</p>
<p>I&rsquo;m usually stubborn when it comes to medical matters, refusing to go to the doctor unless something is causing me real inconvenience, but thankfully this time I thought to myself that this wasn&rsquo;t normal, unlike anything I had experienced before, and decided to head into A&amp;E to get checked out.</p>
<p>As it turned out, I had a coronary incident, a heart attack in common lingo. Thankfully it was a &lsquo;small big heart attack&rsquo; in the words of the consultant. I didn&rsquo;t need any surgery, or a &lsquo;stent&rsquo; placed in my artery, as by the time I got to the Liverpool Heart &amp; Chest Hospital the blockage had cleared itself. I have also been diagnosed with heart failure; scary big words, but all it means is that my heart is operating below its expected capacity.</p>
<p>As you can imagine, this was the single most frightening experience of my life. I&rsquo;m now approaching a month after the event, and I think for the first time I can write about this without having some sort of anxiety attack at the thought of it. For a few moments in the hospital, I had thought it was the end, not because I was actually at any risk, but more that my unhealthy lifestyle had finally caught up with me and was presenting real, in-your-face issues.</p>
<p>I have several risk factors: I&rsquo;m obese, Type 2 diabetic, and have high blood pressure. I didn&rsquo;t know at the time that I also had high cholesterol. My blasé attitude about my health came down to not seeing any real impact on my life; I was just a ticking time bomb waiting to go off.</p>
<p>&hellip;and it happened.</p>
<p>My old life is over. I can&rsquo;t do this to myself any more.</p>
<p>In the past month after leaving the hospital, I have started eating a healthy, balanced diet. I&rsquo;ve dropped nearly 10% of my body weight (which is insane to think of as I type this!), my T2D is under control, and my blood pressure is the best it&rsquo;s been in years.</p>
<p><strong>Don&rsquo;t be me</strong>. Everything that happened to me was easily preventable, if only I had cared enough to change. No doctor is going to dismiss you if you want to get an &lsquo;MOT&rsquo; and check yourself over; catching the contributing factors early can stop this happening to you.</p>
<p>Without these wonderful people, I might not be here today. At some point I&rsquo;ll be doing some fundraising for them, but that will be after I recover a bit more.</p>
<ul>
<li><a href="https://www.nwas.nhs.uk">Northwest Ambulance Service</a></li>
<li><a href="https://www.lhch.nhs.uk">Liverpool Heart and Chest Hospital</a></li>
</ul>
]]></content></item><item><title>Found This Week - #1</title><link>https://nikdoof.com/posts/2023/found-this-week-1/</link><pubDate>Wed, 05 Jul 2023 09:13:58 +0000</pubDate><guid>https://nikdoof.com/posts/2023/found-this-week-1/</guid><description>Sometimes I come across interesting content, but usually it&amp;rsquo;s read and then forgotten within a few hours. In the past week I&amp;rsquo;ve decided to start making a note of anything particularly interesting so I can post it on my site. So, here we go&amp;hellip;
On the &amp;lsquo;Back To Work&amp;rsquo; podcast, Episode 617, Merlin discusses the upcoming tvOS 17 and how they&amp;rsquo;ve introduced Continuity to allow you to FaceTime using your iPhone.</description><content type="html"><![CDATA[<p>Sometimes I come across interesting content, but usually it&rsquo;s read and then forgotten within a few hours. In the past week I&rsquo;ve decided to start making a note of anything particularly interesting so I can post it on my site. So, here we go&hellip;</p>
<ul>
<li>
<p>On the &lsquo;Back To Work&rsquo; podcast, <a href="https://backtowork.limo/617">Episode 617</a>, Merlin discusses the upcoming tvOS 17 and how they&rsquo;ve introduced Continuity to allow you to FaceTime using your iPhone.</p>
</li>
<li>
<p>A modified Honda Monkey motorbike <a href="https://www.advpulse.com/adv-news/honda-monkey-breaks-world-record-covering-4183-km-on-single-tank/">breaks the world record</a> for the distance travelled on a single tank of fuel. Sure, the tank is 30 gallons, but impressive nonetheless, especially since they didn&rsquo;t run out of fuel when they got to their target location.</p>
</li>
<li>
<p><a href="https://cabel.com/">Cabel Sasser</a> bought and scanned a auction lot of &ldquo;Backstage Disneyland&rdquo;, which was a unofficial magazine targetted at Disneyland cast members. The full collection is available on <a href="https://archive.org/search?query=subject%3A%22backstage%22+subject%3A%22disneyland%22&amp;and%5B%5D=subject%3A%22magazine%22">archive.org</a>.</p>
</li>
<li>
<p>Related to the last one, Jason Schultz has created an amazing archive of Disneyland assets called <a href="https://mediagraph.io/parkendium/">Parkendium</a>. He was also able to provide Cabel with the missing issues of Backstage Disneyland.</p>
</li>
<li>
<p><a href="https://saurabhs.org">Saurabh</a> made a post about <a href="https://saurabhs.org/advanced-macos-commands">Advanced macOS Commands</a> that you may not know about, which spurred an <a href="https://news.ycombinator.com/item?id=36491704">entire conversation</a> on HackerNews about other useful tools and commands.</p>
</li>
</ul>
<p>That&rsquo;s it for now. Other posts in this series will be tagged with <a href="/tags/found-this-week/">#found-this-week</a>, so check back later.</p>
]]></content></item><item><title>A Terrible User Experience</title><link>https://nikdoof.com/posts/2023/a-terrible-user-experience/</link><pubDate>Sat, 01 Jul 2023 07:07:32 +0100</pubDate><guid>https://nikdoof.com/posts/2023/a-terrible-user-experience/</guid><description>This is a bit of a vent, I may be wrong, I may be a little caught up in the moment, but a simple task should not be this difficult.
I&amp;rsquo;m currently trying to set up a Kiosk-mode Raspberry Pi for Leigh Hackspace to act as a &amp;lsquo;Hackspace Status&amp;rsquo; screen, showing some stats and a rotating list of artwork for upcoming events.
I decided to just try and build something myself, but while I was using the Raspberry Pi Imager I spotted a section for purpose-specific OS installations, and within there was Anthias.</description><content type="html"><![CDATA[<p>This is a bit of a vent, I may be wrong, I may be a little caught up in the moment, but a simple task should not be <em>this difficult</em>.</p>
<p>I&rsquo;m currently trying to set up a Kiosk-mode Raspberry Pi for <a href="https://leighhack.org">Leigh Hackspace</a> to act as a &lsquo;Hackspace Status&rsquo; screen, showing some stats and a rotating list of artwork for upcoming events.</p>
<p>I decided to just try and build something myself, but while I was using the Raspberry Pi Imager I spotted a section for purpose-specific OS installations, and within there was <a href="https://anthias.screenly.io">Anthias</a>. &ldquo;Hey, that looks good, Web UI and does what I need it to do&rdquo; I thought to myself. Anthias is the rebadged Screenly OSE; it was renamed to split it away from the commercial Screenly option and avoid confusion. Open-source versions of commercial products never really work well, but maybe it&rsquo;ll do the bare minimum I need.</p>
<p>I got it installed on a Raspberry Pi 3, booted it up, and it sat on the boot screen and did nothing else. Oh, did it need some pre-configuration or something? I plugged the SD card into another system and it mounted &lsquo;resin-boot&rsquo;, a sign it uses BalenaOS rather than Raspbian. I headed back to the website and checked for documentation: nothing on the site.</p>
<p>So, I found the GitHub repository: &ldquo;General Documentation&rdquo;, great. First port of call in the <a href="https://github.com/Screenly/Anthias/blob/master/docs/README.md">document</a>: &ldquo;SSH into the host&rdquo;. Try to connect: &ldquo;connection refused&rdquo;. Oh, OK, do I need to enable SSH? The document links to the official Raspbian docs on how to do that, but it isn&rsquo;t running Raspbian&hellip;</p>
<p>All official documentation has been exhausted, time to consult the forums. Many people seem to report the same as me:</p>
<ul>
<li><a href="https://forums.screenly.io/t/i-simply-cant-get-anthias-to-work/1005">I simply can’t get Anthias to work!</a></li>
<li><a href="https://forums.screenly.io/t/installing-anthias-pi3-from-the-raspberry-pi-imager-v1-7-4-just-hangs-at-the-logo/1046">Installing Anthias (pi3) from the Raspberry Pi Imager v1.7.4 just hangs at the logo</a></li>
<li><a href="https://forums.screenly.io/t/only-shows-the-anthias-logo/935">Only shows the Anthias Logo</a></li>
</ul>
<p>But it quickly became obvious what the issue was: using the official image, which is built on BalenaOS.</p>
<p>As it turns out, it straight up doesn&rsquo;t work, to the point that every response seems to be &lsquo;I used Raspberry Pi OS Lite and installed it via the script, and it works there&rsquo;; even the developer of the product posts the same.</p>
<p>I understand that Screenly has no real investment in this open-source version of their product, but straight-up shipping a broken version of your product into the official Raspberry Pi imaging tool seems dumb on a new level.</p>
<p>I&rsquo;ve given up and moved on to trying FullPageOS, it might not have the flashy UI but I can manage that via Ansible.</p>
]]></content></item><item><title>Using an old webcam on modern Linux</title><link>https://nikdoof.com/posts/2023/old-webcam-on-modern-linux/</link><pubDate>Wed, 28 Jun 2023 08:28:12 +0100</pubDate><guid>https://nikdoof.com/posts/2023/old-webcam-on-modern-linux/</guid><description>Linux is known for its good hardware support of even ancient pieces of kit. Sure, from time to time the kernel devs deprecate drivers and architectures that are no longer supported, but for the most part, you can pick up an old piece of hardware and get it running on a major release or two from the current release.
Yesterday I attempted this. At Leigh Hackspace we wanted a webcam on our out-of-band access box, just so we (as in the infra guys) can look at the status of the rack without visiting the space.</description><content type="html"><![CDATA[<p>Linux is known for its good hardware support of even ancient pieces of kit. Sure, from time to time the kernel devs deprecate drivers and architectures that are no longer supported, but for the most part, you can pick up an old piece of hardware and get it running on a major release or two from the current release.</p>
<p>Yesterday I attempted this. At <a href="https://leighhack.org">Leigh Hackspace</a> we wanted a webcam on our out-of-band access box, just so we (as in the infra guys) can look at the status of the rack without visiting the space. After having a dig around our electronics area I found a box of old Trust WB-1200P webcams; they&rsquo;re from 2009 and barely scrape 0.1 megapixels, but they should be good enough to show the blinkenlights in the rack and tell us if we have an issue (e.g. a server doesn&rsquo;t have power).</p>
<p>I grab one, plug it into the OOB Raspberry Pi running the latest Raspbian (Debian 11), and it is detected and loads the drivers! It <em>works</em>??? Wow.</p>
<p>The Trust WB-1200P is a rebadged &lsquo;Pixart Imaging Inc.&rsquo; PAC207, a pre-UVC webcam handled by the <code>gspca</code> driver set in the kernel. Support for it is included in the current upstream <code>main</code>, and it supports V4L2. But getting it to load a driver is only the first step of the battle. Pre-UVC devices have their own special quirks and issues, usually related to the pixel and video formats they output.</p>
<p>In this case, the PAC207 has its own special pixel format that the application you wish to use needs to understand and handle. While backward-compatible kernel drivers are there, application support moves on. Most of the V4L2 tools I attempted to use would either throw an error, crash, or cause the driver to spit out errors into <code>dmesg</code>:</p>
<pre tabindex="0"><code>[39011.323706] gspca_main: set alt 0 err -32
[39011.323771] pac207 1-1.2:1.0: submit int URB failed with error -2
[39181.529143] Transfer to device 5 endpoint 0x5 frame 1542 failed - FIQ reported NYET. Data may have been lost.
[39191.828144] Transfer to device 5 endpoint 0x5 frame 1601 failed - FIQ reported NYET. Data may have been lost.
[39450.446323] gspca_main: set alt 0 err -22
[39454.058295] gspca_main: pac207-2.14.0 probing 093a:2468
</code></pre><p>I suspected it might be due to the camera&rsquo;s output format; most applications expect <code>YUV</code> or Motion JPEG, neither of which this camera supports. I tried Googling the camera&rsquo;s model to find some more information, then searched GitHub to see if anything caught my eye.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-c" data-lang="c"><span class="line"><span class="cl"><span class="cp">#define V4L2_PIX_FMT_SPCA508  v4l2_fourcc(&#39;S&#39;, &#39;5&#39;, &#39;0&#39;, &#39;8&#39;) </span><span class="cm">/* YUVY per line */</span><span class="cp">
</span></span></span><span class="line"><span class="cl"><span class="cp">#define V4L2_PIX_FMT_SPCA561  v4l2_fourcc(&#39;S&#39;, &#39;5&#39;, &#39;6&#39;, &#39;1&#39;) </span><span class="cm">/* compressed GBRG bayer */</span><span class="cp">
</span></span></span><span class="line"><span class="cl"><span class="cp">#define V4L2_PIX_FMT_PAC207   v4l2_fourcc(&#39;P&#39;, &#39;2&#39;, &#39;0&#39;, &#39;7&#39;) </span><span class="cm">/* compressed BGGR bayer */</span><span class="cp">
</span></span></span><span class="line"><span class="cl"><span class="cp">#define V4L2_PIX_FMT_MR97310A v4l2_fourcc(&#39;M&#39;, &#39;3&#39;, &#39;1&#39;, &#39;0&#39;) </span><span class="cm">/* compressed BGGR bayer */</span><span class="cp">
</span></span></span><span class="line"><span class="cl"><span class="cp">#define V4L2_PIX_FMT_JL2005BCD v4l2_fourcc(&#39;J&#39;, &#39;L&#39;, &#39;2&#39;, &#39;0&#39;) </span><span class="cm">/* compressed RGGB bayer */</span><span class="cp">
</span></span></span></code></pre></div><p>Hello, <code>V4L2_PIX_FMT_PAC207</code>? That is a special format that so happens to have the same name as the model of the webcam.</p>
<p>Further spelunking using this value directed me to a configuration file for <a href="https://motion-project.github.io">Motion</a>, a relatively old application for streaming webcams:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-shell" data-lang="shell"><span class="line"><span class="cl"><span class="c1"># V4L2_PIX_FMT_MJPEG then motion will by default use V4L2_PIX_FMT_MJPEG.</span>
</span></span><span class="line"><span class="cl"><span class="c1"># Setting v4l2_palette to 2 forces motion to use V4L2_PIX_FMT_SBGGR8</span>
</span></span><span class="line"><span class="cl"><span class="c1"># instead.</span>
</span></span><span class="line"><span class="cl"><span class="c1"># V4L2_PIX_FMT_SGRBG8  : 5  &#39;GRBG&#39;</span>
</span></span><span class="line"><span class="cl"><span class="c1"># V4L2_PIX_FMT_PAC207  : 6  &#39;P207&#39;</span>
</span></span><span class="line"><span class="cl"><span class="c1"># V4L2_PIX_FMT_PJPG    : 7  &#39;PJPG&#39;</span>
</span></span></code></pre></div><p>Motion is still maintained and still available in most OS repositories, and it did exactly what I needed it to do: export a camera to a URL so we can look at it. Win-win. I installed Motion, added <code>v4l2_palette 6</code> to the configuration file, started up the daemon and hit the URL. It loaded and&hellip; showed a green image and more <code>dmesg</code> errors.</p>
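<p>For reference, the relevant part of the Motion configuration ended up looking something like this; the device path, resolution, and port here are examples, and option names vary a little between Motion versions:</p>
<pre tabindex="0"><code># Force the PAC207 pixel format (palette 6) instead of auto-detection
v4l2_palette 6
videodevice /dev/video0
# CIF (352x288) is the most this sensor can manage
width 352
height 288
# Serve the stream over HTTP so the rack can be checked remotely
stream_port 8081
stream_localhost off
</code></pre>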
<p>As it turns out, not everything is still compatible. The <code>pac207</code> driver is a V4L1-type driver and requires a small amount of wrangling to get working correctly. Thankfully, the V4L2 developers provide a compatibility library that can be loaded using <code>LD_PRELOAD</code> to fix the major issues.</p>
<p>All that&rsquo;s needed is the <code>libv4l-0</code> package on Debian; then, using a <code>systemd</code> service override, I was able to insert the environment variable into the startup:</p>
<p><strong>/etc/systemd/system/motion.service.d/overrides.conf</strong></p>
<pre tabindex="0"><code>[Service]
Environment=LD_PRELOAD=/usr/lib/arm-linux-gnueabihf/libv4l/v4l1compat.so
</code></pre><p>Restart the service and&hellip;</p>
<p><img src="stream.jpeg" alt="A blurry image from the webcam"></p>
<p>Right, 0.1MP, and probably not focused correctly; autofocus wasn&rsquo;t a thing on this!</p>
<hr>
<p>I found out later that Trust still has this device, from 2009, <a href="https://www.trust.com/en/product/13405-mini-webcam-wb-1200p">listed on their website&rsquo;s support section</a>. It even has Windows 8 drivers!</p>
]]></content></item><item><title>Adopting a Ubiquiti USW-Mini via DHCP</title><link>https://nikdoof.com/posts/2023/adopting-a-usw-mini/</link><pubDate>Sat, 04 Mar 2023 23:03:22 +0000</pubDate><guid>https://nikdoof.com/posts/2023/adopting-a-usw-mini/</guid><description>The USW Mini is a tiny, 5 port, PoE or USB-C powered switch from Ubiquiti. As part of my homelab reorganization, I picked two up to serve as small spur switches for the downstairs media hub and my office desk. The problem is that, unlike the rest of Ubiquiti&amp;rsquo;s switching range, these devices are relatively dumb and can be a bit of a pain to get adopted into the Unifi console if you have anything unusual on your network.</description><content type="html"><![CDATA[<p>The USW Mini is a tiny, 5 port, PoE or USB-C powered switch from Ubiquiti. As part of my homelab reorganization, I picked two up to serve as small spur switches for the downstairs media hub and my office desk. The problem is that, unlike the rest of Ubiquiti&rsquo;s switching range, these devices are relatively dumb and can be a bit of a pain to get adopted into the Unifi console if you have anything unusual on your network.</p>
<p>In my case, my controller is a pod on my Kubernetes cluster and doesn&rsquo;t have much of a Layer 2 presence on the network. The USW Mini depends on the L2 features of the controller to enable adoption, but with a little bit of work, you can get them added. The switches support using DHCP to gather their configuration information, but information on how to configure this outside of Ubiquiti&rsquo;s gateway devices is difficult to find.
</p>
<p>First of all, they make use of DHCP Option 43, and the value in this option should be a hex-encoded version of your controller&rsquo;s IP address. I use pfSense as my DHCP server, so here is what I did:</p>
<ul>
<li>Go to the <strong>Services</strong> -&gt; <strong>DHCP Server</strong> in pfSense</li>
<li>On the interface your switch is on, scroll to the bottom of the page and click to expand <strong>Additional BOOTP/DHCP Options</strong>
<ul>
<li>In the <strong>Option</strong> box, put <code>43</code></li>
<li>Set the <strong>Type</strong> value to <code>String</code></li>
<li>Convert the IP address of your controller into hex format</li>
<li>Add <code>01:04</code> to the <strong>Value</strong> field, followed by the hex IP address, e.g. <code>01:04:FF:FF:FF:FF</code></li>
</ul>
</li>
<li>Click <strong>Save</strong></li>
</ul>
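<p>If you&rsquo;d rather not convert the address by hand: the value is just <code>01</code> (the controller-address sub-option), <code>04</code> (the payload length), then the controller IP&rsquo;s four octets in hex. A small Python sketch of my own to generate the string for the pfSense field:</p>
<pre tabindex="0"><code>def unifi_option43(ip):
    # Build the DHCP Option 43 value for Unifi adoption:
    # sub-option 01, length 04, then the IP's octets in hex.
    octets = [int(o) for o in ip.split('.')]
    return ':'.join('%02X' % b for b in [1, 4] + octets)

print(unifi_option43('192.168.1.10'))  # 01:04:C0:A8:01:0A
</code></pre>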
<p>Now, you can restart your USW-Mini, and it should appear as adoptable in the controller interface.</p>
]]></content></item><item><title>Fixing Mastodon's Cached Media</title><link>https://nikdoof.com/posts/2023/fixing-mastodon-cached-media/</link><pubDate>Sun, 26 Feb 2023 12:59:40 +0000</pubDate><guid>https://nikdoof.com/posts/2023/fixing-mastodon-cached-media/</guid><description>Losing your cached media folder on Mastodon can be a pain. For days your instance will show broken images and issues to your end users. I operate a small instance at incognitus.net for a small community of friends, and recently I needed to move the storage from one NAS to another; to save the pain of a slow transfer I decided on nuking the cache and sorting it out after the fact.</description><content type="html"><![CDATA[<p>Losing your cached media folder on Mastodon can be a pain. For days your instance will show broken images and issues to your end users. I operate a small instance at <a href="https://mastodon.incognitus.net">incognitus.net</a> for a small community of friends, and recently I needed to move the storage from one NAS to another; to save the pain of a slow transfer I decided on nuking the cache and sorting it out after the fact. As it turns out, this was a terrible mistake and not easy to remedy.</p>
<p>After trawling GitHub and some discussions around Mastodon, I came across a useful <a href="https://github.com/mastodon/mastodon/issues/14681#issuecomment-1364721824">post</a> by <a href="https://github.com/keskival">Tero Keski-Valkama</a> detailing how to use the Rails console to run some fixes over your data. I noticed that some were a little heavy-handed and caused a lot of media to be re-downloaded, so I used my very basic Ruby knowledge to try and reduce the workload on the DB.</p>
<p>So here is the snippet I came up with:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-ruby" data-lang="ruby"><span class="line"><span class="cl"><span class="no">MediaAttachment</span><span class="o">.</span><span class="n">cached</span><span class="o">.</span><span class="n">where</span><span class="o">.</span><span class="n">not</span><span class="p">(</span><span class="ss">remote_url</span><span class="p">:</span> <span class="s1">&#39;&#39;</span><span class="p">)</span><span class="o">.</span><span class="n">each</span> <span class="k">do</span> <span class="o">|</span><span class="n">attachment</span><span class="o">|</span>
</span></span><span class="line"><span class="cl">  <span class="k">def</span> <span class="nf">save</span> <span class="o">=</span> <span class="kp">false</span>
</span></span><span class="line"><span class="cl">  <span class="k">if</span> <span class="n">attachment</span><span class="o">.</span><span class="n">file?</span> <span class="o">&amp;&amp;</span> <span class="o">!</span><span class="n">attachment</span><span class="o">.</span><span class="n">file</span><span class="o">.</span><span class="n">exists?</span>
</span></span><span class="line"><span class="cl">    <span class="n">attachment</span><span class="o">.</span><span class="n">file</span><span class="o">.</span><span class="n">destroy</span> 
</span></span><span class="line"><span class="cl">    <span class="n">save</span> <span class="o">=</span> <span class="kp">true</span>
</span></span><span class="line"><span class="cl">  <span class="k">end</span>
</span></span><span class="line"><span class="cl">  <span class="k">if</span> <span class="n">attachment</span><span class="o">.</span><span class="n">thumbnail?</span> <span class="o">&amp;&amp;</span> <span class="o">!</span><span class="n">attachment</span><span class="o">.</span><span class="n">thumbnail</span><span class="o">.</span><span class="n">exists?</span>
</span></span><span class="line"><span class="cl">    <span class="n">attachment</span><span class="o">.</span><span class="n">thumbnail</span><span class="o">.</span><span class="n">destroy</span>
</span></span><span class="line"><span class="cl">    <span class="n">save</span> <span class="o">=</span> <span class="kp">true</span>
</span></span><span class="line"><span class="cl">  <span class="k">end</span>
</span></span><span class="line"><span class="cl">  <span class="k">if</span> <span class="n">save</span>
</span></span><span class="line"><span class="cl">    <span class="nb">puts</span> <span class="n">attachment</span>
</span></span><span class="line"><span class="cl">    <span class="n">attachment</span><span class="o">.</span><span class="n">save</span>
</span></span><span class="line"><span class="cl">  <span class="k">end</span>
</span></span><span class="line"><span class="cl"><span class="k">end</span>
</span></span></code></pre></div><p>This will check every cached media attachment, and if the file or thumbnail doesn&rsquo;t exist it&rsquo;ll clear them and save the record. The next time Mastodon looks at that MediaAttachment it should identify that the file needs re-downloading from the original source.</p>
<p>Hope this is useful for someone. It took about 5-10 minutes to run on my small instance, with about 50GB of cached files (according to the database).</p>
<p><strong>[Edit - 2023/02/07]</strong></p>
<p>As it turns out, this still isn&rsquo;t a magic bullet for resolving missing media; some items are still showing as dead images, but it did take care of the vast majority. I should spend more time investigating why they&rsquo;ve been missed, but by that time my cronjob to clean up remote media would have already nuked them.</p>
]]></content></item><item><title>Bring Back Blogging!</title><link>https://nikdoof.com/posts/2022/bring-back-blogging/</link><pubDate>Sat, 31 Dec 2022 08:51:41 +0000</pubDate><guid>https://nikdoof.com/posts/2022/bring-back-blogging/</guid><description>No, really, that&amp;rsquo;s all of it. Ash Huang and Ryan Putnam have started an experiment to try and get the ball rolling, so to speak. The aim is to create three blog posts during January to promote blogging, RSS, and the &amp;lsquo;old way&amp;rsquo; of doing things before social media and their ilk.
The site at Bring Back Blogging holds a list of everyone involved, the barrier for entry is small, and if you have your own publishing space then I&amp;rsquo;d suggest you join in!</description><content type="html"><![CDATA[<p>No, really, that&rsquo;s all of it. <a href="https://ashsmash.com">Ash Huang</a> and <a href="https://ryanputn.am">Ryan Putnam</a> have started an experiment to try and get the ball rolling, so to speak. The aim is to create three blog posts during January to promote blogging, RSS, and the &lsquo;old way&rsquo; of doing things before social media and their ilk.</p>
<p>The site at <a href="https://bringback.blog">Bring Back Blogging</a> holds a list of everyone involved, the barrier for entry is small, and if you have your own publishing space then I&rsquo;d suggest you join in!</p>
]]></content></item><item><title>Patching Mastodon's 'tootctl' in Kubernetes</title><link>https://nikdoof.com/posts/2022/patching-mastodon-tootctl-in-kubernetes/</link><pubDate>Wed, 28 Dec 2022 16:09:40 +0000</pubDate><guid>https://nikdoof.com/posts/2022/patching-mastodon-tootctl-in-kubernetes/</guid><description>Mastodon has a storage problem. The current version (which is 4.0.2 at time of writing) stores remote users&amp;rsquo; avatars and header images in the cache, but no function exists to clean down these cached images. For the last month, the instance I manage has ballooned to 73GB of these files. The &amp;lsquo;fix&amp;rsquo; was to manually remove the files and then run a &amp;lsquo;remove orphans&amp;rsquo; job to clean up the mess. Not exactly ideal.</description><content type="html"><![CDATA[<p>Mastodon has a storage problem. The current version (which is <code>4.0.2</code> at time of writing) stores remote users&rsquo; avatars and header images in the cache, but no function exists to clean down these cached images. For the last month, the instance I manage has ballooned to 73GB of these files. The &lsquo;fix&rsquo; was to manually remove the files and then run a &lsquo;remove orphans&rsquo; job to clean up the mess. Not exactly ideal.</p>
<p>Thankfully, <a href="https://symboli.cyou/@evan">Evan Philip</a> submitted a <a href="https://github.com/mastodon/mastodon/pull/22149">PR</a> to clean them up using the existing <code>tootctl</code> command. The change consisted of a single file and it is easy to patch into an existing installation while awaiting <code>4.0.3</code> or <code>4.1</code>. For me, our Mastodon instance is run in a home Kubernetes cluster, so it&rsquo;s not a simple case of replacing a file on a filesystem. This type of patching can be done in Kubernetes, it&rsquo;s just not as obvious how to do it.</p>
<p><code>ConfigMap</code> and <code>Secret</code> objects can be mounted into a Pod much like a PVC. So by creating a <code>ConfigMap</code> and defining it as a volume in a Pod you can override any files within the container itself:</p>
<p>First of all, you want to create a <code>ConfigMap</code> with the new file. The filename here doesn&rsquo;t matter, but for ease, you can keep it consistent:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-yaml" data-lang="yaml"><span class="line"><span class="cl"><span class="nn">---</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">apiVersion</span><span class="p">:</span><span class="w"> </span><span class="l">v1</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">kind</span><span class="p">:</span><span class="w"> </span><span class="l">ConfigMap</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">metadata</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l">media-cli-patch</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">namespace</span><span class="p">:</span><span class="w"> </span><span class="l">web</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">data</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">media_cli.rb</span><span class="p">:</span><span class="w"> </span><span class="p">|</span><span class="sd">
</span></span></span><span class="line"><span class="cl"><span class="sd">    # frozen_string_literal: true
</span></span></span><span class="line"><span class="cl"><span class="sd">
</span></span></span><span class="line"><span class="cl"><span class="sd">    require_relative &#39;../../config/boot&#39;
</span></span></span><span class="line"><span class="cl"><span class="sd">    require_relative &#39;../../config/environment&#39;
</span></span></span><span class="line"><span class="cl"><span class="sd">    require_relative &#39;cli_helper&#39;
</span></span></span><span class="line"><span class="cl"><span class="sd">
</span></span></span><span class="line"><span class="cl"><span class="sd">    module Mastodon
</span></span></span><span class="line"><span class="cl"><span class="sd">      class MediaCLI &lt; Thor
</span></span></span><span class="line"><span class="cl"><span class="sd">        include ActionView::Helpers::NumberHelper
</span></span></span><span class="line"><span class="cl"><span class="sd">        include CLIHelper
</span></span></span><span class="line"><span class="cl"><span class="sd">
</span></span></span><span class="line"><span class="cl"><span class="sd">    ...</span><span class="w">    
</span></span></span></code></pre></div><p>Then, you need to update your Pod definition to load in the <code>ConfigMap</code> object as a volume. First, add it to the <code>volumes</code> definition and give it a name:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-yaml" data-lang="yaml"><span class="line"><span class="cl"><span class="w">  </span><span class="nt">volumes</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span>- <span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l">system</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">persistentVolumeClaim</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">claimName</span><span class="p">:</span><span class="w"> </span><span class="l">mastodon-system</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span>- <span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l">media-cli-patch</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">configMap</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l">media-cli-patch</span><span class="w">
</span></span></span></code></pre></div><p>Then, add it to your <code>volumeMounts</code> section:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-yaml" data-lang="yaml"><span class="line"><span class="cl"><span class="w">  </span><span class="nt">volumeMounts</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span>- <span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l">system</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">mountPath</span><span class="p">:</span><span class="w"> </span><span class="l">/opt/mastodon/public/system</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span>- <span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l">media-cli-patch</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">mountPath</span><span class="p">:</span><span class="w"> </span><span class="l">/opt/mastodon/lib/mastodon/media_cli.rb</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">subPath</span><span class="p">:</span><span class="w"> </span><span class="l">media_cli.rb</span><span class="w">
</span></span></span></code></pre></div><p>The <code>mountPath</code> is the full path within the Pod where the file is to be mounted, and the <code>subPath</code> value refers to the filename you gave it in the <code>ConfigMap</code>. Push the resource to the cluster and you&rsquo;ll have a pod running with the modified file. I use a <code>CronJob</code> to run these routine clean-up jobs; the Web and Sidekiq instances don&rsquo;t need the patched file, as it only applies to commands run via <code>tootctl</code>.</p>
<p>Here is my full <code>CronJob</code> for context:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-yaml" data-lang="yaml"><span class="line"><span class="cl"><span class="nn">---</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">apiVersion</span><span class="p">:</span><span class="w"> </span><span class="l">batch/v1</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">kind</span><span class="p">:</span><span class="w"> </span><span class="l">CronJob</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">metadata</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l">mastodon-cron</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">namespace</span><span class="p">:</span><span class="w"> </span><span class="l">web</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">spec</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">schedule</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;0 * * * *&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">successfulJobsHistoryLimit</span><span class="p">:</span><span class="w"> </span><span class="m">0</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">failedJobsHistoryLimit</span><span class="p">:</span><span class="w"> </span><span class="m">1</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">concurrencyPolicy</span><span class="p">:</span><span class="w"> </span><span class="l">Forbid</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">jobTemplate</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">spec</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">template</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">spec</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">          </span><span class="nt">containers</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">            </span>- <span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l">mastodon-cleanup</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">              </span><span class="nt">image</span><span class="p">:</span><span class="w"> </span><span class="l">tootsuite/mastodon:v4.0.2</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">              </span><span class="nt">command</span><span class="p">:</span><span class="w"> </span><span class="p">[</span><span class="s2">&#34;/bin/sh&#34;</span><span class="p">]</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">              </span><span class="nt">args</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">                </span><span class="p">[</span><span class="s2">&#34;-c&#34;</span><span class="p">,</span><span class="w"> </span><span class="s2">&#34;tootctl media remove --days=1 --prune-profiles &amp;&amp; tootctl preview_cards remove --days=14&#34;</span><span class="p">]</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">              </span><span class="nt">envFrom</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">                </span>- <span class="nt">configMapRef</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">                    </span><span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l">mastodon-config</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">                </span>- <span class="nt">secretRef</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">                    </span><span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l">mastodon-secrets</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">                </span>- <span class="nt">secretRef</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">                    </span><span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l">mastodon-postgresql-auth</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">              </span><span class="nt">volumeMounts</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">                </span>- <span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l">system</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">                  </span><span class="nt">mountPath</span><span class="p">:</span><span class="w"> </span><span class="l">/opt/mastodon/public/system</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">                </span>- <span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l">media-cli-patch</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">                  </span><span class="nt">mountPath</span><span class="p">:</span><span class="w"> </span><span class="l">/opt/mastodon/lib/mastodon/media_cli.rb</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">                  </span><span class="nt">subPath</span><span class="p">:</span><span class="w"> </span><span class="l">media_cli.rb</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">          </span><span class="nt">restartPolicy</span><span class="p">:</span><span class="w"> </span><span class="l">Never</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">          </span><span class="nt">volumes</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">            </span>- <span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l">system</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">              </span><span class="nt">persistentVolumeClaim</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">                </span><span class="nt">claimName</span><span class="p">:</span><span class="w"> </span><span class="l">mastodon-system</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">            </span>- <span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l">media-cli-patch</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">              </span><span class="nt">configMap</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">                </span><span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l">media-cli-patch</span><span class="w">
</span></span></span></code></pre></div>]]></content></item><item><title>Closing My Workshop</title><link>https://nikdoof.com/posts/2022/closing-my-workshop/</link><pubDate>Tue, 03 May 2022 07:44:37 +0100</pubDate><guid>https://nikdoof.com/posts/2022/closing-my-workshop/</guid><description>Over the bank holiday weekend I had the sad job of closing my workshop.
My workshop started as a small hobby hole. My wife had gifted me a wood turning course in 2016; I was unsure if it was something I&amp;rsquo;d be interested in, but I did have the drive to try something different from IT. In conversation I had brought up the idea of doing blacksmithing one day, and my wife looked for something but couldn&amp;rsquo;t find anywhere local that ran a from-the-basics course.</description><content type="html"><![CDATA[<p>Over the bank holiday weekend I had the sad job of closing my workshop.</p>
<p>My workshop started as a small hobby hole. My wife had gifted me a wood turning course in 2016; I was unsure if it was something I&rsquo;d be interested in, but I did have the drive to try something different from IT. In conversation I had brought up the idea of doing blacksmithing one day, and my wife looked for something but couldn&rsquo;t find anywhere local that ran a from-the-basics course. Instead, she found a <a href="https://www.cheshirewoodworking.co.uk/two-day-basic-woodturning">wood turning course</a> run by Cheshire Wood Working. I was initially unsure, but after the two-day course I was transfixed.</p>




  

<figure style="padding: 0.25rem; margin: 2rem 0;">
  <img style="max-width: 100%; width: auto; height: auto;" src="/posts/2022/closing-my-workshop/course_hud3028cbe5a369b846f62ba4f05e0e3eb_1638601_900x0_resize_q75_box.jpg" width="900" height="675">
  <figcaption>
  <small>
    
  </small>
  </figcaption>
</figure>
<p>I had been working on the idea of buying a lathe, pricing up a few options, but was always stuck on where I could put it. I live in a relatively large town house, but I have no garage or space in the garden for a shed. I&rsquo;m not sure how I came to look at storage units, but I found a local company that offered powered workshop spaces; after a quick conversation I jumped at the opportunity and signed a lease. It was a 150 sq ft storage unit at around £320/month, exceedingly expensive, but it gave me the space I needed. The same day I headed down to <a href="https://www.axminstertools.com/">Axminster Tools</a> and purchased my lathe.</p>




  

<figure style="padding: 0.25rem; margin: 2rem 0;">
  <img style="max-width: 100%; width: auto; height: auto;" src="/posts/2022/closing-my-workshop/lathe_hu2a4b78c84e15d90282c7c1186d9fc6b2_1529509_900x0_resize_q75_box.jpg" width="900" height="675">
  <figcaption>
  <small>
    
  </small>
  </figcaption>
</figure>
<p>After paying a substantial amount of money for the last year for what was essentially a hobby, I decided I needed to make at least some income to subsidise the overall cost. One day, while I was watching <a href="https://www.youtube.com/c/boardgamegeek">Game Night!</a> on the BoardGameGeek channel, I spotted a large meeple in the background of the video. A meeple is a common type of token used across a lot of modern board games, first popularised by <a href="https://boardgamegeek.com/boardgame/822/carcassonne">Carcassonne</a>. I searched eBay and Etsy and discovered that very few people were making large meeples, as in bigger than three inches. I thought maybe I had found a market.</p>




  

<figure style="padding: 0.25rem; margin: 2rem 0;">
  <img style="max-width: 100%; width: auto; height: auto;" src="/posts/2022/closing-my-workshop/meeple_hub090fefad8f82c0d35f1c0e707dc3c18_2272681_900x0_resize_q75_box.jpg" width="900" height="675">
  <figcaption>
  <small>
    
  </small>
  </figcaption>
</figure>
<p>I produced a selection of solid wood meeples, painted plywood meeples, and some other gaming-related items. Meeples became a major income source and subsidised the workshop and my hobbies for three years. With that, my tools and workshop expanded as I learned new skills to create interesting and different <em><a href="https://en.wikipedia.org/wiki/Treen_(object)">treen</a></em>. Working in the workshop was something so different from IT; when people asked why, I always used this to explain:</p>
<blockquote>
<p>In IT you can work hard all week and produce nothing; with woodworking you can work hard and you always end up with something, even if it is a pile of scrap wood and sawdust.</p>
</blockquote>




  

<figure style="padding: 0.25rem; margin: 2rem 0;">
  <img style="max-width: 100%; width: auto; height: auto;" src="/posts/2022/closing-my-workshop/working_hu570ddffe8741221e5a9accac9de95534_2254807_900x0_resize_q75_box.jpg" width="900" height="675">
  <figcaption>
  <small>
    
  </small>
  </figcaption>
</figure>
<p>Unfortunately, COVID-19 started up in early 2020 and kept me locked out of the workshop for a few months; orders dropped off and it became difficult to sustain the workshop. Orders did pick up in mid-2020, and it ended up as my most successful year. 2021 had a slight drop-off from there, but in early 2022 I hit zero: zero orders for three months. COVID had been quite hard on my finances for numerous reasons, and as long as the workshop contributed some money back into the pot I could take the hit of the remaining rental, but it wasn&rsquo;t sustainable. At zero sales I had to make the hard decision to close it.</p>
<p>I&rsquo;m still not 100% sure what happened, but I assume that 2020/2021 was so successful due to the worldwide lockdowns: people couldn&rsquo;t get out to spend their money on other things, so gifts and oddities were an easy spend. Now that travel has resumed and the world is opening up again in 2022, I guess everyone has better things to spend on. Maybe, like me, pandemic finances finally caught up with people, and with rising fuel costs and inflation we&rsquo;ve just hit a breaking point. I don&rsquo;t begrudge anyone; my customers were subsidising a hobby and side business, after all.</p>
<p>As my wife says, everything is temporary; maybe in a year or so I can start again. For the moment, thank you Unit G/055 - you will be missed.</p>




  

<figure style="padding: 0.25rem; margin: 2rem 0;">
  <img style="max-width: 100%; width: auto; height: auto;" src="/posts/2022/closing-my-workshop/empty_hu19cf6bd029061ddc4ef75e71707cfe44_3129939_900x0_resize_q75_box.jpg" width="900" height="675">
  <figcaption>
  <small>
    
  </small>
  </figcaption>
</figure>
]]></content></item><item><title>Automating a Blogroll in Hugo</title><link>https://nikdoof.com/posts/2022/automating-a-blogroll-in-hugo/</link><pubDate>Sat, 30 Apr 2022 08:49:36 +0100</pubDate><guid>https://nikdoof.com/posts/2022/automating-a-blogroll-in-hugo/</guid><description>Recently I&amp;rsquo;ve spotted quite an uptick in discussions about blogging and content generation in this quickly evolving federated and user-hosted future. Blogs are &amp;ldquo;cool&amp;rdquo; again, tried-and-tested RSS is the tool to subscribe with, and OPML is back as the method to share your feeds.
For me, RSS never really went anywhere. People like to call it dead after Google Reader shut down; if anything, it unified the remaining users into generating new tools and applications to consume RSS.</description><content type="html"><![CDATA[<p>Recently I&rsquo;ve spotted quite an uptick in discussions about blogging and content generation in this quickly evolving federated and user-hosted future. Blogs are &ldquo;cool&rdquo; again, tried-and-tested RSS is the tool to subscribe with, and OPML is back as the method to share your feeds.</p>
<p>For me, RSS never really went anywhere. People like to call it dead after Google Reader shut down; if anything, it unified the remaining users into generating new tools and applications to consume RSS. On the website front, RSS is still there, just not front and centre, and most large websites still publish their RSS feeds. I&rsquo;ve personally been using <a href="https://miniflux.app">Miniflux</a> as my primary RSS feed reader for a couple of years now; it&rsquo;s incredibly easy to self-host, with only PostgreSQL as a dependency, and it has a lot of tools built in to manage even difficult RSS feeds that mangle their output.</p>
<p>Tom Critchlow popped up on Hacker News with a post about <a href="https://tomcritchlow.com/2022/04/21/new-rss/">Increasing the surface area of blogging</a>, which discusses RSS, OPML, and personal feeds of news and information. Sharing is a key component of this ecosystem, so I thought I&rsquo;d take a crack at showing my Miniflux feeds on my Hugo website.</p>
<h2 id="the-script">The Script</h2>
<p>First of all, I needed to get the OPML data out of Miniflux. This is quite simple to do, as it provides an API that is amazingly simple to use. Using Python 3 and Requests, with no other external modules, I put together a small script that extracts the OPML from the API and writes out the raw XML, as well as an optional JSON-formatted version.</p>
<p><a href="https://gist.github.com/nikdoof/8bb9de7f91aad8dcca8fb69c1f70f6ac">miniflux2opml.py</a></p>
<p>To run it, all you need to do is provide your instance URL (or the hosted version at <a href="https://reader.miniflux.app">https://reader.miniflux.app</a>) and your API token. It&rsquo;ll then connect to Miniflux and pull down the OPML to your console. If you want to write it out to a file, use <code>-o</code>, and if you want to write out the JSON-formatted version, use <code>-j</code>.</p>
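<p>As a rough illustration of what the script does, here is a hedged sketch of the fetch-and-convert step. This is not the actual <code>miniflux2opml.py</code>: the <code>/v1/export</code> endpoint path, the <code>X-Auth-Token</code> header, and the function names are my assumptions for illustration, though the underscore-prefixed attribute keys mirror the <code>_text</code>/<code>_htmlUrl</code> keys the Hugo shortcode reads.</p>

```python
#!/usr/bin/env python3
# Sketch of the OPML fetch and OPML-to-JSON conversion.
# ASSUMPTIONS: the endpoint path, header name, and all function
# names here are illustrative; the real miniflux2opml.py may differ.
import json
import urllib.request
import xml.etree.ElementTree as ET


def fetch_opml(base_url: str, token: str) -> str:
    """Pull the raw OPML export from a Miniflux instance (assumed endpoint)."""
    req = urllib.request.Request(
        f"{base_url}/v1/export", headers={"X-Auth-Token": token}
    )
    with urllib.request.urlopen(req) as resp:
        return resp.read().decode("utf-8")


def element_to_dict(elem: ET.Element):
    """Convert an XML element into a dict, prefixing attributes with an
    underscore so Hugo can tell them apart from child elements."""
    node = {f"_{k}": v for k, v in elem.attrib.items()}
    for child in elem:
        # Elements with children or attributes become nested dicts;
        # simple elements collapse to their text content.
        value = (
            element_to_dict(child)
            if len(child) or child.attrib
            else (child.text or "").strip()
        )
        if child.tag in node:
            # Repeated tags (e.g. several <outline> entries) become lists.
            if not isinstance(node[child.tag], list):
                node[child.tag] = [node[child.tag]]
            node[child.tag].append(value)
        else:
            node[child.tag] = value
    return node


def opml_to_json(xml_text: str) -> str:
    """Wrap the converted tree under an `opml` key, as the shortcode expects."""
    root = ET.fromstring(xml_text)
    return json.dumps({"opml": element_to_dict(root)}, indent=2)
```

The underscore convention is the important design choice: Hugo sees both XML attributes and child elements as map keys, so prefixing attributes avoids collisions between, say, an <code>outline</code> attribute and nested <code>outline</code> elements.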
<h2 id="hugo-shortcode">Hugo Shortcode</h2>
<p>Now that we&rsquo;ve got a script to extract the data into JSON format, we need to parse it in Hugo. Thankfully, Hugo supports arbitrary data files stored in the <code>data</code> directory of your site root; these can then be accessed via <code>.Site.Data</code> values within your templates.</p>
<p>Using that method, I created a shortcode under <code>layouts/shortcodes/blogroll.html</code>:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-html" data-lang="html"><span class="line"><span class="cl"><span class="p">&lt;</span><span class="nt">p</span><span class="p">&gt;&lt;</span><span class="nt">small</span><span class="p">&gt;</span>Last Updated: <span class="p">&lt;</span><span class="nt">b</span><span class="p">&gt;</span>{{ dateFormat &#34;2006-01-02 03:04&#34; (.Site.Data.feeds.opml.head.dateCreated | time) }}<span class="p">&lt;/</span><span class="nt">b</span><span class="p">&gt;&lt;/</span><span class="nt">small</span><span class="p">&gt;&lt;/</span><span class="nt">p</span><span class="p">&gt;</span>
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl">{{ $opmlJson := index .Site.Data.feeds &#34;opml&#34; &#34;body&#34; &#34;outline&#34; }}
</span></span><span class="line"><span class="cl">{{ range sort $opmlJson &#34;_text&#34; &#34;asc&#34; }}
</span></span><span class="line"><span class="cl">    <span class="p">&lt;</span><span class="nt">h2</span><span class="p">&gt;</span>{{ ._text }}<span class="p">&lt;/</span><span class="nt">h2</span><span class="p">&gt;</span>
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl">    {{ if gt (len .outline) 0 }}
</span></span><span class="line"><span class="cl">    {{ range .outline }}
</span></span><span class="line"><span class="cl">    <span class="p">&lt;</span><span class="nt">li</span><span class="p">&gt;&lt;</span><span class="nt">a</span> <span class="na">href</span><span class="o">=</span><span class="s">&#34;{{ ._htmlUrl }}&#34;</span> <span class="na">target</span><span class="o">=</span><span class="s">&#34;_blank&#34;</span><span class="p">&gt;</span>{{ ._title }}<span class="p">&lt;/</span><span class="nt">a</span><span class="p">&gt;&lt;/</span><span class="nt">li</span><span class="p">&gt;</span>
</span></span><span class="line"><span class="cl">    {{ end }}
</span></span><span class="line"><span class="cl">    {{ end }}
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl">{{ end }}
</span></span></code></pre></div><p>This can then be used in a simple post:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-markdown" data-lang="markdown"><span class="line"><span class="cl">---
</span></span><span class="line"><span class="cl">title: &#34;Blog Roll&#34;
</span></span><span class="line"><span class="cl">date: 2022-04-30
</span></span><span class="line"><span class="cl">draft: false
</span></span><span class="line"><span class="cl">toc: false
</span></span><span class="line"><span class="cl">images:
</span></span><span class="line"><span class="cl">tags:
</span></span><span class="line"><span class="cl">    <span class="k">-</span> blogroll
</span></span><span class="line"><span class="cl">referral: false
</span></span><span class="line"><span class="cl">---
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl">My blogroll is a list of sites and feeds I follow using [<span class="nt">Miniflux</span>](<span class="na">https://miniflux.app</span>). This list is auto generated on a weekly basis using a small Python tool that wrangles the XML OPML format into JSON that Hugo can use nicely.
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl">{{<span class="p">&lt;</span> <span class="nt">blogroll</span> <span class="p">&gt;</span>}}
</span></span></code></pre></div><p>You can see the result <a href="https://nikdoof.com/blogroll/">here</a>.</p>
<h2 id="automating-the-updates">Automating the updates</h2>
<p>I don&rsquo;t want to run this command manually every so often; ideally it should be automated and left on autopilot. I use GitHub for my repository, so I&rsquo;m able to make use of GitHub Actions workflows to regularly run the script and commit the output to the repository.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-yaml" data-lang="yaml"><span class="line"><span class="cl"><span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l">Pull OPML from Miniflux</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">on</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">workflow_dispatch</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">schedule</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="c"># * is a special character in YAML so you have to quote this string</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span>- <span class="nt">cron</span><span class="p">:</span><span class="w">  </span><span class="s1">&#39;0 5 * * 6&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">jobs</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">pull_opml</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">runs-on</span><span class="p">:</span><span class="w"> </span><span class="l">ubuntu-latest</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">steps</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span>- <span class="nt">uses</span><span class="p">:</span><span class="w"> </span><span class="l">actions/checkout@v2</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span>- <span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l">Download and Commit OPML</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">run</span><span class="p">:</span><span class="w"> </span><span class="p">|</span><span class="sd">
</span></span></span><span class="line"><span class="cl"><span class="sd">          tools/miniflux2opml.py -u https://rss.doofnet.uk -t &#39;${{ secrets.MINIFLUX_API_KEY }}&#39; -o static/feeds.opml -j data/feeds.json
</span></span></span><span class="line"><span class="cl"><span class="sd">          git config --global user.name &#39;Andrew Williams&#39;
</span></span></span><span class="line"><span class="cl"><span class="sd">          git config --global user.email &#39;nikdoof@users.noreply.github.com&#39;
</span></span></span><span class="line"><span class="cl"><span class="sd">          git add static/feeds.opml data/feeds.json
</span></span></span><span class="line"><span class="cl"><span class="sd">          git commit -am &#34;Update OPML&#34;
</span></span></span><span class="line"><span class="cl"><span class="sd">          git push</span><span class="w">          
</span></span></span></code></pre></div><p>With this, the feed is pulled and updated on a weekly basis, and my OPML file is also put into the <code>static</code> folder to ensure it&rsquo;s <a href="/feeds.opml">available to the public</a>. Now people can be subjected to the weird content I read in an automated way.</p>
]]></content></item><item><title>Two weeks with Supernotes</title><link>https://nikdoof.com/posts/2022/two-weeks-with-supernotes/</link><pubDate>Fri, 29 Apr 2022 12:06:05 +0100</pubDate><guid>https://nikdoof.com/posts/2022/two-weeks-with-supernotes/</guid><description>The Promise In my previous post I discussed switching over to Obsidian for my &amp;ldquo;PKM&amp;rdquo; with a plan to create a set of tooling around migrating from LogSeq. If anyone had checked out the promised repository then you would have seen that it is empty and has been untouched since the original post.
The issue I had is that I had no easy way to actually migrate my data over; every method was messy and ended up with me having to rewrite my notes into a handy format.</description><content type="html"><![CDATA[<h2 id="the-promise">The Promise</h2>
<p>In my <a href="https://nikdoof.com/posts/2021/note-taking/">previous post</a> I discussed switching over to Obsidian for my &ldquo;PKM&rdquo; with a plan to create a set of tooling around migrating from LogSeq. If anyone had checked out the promised repository then you would have seen that it is empty and has been untouched since the original post.</p>
<p>The issue I had is that there was no easy way to actually migrate my data over; every method was messy and ended up with me having to rewrite my notes into a usable format. I attempted this for a few days but eventually gave up and went back to LogSeq. The problems I discussed in the previous post are still there: while a mobile application is in development, we still have no simple way to sync between multiple systems. In the past month, LogSeq finally removed the last parts of the GitHub integration that allowed for easy pushing of changes into a git repo, and for me that killed the last remaining benefit.</p>
<p>So, I made the jump to Supernotes.</p>
<h2 id="supernotes">Supernotes</h2>




  

<figure style="padding: 0.25rem; margin: 2rem 0;">
  <img style="max-width: 100%; width: auto; height: auto;" src="/posts/2022/two-weeks-with-supernotes/screen-width_hua8ab8ba1ed0ea72e3316c94b8c455214_585907_900x0_resize_box_3.png" width="900" height="563">
  <figcaption>
  <small>
    
  </small>
  </figcaption>
</figure>
<p>I still don&rsquo;t know how I came across <a href="https://supernotes.app">Supernotes</a>. It&rsquo;s a note-taking application directed more towards research than the daily journaling / PKM method that I use, but it had a few key features that pulled me in:</p>
<ul>
<li>Forward and back linking.</li>
<li>A mobile application for iOS.</li>
<li>Markdown writing style.</li>
<li>A REST API.</li>
</ul>
<p>Supernotes uses a freemium model: you get 50 notes free, plus a few extra for each person you recommend, or you can pay £8/month for unlimited notes and access to the mobile apps on iOS and Android. I stumped up for the premium option out of the gate; while I could have worked with 50 notes, the mobile application was the killer feature for me, and at the moment it is still in TestFlight and requires a subscription.</p>
<p>After my migration frustrations with Obsidian I decided to just drop everything and start again, picking and converting any particularly useful notes as soon as I needed them. This caused a few days of frustration, having to switch back to LogSeq every so often, but I soon got to the point where I didn&rsquo;t need to open it anymore.</p>
<p>I&rsquo;ve been working with Supernotes as my primary tool for two weeks now, and here is what I&rsquo;ve found, in the common format of the good, the bad, and the ugly.</p>
<p><strong>NOTE</strong>: This list was made 2022-04-29; things may have changed or improved since then.</p>
<h3 id="the-good">The Good</h3>
<p>Supernotes&rsquo; note-card-based layout makes it incredibly easy to produce small snippets of useful information, steering you towards a Zettelkasten style of note-taking without much active thinking. Each note shows a small guide at the bottom of the number of characters remaining; this isn&rsquo;t a hard and fast limit, more a suggestion of what Supernotes thinks would produce a concise and useful note.</p>




  

<figure style="padding: 0.25rem; margin: 2rem 0;">
  <img style="max-width: 100%; width: auto; height: auto;" src="/posts/2022/two-weeks-with-supernotes/characters_hu75dad1b404877321a311440a7b2c8258_30402_900x0_resize_box_3.png" width="900" height="200">
  <figcaption>
  <small>
    
  </small>
  </figcaption>
</figure>
<p>Interlinking notes is easy, with a super useful command palette that is accessed with the <code>/</code> key. From there you can link, add a parent, give the card an icon or colour, and even insert simple templates. The palette is basic, but hopefully in the future they&rsquo;ll introduce a few more advanced features (more on that in the Bad).</p>




  

<figure style="padding: 0.25rem; margin: 2rem 0;">
  <img style="max-width: 100%; width: auto; height: auto;" src="/posts/2022/two-weeks-with-supernotes/daily_hu119d13af59f06bfbe1d8aaa141f2dfd5_108779_900x0_resize_box_3.png" width="900" height="220">
  <figcaption>
  <small>
    
  </small>
  </figcaption>
</figure>
<p>By default Supernotes includes two views, &lsquo;Daily&rsquo; and &lsquo;Thoughts&rsquo;. Daily shows you all notes produced on a given day; the top has a small calendar that allows you to browse back, and the dates are colour-coded to indicate the number of notes made on each day. Thoughts is the place for lost notes: ones that lack content, a tag, a parent, or are otherwise incomplete. If you throw a quick note into the application you don&rsquo;t have to worry about where it is stored; Thoughts will usually give you quick access to those snippets that need to be organised.</p>
<p>The outline on the left panel allows quick access to any notes flagged as Priority. Each note has a visibility: Priority shows the note in the outline, Visible makes it appear in search results and as a child note, and Invisible does exactly what it says, showing up only in very specific searches. Using these visibility settings you can build a quick-reference structure in the outline to give fast access to groups of notes.</p>
<p>If you wish to export your notes you have a selection of formats: basic Markdown, PDF, and a simple JSON format that includes all the metadata attached to your notes. Outstanding, especially when compared to some other services out there.</p>
<h3 id="the-bad">The Bad</h3>
<p>Going back to the outline and those visibility settings: visibility is a great tool for building a note structure in your outline, but Supernotes also has a &lsquo;Note board&rsquo; view, which shows a note and all its child notes in a quick-reference layout. Say, for example, you had all your notes relating to Ansible as children of an &lsquo;Ansible&rsquo; note, itself set to Visible as it is also a child of &lsquo;Technology&rsquo;, which is visible in the outline. Entering note board view on the Ansible note adjusts that note&rsquo;s visibility from Visible to Priority, making it appear in the outline.</p>
<p>Notes can be pinned to a right-side panel, making them always visible, but the panel itself is width-restricted. In fact, the app itself shows a lot of whitespace. While I understand they&rsquo;re styling the notes as note cards, it would be nice to have true fluid width as an option.</p>
<p>The screenshot at the top of this post isn&rsquo;t really a good example due to my MacBook&rsquo;s screen, but on my 4K screen running at 125% scaling the amount of whitespace in full screen is crazy. I think this could be improved, maybe with an option to fully expand some notes to show their complete content rather than the &lsquo;See more&rsquo; text.</p>
<p>Supernotes includes an export-to-PDF feature: select a note and you can export the result as a PDF for easy reference outside of Supernotes. The problem is that it exports only direct child notes, not grandchildren; I would like it to export all of them.</p>
<h3 id="the-ugly">The Ugly</h3>
<p>A global search is a frustrating omission; the search bar only searches the current context, so if you&rsquo;re in &lsquo;Daily&rsquo; it&rsquo;ll only show notes that would appear there. To search all your (visible) notes you need to click the &lsquo;Home&rsquo; tab first, then use the search bar.</p>
<p>Lastly, filters: while they could be useful, they usually end up causing confusion. At the time of writing I&rsquo;m hitting 240 notes in my system and I&rsquo;ve yet to find a case where they help. If you accidentally enable a filter for one reason or another it&rsquo;ll affect all views until it&rsquo;s disabled, and only a small coloured dot on the filter button is your clue that it&rsquo;s active. While I don&rsquo;t think anything is massively wrong with the idea, I feel the implementation is more a hindrance than a help.</p>
<h2 id="the-future">The Future</h2>
<p>Supernotes has only just hit version 2.0, and the developers are very active in the community forums. Their last patch to the application had nearly 30 tweaks and fixes, which shows they really care about this product. For now I&rsquo;ll be sticking with Supernotes as my primary note-taking tool, and I&rsquo;ll write a further review perhaps a year in.</p>
<p>I&rsquo;d be interested in hearing about other Supernotes users&rsquo; workflows, what works for you and what doesn&rsquo;t, so consider sending me a message on <a href="https://mastodon.social/web/@nikdoof">Mastodon</a>.</p>
]]></content></item><item><title>The future of dimension.sh</title><link>https://nikdoof.com/posts/2022/the-future-of-dimension/</link><pubDate>Sat, 09 Apr 2022 00:00:00 +0000</pubDate><guid>https://nikdoof.com/posts/2022/the-future-of-dimension/</guid><description>dimension.sh is nearly 2 years old. I initially started dimension as I wanted to rekindle a love of an old &amp;lsquo;pubnix&amp;rsquo; I was a member of back when I was in college; it ended up being a small community where I met a few good friends with whom I&amp;rsquo;m still in contact twenty years later. I happened to come across the tilde community one day and was reminded of this fun part of my life.</description><content type="html"><![CDATA[<p><a href="https://dimension.sh">dimension.sh</a> is nearly 2 years old. I initially started dimension as I wanted to rekindle a love of an old &lsquo;pubnix&rsquo; I was a member of back when I was in college; it ended up being a small community where I met a few good friends with whom I&rsquo;m still in contact twenty years later. I happened to come across the tilde community one day and was reminded of this fun part of my life.</p>
<p>Now approaching its second year, I think it&rsquo;s time for a little update on what&rsquo;s going on. In the last year we&rsquo;ve made some changes:</p>
<ul>
<li>Updated from CentOS 8 to CentOS Stream</li>
<li>Upped the resources from 1 CPU / 1GB to 2 CPU / 2GB</li>
<li>Reworked the website into a Hugo site</li>
<li>Set up true mirrors of the HTTP website in Gemini and Gopher (including all the functions!)</li>
</ul>
<p>In terms of uptime, we&rsquo;ve had no unexpected downtime outside of scheduled updates and patches. Thankfully, Digital Ocean provides a very stable platform to run a pubnix.</p>
<p>I&rsquo;ve got a few aims for our second year:</p>
<ol>
<li>Investigate a few extra services - NNTP with NNCP, Matrix, or XMPP.</li>
<li>Extend <code>/home</code> with a new disk.</li>
<li>Update the wiki with more details.</li>
<li>Grow the community.</li>
</ol>
<p>As always, we&rsquo;re taking feedback from members on any improvements. Thank you to the community that has grown around dimension; without you it wouldn&rsquo;t be worth running.</p>
]]></content></item><item><title>Home Assistant Power Monitoring</title><link>https://nikdoof.com/posts/2022/homeassistant-power-monitoring/</link><pubDate>Wed, 05 Jan 2022 12:07:08 +0000</pubDate><guid>https://nikdoof.com/posts/2022/homeassistant-power-monitoring/</guid><description>I&amp;rsquo;ve operated a relatively simple Home Assistant installation for the past 3 or so years; for the longest time it consisted of a few Hue bulbs, a hub, and a very slow Optiplex FX160 running Home Assistant and a few other services. Much like any small pet project the scope crept into its current form:
20 Hue devices 9 IKEA Tradfri devices. 3 Generic Zigbee sensors. 6 ESPHome Smart Plugs.</description><content type="html"><![CDATA[<p>I&rsquo;ve operated a relatively simple <a href="https://www.home-assistant.io">Home Assistant</a> installation for the past 3 or so years; for the longest time it consisted of a few <a href="https://www.philips-hue.com/en-gb">Hue</a> bulbs, a hub, and a very slow <a href="https://www.parkytowers.me.uk/thin/dell/fx160/">Optiplex FX160</a> running Home Assistant and a few other services. Much like any small pet project, the scope crept into its current form:</p>
<ul>
<li>20 Hue devices</li>
<li>9 IKEA Tradfri devices.</li>
<li>3 Generic Zigbee sensors.</li>
<li>6 ESPHome Smart Plugs.</li>
<li>1 ESPHome &ldquo;busylight&rdquo;.</li>
<li>2 ESPHome hacked IKEA air quality sensors.</li>
<li>3 Apple Home Pods (recently ousting some Echo devices).</li>
</ul>
<p>My home is <em>quite</em> automated. While people may think it&rsquo;s an &ldquo;<a href="https://mobile.twitter.com/internetofshit?lang=en">Internet of Shit</a>&rdquo; setup for bored techies, it actually keeps us from doing dumb stuff like leaving the 3kW heater in the living room on all night. This simple automation pushed me to investigate what other power savings could be had using Home Assistant and a few extra items.</p>
<p>In August 2021 the Home Assistant team added the <a href="https://www.home-assistant.io/blog/2021/08/04/home-energy-management/">Energy Dashboard</a>, a tool to give a single dashboard view of energy consumption and generation. The dashboard is primarily designed for those people who have home generation setups, such as a PV installation, but it is useful for tracking energy usage of the average home. After my experience with taming the living room heater I decided to try and get as many items in the house into this dashboard, and then cry at the true energy usage of the household.</p>
<p>The Energy dashboard makes use of sensors to track power usage; the key attributes are the <code>energy</code> device class and the unit <code>kWh</code>. If your device or integration produces a sensor with those two attributes then it will be available on the energy dashboard. Unfortunately, very few devices provide this, and some adjustments and additional tweaks to Home Assistant are required to make them visible.</p>
<h2 id="esphome-devices">ESPHome Devices</h2>
<p>I currently use two types of smart plugs in the home, but both are ESP8266-based Sonoff clones. They include a <code>hlw8012</code> chipset that allows for power sensing and provides amps, watts, and voltage to ESPHome without much configuration. The ESPHome framework has a software sensor called <a href="https://esphome.io/components/sensor/total_daily_energy.html">&ldquo;Total Daily Energy&rdquo;</a> that can use these values and output a cumulative kWh sensor that Home Assistant can use for the dashboard.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-yaml" data-lang="yaml"><span class="line"><span class="cl"><span class="nt">sensor</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="l">...</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span>- <span class="nt">platform</span><span class="p">:</span><span class="w"> </span><span class="l">total_daily_energy</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;${friendly_name} kWh&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">power_id</span><span class="p">:</span><span class="w"> </span><span class="l">power_consumption_watt</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">filters</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span>- <span class="nt">multiply</span><span class="p">:</span><span class="w"> </span><span class="m">0.001</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">unit_of_measurement</span><span class="p">:</span><span class="w"> </span><span class="l">kWh</span><span class="w">
</span></span></span></code></pre></div><p>In this case, <code>power_consumption_watt</code> is a common name I use for a sensor that outputs the current usage in watts. For the plugs it uses the <code>hlw8012</code>, but for other devices that don&rsquo;t have power sensing I make use of a static sensor:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-yaml" data-lang="yaml"><span class="line"><span class="cl"><span class="nt">interval</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span>- <span class="nt">interval</span><span class="p">:</span><span class="w"> </span><span class="l">1min</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">then</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span>- <span class="nt">sensor.template.publish</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">          </span><span class="nt">id</span><span class="p">:</span><span class="w"> </span><span class="l">power_consumption_watt</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">          </span><span class="nt">state</span><span class="p">:</span><span class="w"> </span><span class="l">$power_consumption</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">sensor</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span>- <span class="nt">platform</span><span class="p">:</span><span class="w"> </span><span class="l">template</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;${friendly_name} Power Usage&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">id</span><span class="p">:</span><span class="w"> </span><span class="l">power_consumption_watt</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">device_class</span><span class="p">:</span><span class="w"> </span><span class="l">power</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">state_class</span><span class="p">:</span><span class="w"> </span><span class="l">measurement</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">unit_of_measurement</span><span class="p">:</span><span class="w"> </span><span class="l">W</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">accuracy_decimals</span><span class="p">:</span><span class="w"> </span><span class="m">4</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">filters</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span>- <span class="nt">heartbeat</span><span class="p">:</span><span class="w"> </span><span class="l">60s</span><span class="w">
</span></span></span></code></pre></div><p>Every minute the interval timer publishes the value defined as <code>power_consumption</code> to the ID <code>power_consumption_watt</code>, and that ID is defined as a Template sensor providing watts. The value of <code>power_consumption</code> is defined in the per-device configuration as a <a href="https://esphome.io/guides/configuration-types.html#substitutions">substitution</a> and is based on monitoring the device with an external power monitor (or another smart plug).</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-yaml" data-lang="yaml"><span class="line"><span class="cl"><span class="nt">substitutions</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">devicename</span><span class="p">:</span><span class="w"> </span><span class="l">bedroom_air_quality</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">friendly_name</span><span class="p">:</span><span class="w"> </span><span class="l">Bedroom Air Quality</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">device_description</span><span class="p">:</span><span class="w"> </span><span class="l">Bedroom AQ Monitor</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">power_consumption</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;0.5696&#34;</span><span class="w">
</span></span></span></code></pre></div><h2 id="ipmi-servers">IPMI Servers</h2>
<p>For my sins I run a two-cluster vSphere homelab and a TrueNAS storage server; this small stack likes to guzzle power, and getting it visible within Home Assistant was a small challenge. I use Prometheus for metrics tracking and already harvest IPMI sensor information into it; among those sensors is the system&rsquo;s current power draw in watts. The issue I had is that Home Assistant doesn&rsquo;t have any built-in component to pull this data from Prometheus.</p>
<p>Thankfully, the community provided one. A user on GitHub, lfasci, created <a href="https://github.com/lfasci/homeassistant-prometheus-query">homeassistant-prometheus-query</a>, which gives the ability to run simple PromQL queries against your Prometheus instance and import the results into Home Assistant. The component has a few issues and is very much developed for the &ldquo;happy path&rdquo;, but after some trial and error I was able to create a sensor in Home Assistant with the following:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-yaml" data-lang="yaml"><span class="line"><span class="cl">- <span class="nt">platform</span><span class="p">:</span><span class="w"> </span><span class="l">prometheus_query</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l">Anshar Power Usage</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">unique_id</span><span class="p">:</span><span class="w"> </span><span class="l">anshar_power_usage</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">prometheus_url</span><span class="p">:</span><span class="w"> </span><span class="l">http://prometheus-server.monitoring</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">prometheus_query</span><span class="p">:</span><span class="w"> </span><span class="l">ipmi_sensor_value{name=&#34;pwr_consumption&#34;,server=&#34;anshar-idrac.int.doofnet.uk&#34;}</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">unit_of_measurement</span><span class="p">:</span><span class="w"> </span><span class="l">W</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">state_class</span><span class="p">:</span><span class="w"> </span><span class="l">measurement</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">device_class</span><span class="p">:</span><span class="w"> </span><span class="l">power</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span>- <span class="nt">platform</span><span class="p">:</span><span class="w"> </span><span class="l">integration</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">source</span><span class="p">:</span><span class="w"> </span><span class="l">sensor.anshar_power_usage</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l">Anshar Power Usage kWh</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">unit_prefix</span><span class="p">:</span><span class="w"> </span><span class="l">k</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">unit_time</span><span class="p">:</span><span class="w"> </span><span class="l">h</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">round</span><span class="p">:</span><span class="w"> </span><span class="m">2</span><span class="w">
</span></span></span></code></pre></div><p>The second part of the puzzle is the <a href="https://www.home-assistant.io/integrations/integration/">&ldquo;integration&rdquo; platform</a>. This takes the tracked watt value and converts it to kWh. It works by recalculating the total whenever the source sensor changes, so if your sensor is very stable and rarely changes you could see very spiky results from the platform.</p>
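<p>As a rough illustration of what the platform is doing (a sketch, not Home Assistant&rsquo;s actual implementation), summing trapezoids between successive watt readings gives kWh:</p>

```python
# Rough sketch of how a Riemann-sum ("integration") sensor turns watt
# readings into kWh. Illustration only, not Home Assistant's code.

def integrate_kwh(samples):
    """samples: time-ordered list of (timestamp_seconds, watts) tuples.
    Returns total energy in kWh using the trapezoidal rule."""
    total_joules = 0.0
    for (t0, w0), (t1, w1) in zip(samples, samples[1:]):
        # Average power over the interval, times its duration in seconds
        total_joules += (w0 + w1) / 2 * (t1 - t0)
    # 1 kWh = 3,600,000 joules
    return total_joules / 3_600_000

# A constant 1000 W load for one hour should integrate to exactly 1 kWh.
readings = [(0, 1000.0), (1800, 1000.0), (3600, 1000.0)]
print(integrate_kwh(readings))  # 1.0
```

<p>This also shows why a rarely-updating source sensor produces steppy results: no new sample means no new trapezoid, so the total only moves when a state change arrives.</p>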
<p>One issue to point out: if you&rsquo;re experimenting with your sensor query and it returns null or invalid data, you may have issues using the integration platform later. Once the data is stored in the <code>recorder</code> integration it&rsquo;ll base its calculations on what is available there, so if, for example, your unit of measurement is wrong then the resulting sensor will also be wrong.</p>
<h2 id="hue--ikea-tradfri">Hue &amp; IKEA Tradfri</h2>
<p>At the time of writing (2022-01-05), the configuration I have (using zigbee2mqtt) doesn&rsquo;t support providing any sort of power usage stats. I don&rsquo;t expect this to change in any way in the near future, but it may be possible to add fake sensors to Home Assistant, much like the hacked air quality sensors, to produce the required kWh output. You could potentially measure a bulb&rsquo;s wattage using an external tool, then publish watt values depending on whether the bulb is on or off in Home Assistant. This feels like it would be best done by a Z2M extension, or even something that uses MQTT to pull the state and publish the wattage values.</p>
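<p>A minimal sketch of that fake-sensor idea: map the bulb&rsquo;s on/off state (and brightness) to an estimated wattage, which a small MQTT script could then publish for Home Assistant to integrate. The wattage figures here are made-up examples, not measured values:</p>

```python
# Estimate a smart bulb's power draw from its reported state. The figures
# below are hypothetical placeholders; measure your own bulbs with an
# external power monitor before using numbers like these.

STANDBY_W = 0.4   # assumed draw while "off" but still powered
MAX_W = 9.0       # assumed draw at full brightness

def estimated_watts(is_on, brightness=255):
    """Return an estimated power draw in watts.
    brightness is 0-255, as zigbee2mqtt typically reports it."""
    if not is_on:
        return STANDBY_W
    # Naive linear model: standby draw plus brightness-scaled load
    return STANDBY_W + (MAX_W - STANDBY_W) * (brightness / 255)

print(estimated_watts(False))      # 0.4
print(estimated_watts(True, 255))  # 9.0
```

<p>Feeding values like these into a template sensor with the <code>power</code> device class would let the integration platform above turn them into kWh, just like a real measuring plug.</p>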
<p>For the moment, that gives some per-device statistics and integrates them nicely into the Home Assistant energy panel.</p>




  

<figure style="padding: 0.25rem; margin: 2rem 0;">
  <img style="max-width: 100%; width: auto; height: auto;" src="/posts/2022/homeassistant-power-monitoring/power_hu5fbe2ab16244295af66cb1aefb50a1d9_152425_900x0_resize_box_3.png" width="900" height="547">
  <figcaption>
  <small>
    
  </small>
  </figcaption>
</figure>
<p>My next project is to look into monitoring the meter itself to get a total household usage into Home Assistant, but that will be another post.</p>
]]></content></item><item><title>Home Assistant, Kubernetes, and Traefik</title><link>https://nikdoof.com/posts/2021/homeassistant-kubernetes-and-traefik/</link><pubDate>Fri, 03 Dec 2021 15:02:21 +0000</pubDate><guid>https://nikdoof.com/posts/2021/homeassistant-kubernetes-and-traefik/</guid><description>I&amp;rsquo;ve hosted my Home Assistant install on Kubernetes for quite a while, using a basic network setup of Kube Router, MetalLB, and Traefik. As part of an upgrade cycle I decided to build out a new cluster making use of a CSI plugin for iSCSI provisioning on FreeNAS, and also HAProxy hosted on a pfSense instance.
I copied my configs over to the new cluster, and Home Assistant got in a bit of a tizz.</description><content type="html"><![CDATA[<p>I&rsquo;ve hosted my Home Assistant install on Kubernetes for quite a while, using a basic network setup of Kube Router, MetalLB, and Traefik. As part of an upgrade cycle I decided to build out a new cluster, making use of a CSI plugin for iSCSI provisioning on FreeNAS, and HAProxy hosted on a pfSense instance.</p>
<p>I copied my configs over to the new cluster, and Home Assistant got in a bit of a tizz. No matter what I did, I couldn&rsquo;t log in to the UI. All that came up was the message: <strong>“Login aborted: Your computer is not allowed”</strong></p>
<p>My Home Assistant config was correct, as far as I knew:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-yaml" data-lang="yaml"><span class="line"><span class="cl"><span class="nt">homeassistant</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="l">...</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">auth_providers</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span>- <span class="nt">type</span><span class="p">:</span><span class="w"> </span><span class="l">trusted_networks</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">trusted_networks</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span>- <span class="m">10.101.0.0</span><span class="l">/16</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">http</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">server_host</span><span class="p">:</span><span class="w"> </span><span class="m">0.0.0.0</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">ip_ban_enabled</span><span class="p">:</span><span class="w"> </span><span class="kc">false</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">login_attempts_threshold</span><span class="p">:</span><span class="w"> </span><span class="m">100</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">use_x_forwarded_for</span><span class="p">:</span><span class="w"> </span><span class="kc">true</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">trusted_proxies</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span>- <span class="m">10.85.0.0</span><span class="l">/16</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span>- <span class="m">10.101.10.0</span><span class="l">/24</span><span class="w">
</span></span></span></code></pre></div><p>The nodes are on <code>10.101.10.0/24</code>, the Pod network is <code>10.85.0.0/16</code>, and the clients are in <code>10.101.0.0/16</code>. !?!?</p>
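<p>The membership checks themselves are easy to reproduce with Python&rsquo;s <code>ipaddress</code> module, which is roughly the kind of test the <code>trusted_networks</code> provider performs (a sketch, not Home Assistant&rsquo;s actual code, with example client/pod addresses):</p>

```python
# Quick sanity check of the network maths: are the client, pod, and node
# addresses actually inside the configured trusted ranges?
from ipaddress import ip_address, ip_network

TRUSTED_NETWORKS = [ip_network("10.101.0.0/16")]
TRUSTED_PROXIES = [ip_network("10.85.0.0/16"), ip_network("10.101.10.0/24")]

def is_trusted(addr, networks):
    """True if addr falls inside any of the given networks."""
    return any(ip_address(addr) in net for net in networks)

print(is_trusted("10.101.50.20", TRUSTED_NETWORKS))  # True  (a LAN client)
print(is_trusted("10.85.3.7", TRUSTED_PROXIES))      # True  (a pod IP)
print(is_trusted("10.85.3.7", TRUSTED_NETWORKS))     # False (pod, not client)
```

<p>On paper the ranges all check out, which is exactly why the rejection was so confusing.</p>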
<p>Using the good ol&rsquo; inspector within Safari I was able to pull out the response from the request; <code>login_flow</code> returned:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-json" data-lang="json"><span class="line"><span class="cl"><span class="p">{</span>
</span></span><span class="line"><span class="cl">    <span class="nt">&#34;type&#34;</span><span class="p">:</span> <span class="s2">&#34;abort&#34;</span><span class="p">,</span>
</span></span><span class="line"><span class="cl">    <span class="nt">&#34;flow_id&#34;</span><span class="p">:</span> <span class="s2">&#34;4c8575ddc18b4d5b83565c08420d3093&#34;</span><span class="p">,</span>
</span></span><span class="line"><span class="cl">    <span class="nt">&#34;handler&#34;</span><span class="p">:</span> <span class="p">[</span>
</span></span><span class="line"><span class="cl">        <span class="s2">&#34;trusted_networks&#34;</span><span class="p">,</span>
</span></span><span class="line"><span class="cl">        <span class="kc">null</span>
</span></span><span class="line"><span class="cl">    <span class="p">],</span>
</span></span><span class="line"><span class="cl">    <span class="nt">&#34;reason&#34;</span><span class="p">:</span> <span class="s2">&#34;not_allowed&#34;</span><span class="p">,</span>
</span></span><span class="line"><span class="cl">    <span class="nt">&#34;description_placeholders&#34;</span><span class="p">:</span> <span class="kc">null</span>
</span></span><span class="line"><span class="cl"><span class="p">}</span>
</span></span></code></pre></div><p>Not a massively useful error. Digging into <a href="https://github.com/home-assistant/core/blob/e30e4d5c6d7e560ceb7c3eca6d1d4d3b14b7b356/homeassistant/auth/providers/trusted_networks.py#L241">the code</a> shows that it&rsquo;s catching an exception and returning a generic error. I adjusted the code in the container to surface a bit more detail about what was going on, by passing through the exception message itself.</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-python" data-lang="python"><span class="line"><span class="cl">        <span class="k">except</span> <span class="n">InvalidAuthError</span> <span class="k">as</span> <span class="n">exc</span><span class="p">:</span>                                                                                      
</span></span><span class="line"><span class="cl">            <span class="k">return</span> <span class="bp">self</span><span class="o">.</span><span class="n">async_abort</span><span class="p">(</span><span class="n">reason</span><span class="o">=</span><span class="nb">str</span><span class="p">(</span><span class="n">exc</span><span class="p">))</span>    
</span></span></code></pre></div><p>Now, it was complaining of <strong>&ldquo;Can&rsquo;t allow access from a proxy server&rdquo;</strong>, huh? <code>X-Forwarded-For</code> is switched on in Home Assistant, but the client IP address was showing as one of the node&rsquo;s IPs, not the end user&rsquo;s IP. Then it suddenly dawned on me.</p>
<p>On my previous cluster, Traefik had been the edge, taking connections directly from client systems, but now it was receiving requests via HAProxy, which knew the real client IPs and passed them along in headers. By default, Traefik will not trust certain headers it receives and will rewrite them to what it understands to be the correct values.</p>
<p>The fix? Allowing Traefik to trust those headers. This can be done with the following additions to the Helm chart values:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-yaml" data-lang="yaml"><span class="line"><span class="cl"><span class="w">    </span><span class="nt">additionalArguments</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span>- <span class="s2">&#34;--entryPoints.web.proxyProtocol.insecure&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span>- <span class="s2">&#34;--entryPoints.web.forwardedHeaders.insecure&#34;</span><span class="w">
</span></span></span></code></pre></div><p>Further information can be found on the <a href="https://doc.traefik.io/traefik/routing/entrypoints/#forwarded-headers">Traefik documentation site</a></p>
]]></content></item><item><title>Note Taking</title><link>https://nikdoof.com/posts/2021/note-taking/</link><pubDate>Fri, 16 Jul 2021 09:42:00 -0100</pubDate><guid>https://nikdoof.com/posts/2021/note-taking/</guid><description>For some time I&amp;rsquo;ve been using LogSeq as my primary note-taking tool. Essentially an open-source version of Roam Research, it quickly became a prime choice due to its offline-first design. Over the past year it has been my source of reference and a method to store fleeting knowledge in a way I can easily access.
Like all good software tools, over time everything develops competitors. Roam gave birth to multiple tools in its wake, and many other tools gave birth to Roam.</description><content type="html"><![CDATA[<p>For some time I&rsquo;ve been using <a href="https://logseq.com">LogSeq</a> as my primary note-taking tool. Essentially an open-source version of <a href="https://roamresearch.com">Roam Research</a>, it quickly became a prime choice due to its offline-first design. Over the past year it has been my source of reference and a method to store fleeting knowledge in a way I can easily access.</p>
<p>Like all good software tools, over time everything develops competitors. Roam gave birth to multiple tools in its wake, and many other tools gave birth to Roam. All of them have been missing some sort of killer feature for me. That was, until last week.</p>
<p><a href="https://obsidian.md">Obsidian</a> has been on my radar for some time, primarily desktop based and more of a general note taking tool than outliner, I had always played a little with it and then moved back to LogSeq. Obsidian&rsquo;s big selling point is its modularity and a wealth of community plugins that bring new and exciting features into a normal note taking tool. Since my workflow mostly consisted around daily journals and outlining I was happy to see that Obsidian finally had plugins to support it.</p>
<p>Last week, Obsidian announced the final missing piece of the jigsaw: a <a href="https://obsidian.md/mobile">mobile application</a>. Combined with their paid-for sync service, you can now have access to your notes on the go! I think it&rsquo;s finally time for me to move away from LogSeq.</p>
<p>The problem is, I&rsquo;ve got nearly a year of notes in LogSeq: 3090 commits to the repo, hundreds of journals, hundreds of pages. Thankfully, a recent LogSeq update allowed conversion from their &ldquo;LogSeq Markdown&rdquo; to a more standard flavour, but Obsidian doesn&rsquo;t support Frontmatter and it&rsquo;ll require some sort of manipulation to convert between the two systems easily.</p>
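<p>As an illustration of the kind of manipulation needed (a hypothetical sketch, not any real migration tool&rsquo;s code), LogSeq&rsquo;s <code>key:: value</code> page properties could be rewritten into a standard frontmatter block along these lines:</p>
<pre tabindex="0"><code>import re

def props_to_frontmatter(text):
    # Convert leading LogSeq-style 'key:: value' property lines
    # into a YAML frontmatter block; leave other pages untouched.
    lines = text.splitlines()
    props = []
    body = []
    for i, line in enumerate(lines):
        m = re.match(r'^([\w-]+)::\s*(.*)$', line)
        if m:
            props.append('%s: %s' % m.groups())
        else:
            body = lines[i:]
            break
    if not props:
        return text
    return '---\n' + '\n'.join(props) + '\n---\n' + '\n'.join(body)
</code></pre>
<p>Applied to a page starting with <code>title::</code> and <code>tags::</code> lines, this yields a <code>---</code>-delimited block at the top of the file; the real migration would need to handle plenty of other cases on top of this.</p>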
<p>So today, I&rsquo;ve decided that I&rsquo;ll work on a set of tooling to allow for easy migration, which I&rsquo;ve dubbed &ldquo;ObsSeq&rdquo;. The repository is created and the license is set to MIT, and hopefully over the next few weeks I can provide some decent tooling to assist with the migration.</p>
<p><a href="https://github.com/nikdoof/obsseq">ObsSeq</a></p>
]]></content></item><item><title>Fixing a Game Gear - Part 1</title><link>https://nikdoof.com/posts/2021/fixing-a-game-gear-part-1/</link><pubDate>Wed, 30 Jun 2021 22:33:00 -0100</pubDate><guid>https://nikdoof.com/posts/2021/fixing-a-game-gear-part-1/</guid><description>I have a need to re-live my youth through the purchase of expensive old technology.
When I was younger, much, much younger, my prized possession was a Sega Game Gear. At the time it was the pinnacle of gaming technology; a colour backlit screen, a &amp;lsquo;powerful&amp;rsquo; processor equal to the Master System. The device went along everywhere with me and I chewed through batteries like there was no tomorrow.
One day, I loaned it to a friend of mine, as he wanted to play through Crystal Warriors, and I never saw it again.</description><content type="html"><![CDATA[<p>I have a need to re-live my youth through the purchase of expensive old technology.</p>
<p>When I was younger, much, much younger, my prized possession was a <a href="https://en.wikipedia.org/wiki/Game_Gear">Sega Game Gear</a>. At the time it was the pinnacle of gaming technology; a colour backlit screen, a &lsquo;powerful&rsquo; processor equal to the <a href="https://en.wikipedia.org/wiki/Master_System">Master System</a>. The device went along everywhere with me and I chewed through batteries like there was no tomorrow.</p>
<p>One day, I loaned it to a friend of mine, as he wanted to play through <a href="https://en.wikipedia.org/wiki/Crystal_Warriors">Crystal Warriors</a>, and I never saw it again. It turns out that his father had got addicted to <a href="https://en.wikipedia.org/wiki/Columns_(video_game)">Columns</a> and decided to take the Game Gear on his work trips with him. After a while I heard that it had been mangled in an accident, but I never got to see the remains.</p>
<p>In the last few years I&rsquo;ve been collecting Gameboy consoles, mostly due to my love of Pokemon, and last week I came upon an extremely cheap Game Gear on eBay, just £30. Game Gears have a terrible reputation for reliability; the capacitors don&rsquo;t survive well over the years, and it&rsquo;s well known that if you find a working Game Gear without modifications, you&rsquo;ve hit gold. I took my chances and purchased it, with no idea what I was getting into.</p>
<p><a href="https://www.ifixit.com/Guide/Sega+Game+Gear+Capacitor+Replacement/113655">Game Gear Capacitor Replacement</a></p>
<p>First of all, the seller listed the item as powering on but with a &ldquo;broken&rdquo; screen; again, this could be due to many reasons and isn&rsquo;t something to write the device off over. My first concern came when I opened the box: the first thing I saw was Japanese newspaper, yet the device had been bought from within the UK. Maybe they bought it from Japan, then realised that it was either beyond repair, or beyond their expertise, or just too expensive to get fixed.</p>




  

<figure style="padding: 0.25rem; margin: 2rem 0;">
  <img style="max-width: 100%; width: auto; height: auto;" src="/posts/2021/fixing-a-game-gear-part-1/gamegear1_hufddec1286e0791bf0ad89b67d8805c94_2381313_900x0_resize_q75_box.jpg" width="900" height="675">
  <figcaption>
  <small>
    
  </small>
  </figcaption>
</figure>
<p>The outside looks OK; it&rsquo;s had some heavy use but nothing too drastic. Then I opened the battery compartments:</p>




  

<figure style="padding: 0.25rem; margin: 2rem 0;">
  <img style="max-width: 100%; width: auto; height: auto;" src="/posts/2021/fixing-a-game-gear-part-1/gamegear2_hu72b09871a549b8f2a77b2485b00d0f15_2385642_900x0_resize_q75_box.jpg" width="900" height="675">
  <figcaption>
  <small>
    
  </small>
  </figcaption>
</figure>
<p>Oh no, that doesn&rsquo;t look good. Thankfully this was the worst damage in the entire device; it seems it had been left with some rotting batteries for quite some time, and they had rusted away the battery connectors. The Game Gear requires six AA batteries run in series to get the 9V required by the console, so these pads are not the main connectors to the power board, phew.</p>




  

<figure style="padding: 0.25rem; margin: 2rem 0;">
  <img style="max-width: 100%; width: auto; height: auto;" src="/posts/2021/fixing-a-game-gear-part-1/gamegear3_hu45b3bc9593412956c0a5ab43a2eb9ec4_3292454_900x0_resize_q75_box.jpg" width="900" height="675">
  <figcaption>
  <small>
    
  </small>
  </figcaption>
</figure>
<p>The power board didn&rsquo;t look bad at all; it has some water marks but no obvious damage. I don&rsquo;t intend on keeping this board, instead replacing it with a <a href="https://retrosix.co.uk/CleanPower-GG-USB-C-Game-Gear-Power-Regulator-p241798108">CleanPower GG</a>, a drop-in replacement that has a USB-C port for easy connection to a power bank, rather than lugging around a 9V wall wart or six AA batteries and spares. I ordered mine from RetroSix, a UK retro console parts supplier; delivery took 2 days, which is great for a smaller company.</p>
<p>After I received the new power board I wasted no time getting it in the device, the Game Gear&rsquo;s boards are all connected via simple connectors so it was a quick swap out. Then, it came time to power the device on for the first time.</p>
<p>I flicked the switch and the backlight came on, but nothing appeared on the screen. I noticed the power LED hadn&rsquo;t lit up at all; after a little bit of poking the console sprang into life, screen and LED both on, then as quickly as it came on it died again. It must be the capacitors.</p>
<p>So that is where I&rsquo;m up to at the moment: I&rsquo;m currently deciding between spending £50 on a repair service to replace the caps, or £90 on a good soldering iron to try it myself.</p>
]]></content></item><item><title>Antenna</title><link>https://nikdoof.com/posts/2021/antenna/</link><pubDate>Wed, 30 Jun 2021 13:00:00 -0100</pubDate><guid>https://nikdoof.com/posts/2021/antenna/</guid><description>I&amp;rsquo;m a semi-active participant in the #gemini IRC channel on Tilde.chat, and today I noticed that ew0k had announced Antenna. The idea of Antenna is to avoid polling of Atom feeds, where back-offs and downtimes can be detrimental to services trying to combine feeds from several sites. Instead, sites push their updates to the service, much like ping-backs of the old blogging days. Updates to Antenna are then focused on receiving these pings and updating their data, rather than the constant battle of polling and scraping.</description><content type="html"><![CDATA[<p>I&rsquo;m a semi-active participant in the #gemini IRC channel on Tilde.chat, and today I noticed that <a href="gemini://warmedal.se/~bjorn/">ew0k</a> had announced <a href="gemini://warmedal.se/~bjorn/posts/announcing-antenna.gmi">Antenna</a>. The idea of Antenna is to avoid polling of Atom feeds, where back-offs and downtimes can be detrimental to services trying to combine feeds from several sites. Instead, sites push their updates to the service, much like ping-backs of the old blogging days. Updates to Antenna are then focused on receiving these pings and updating their data, rather than the constant battle of polling and scraping.</p>
<p>As of this post, I&rsquo;ve added a call into my site build script. Hopefully, this service will present a more active view of the Gemspace community and allow some sites that have slipped through the nets of CAPCOM to get noticed.</p>
]]></content></item><item><title>Using Kiln</title><link>https://nikdoof.com/posts/2021/using-kiln/</link><pubDate>Tue, 29 Jun 2021 00:00:00 +0000</pubDate><guid>https://nikdoof.com/posts/2021/using-kiln/</guid><description>Setting up Kiln for this site has been quite the adventure. I&amp;rsquo;ve been using Hugo for a few years and have grown accustomed to its way of working; ideally I would have continued using it for this site, but getting Hugo to support Gemini markup is difficult at best. Kiln is in that sweet spot of pure functionality and simplicity, keeping the key tools from other static site generators, like templates and link building, but using the built-in Go template language and a few extra actions to keep it simple.</description><content type="html"><![CDATA[<p>Setting up <a href="https://git.sr.ht/~adnano/kiln">Kiln</a> for this site has been quite the adventure. I&rsquo;ve been using <a href="https://gethugo.com">Hugo</a> for a few years and have grown accustomed to its way of working; ideally I would have continued using it for this site, but getting Hugo to support Gemini markup is difficult at best. Kiln is in that sweet spot of pure functionality and simplicity, keeping the key tools from other static site generators, like templates and link building, but using the built-in Go template language and a few extra actions to keep it simple.</p>
<p>I&rsquo;m a newbie to Go development, but the code for Kiln is simple and easy to understand. I encountered a strange bug on <a href="https://dimension.sh">dimension.sh</a> that would cause Kiln to SEGV on a build due to path walking; I&rsquo;ve still not been able to identify <em>why</em>, but I was at least able to identify where the error occurred and how to mitigate it for the meanwhile.</p>
<p>Since I&rsquo;ve been digging deep into the tool and how to configure it, I&rsquo;ve been keeping notes of anything useful I&rsquo;ve discovered. The documentation for Kiln is sparse at the moment but enough to get people working; hopefully over time it&rsquo;ll improve. I still have to navigate SourceHut&rsquo;s patch-by-mailing-list submission process, something I&rsquo;ve not done for the best part of 15 years.</p>
<p><a href="/~nikdoof/notes/kiln.gmi">Kiln Notes</a></p>
]]></content></item><item><title>Zigbee2MQTT on Kubernetes</title><link>https://nikdoof.com/posts/2021/zigbee2mqtt-on-kubernetes/</link><pubDate>Mon, 05 Apr 2021 14:50:53 +0100</pubDate><guid>https://nikdoof.com/posts/2021/zigbee2mqtt-on-kubernetes/</guid><description>For some time, I&amp;rsquo;ve been using a Zig-a-zig-ah and Zigbee2MQTT on a spare Raspberry Pi 2. It was far from the most stable platform as the zzh sucked power and caused voltage issues with the Pi, but it worked for what I needed. After a few months, I was frustrated at the frequent restarts the system required. The voltage drops would cause just enough of an issue for Z2M to stop communicating with the zzh and break all of my home automation.</description><content type="html"><![CDATA[<p>For some time, I&rsquo;ve been using a <a href="https://electrolama.com/projects/zig-a-zig-ah/">Zig-a-zig-ah</a> and <a href="https://www.zigbee2mqtt.io/">Zigbee2MQTT</a> on a spare <a href="https://www.raspberrypi.org/blog/raspberry-pi-2-on-sale/">Raspberry Pi 2</a>. It was far from the most stable platform as the zzh sucked power and caused voltage issues with the Pi, but it worked for what I needed. After a few months, I was frustrated at the frequent restarts the system required. The voltage drops would cause just enough of an issue for Z2M to stop communicating with the zzh and break all of my home automation. I needed a stable platform, which I decided to over-engineer.</p>
<p>I started using Kubernetes about a year ago when I needed to learn the platform for some upcoming projects in my day job. As a good test ground, I moved the vast majority of my home workloads over to a small cluster of one master and three workers, the obvious issues of persistent storage and ingress aside. It was relatively easy and is now in a state where it&rsquo;s stable enough to use for more critical items. To ease configuration, I make use of <a href="https://fluxcd.io/">Flux v2</a> to manage <a href="https://helm.sh">Helm</a> deployments and other objects using a single GitHub repository.</p>
<h3 id="the-hardware">The Hardware</h3>
<p>The node I&rsquo;m going to be using to host the adapter is a relatively low-powered system, a <a href="https://www.dell.com/downloads/global/products/optix/en/desktop-optiplex-160-customer-brochure-en.pdf">Optiplex 160</a>, an SFF (small form factor) Atom-based PC. One problem is that the system itself isn&rsquo;t powerful enough to run Zigbee2MQTT. Thankfully, Z2M supports <a href="https://www.zigbee2mqtt.io/how_tos/how_to_connect_to_a_remote_adapter.html">remote devices over TCP sockets</a>, so I&rsquo;ll be able to run a small pod acting as a gateway to the device on that hardware and use the faster VM nodes to run Z2M itself.</p>
<p>First of all, I needed a consistent device name for the adapter. This step isn&rsquo;t critical, as you could use the full device path, but giving it a consistent name (as defined by udev) makes the configuration a little more user-friendly. On the node I created <code>/etc/udev/rules.d/zigbee-controller.rules</code> with the following:</p>
<pre tabindex="0"><code># CC2531
KERNEL==&#34;ttyACM*&#34;, ATTRS{idVendor}==&#34;0451&#34;, ATTRS{idProduct}==&#34;16a8&#34;, MODE=&#34;0666&#34;, SYMLINK+=&#34;zigbee1&#34;
# Zig-a-zig-ah
KERNEL==&#34;ttyUSB*&#34;, ATTRS{idVendor}==&#34;1a86&#34;, ATTRS{idProduct}==&#34;7523&#34;, MODE=&#34;0666&#34;, SYMLINK+=&#34;zigbee2&#34;
</code></pre><p>I have two controllers of a different type, so I created rules to export them as <code>zigbee1</code> and <code>zigbee2</code>. A quick reboot and these devices now appear as their correct names:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-shell" data-lang="shell"><span class="line"><span class="cl">% ls -1 /dev/zigbee*
</span></span><span class="line"><span class="cl">/dev/zigbee1
</span></span><span class="line"><span class="cl">/dev/zigbee2
</span></span></code></pre></div><p>I am going to be making use of the <a href="https://github.com/k8s-at-home/charts/tree/master/charts/stable/ser2sock">ser2sock Helm chart</a> from <a href="https://github.com/k8s-at-home/charts">K8s-at-home</a>, this deploys a small application to proxy the serial port to a TCP socket, which Zigbee2MQTT can use. But before I could deploy it, I needed to make sure that the pod can find the hardware and be assigned to the same node every time.</p>
<p>By default with Kubernetes, your application will be scheduled on any worker node that is operational and not cordoned off. This presents a problem when dealing with physical hardware: it is only available on one particular node, and you need to ensure that the application is run on that node. Kubernetes supports the concept of labelling, where a node can be tagged with a text string, and a Pod definition can require a set of labels that it expects of the node it&rsquo;ll be running on. Using <code>kubectl</code> it&rsquo;s a quick change to the node:</p>
<pre tabindex="0"><code>% kubectl label node k8s-node04 doofnet.uk/device=zigbee-controller
% kubectl get node k8s-node04 --show-labels
NAME         STATUS                     ROLES    AGE   VERSION   LABELS
k8s-node04   Ready,SchedulingDisabled   &lt;none&gt;   40d   v1.20.4   doofnet.uk/device=zigbee-controller
</code></pre><p>So now I have the elements I need to write a Helm release. The device will be linked as <code>/dev/zigbee2</code> and available on a node labelled with <code>doofnet.uk/device=zigbee-controller</code>. So I create a <code>HelmRelease</code> object to deploy <code>ser2sock</code>:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-yaml" data-lang="yaml"><span class="line"><span class="cl"><span class="nn">---</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">apiVersion</span><span class="p">:</span><span class="w"> </span><span class="l">helm.toolkit.fluxcd.io/v2beta1</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">kind</span><span class="p">:</span><span class="w"> </span><span class="l">HelmRelease</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">metadata</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l">zigbee2</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">namespace</span><span class="p">:</span><span class="w"> </span><span class="l">ha</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">spec</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">interval</span><span class="p">:</span><span class="w"> </span><span class="l">5m</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">chart</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">spec</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">chart</span><span class="p">:</span><span class="w"> </span><span class="l">ser2sock</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">version</span><span class="p">:</span><span class="w"> </span><span class="s1">&#39;2.0.3&#39;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">sourceRef</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">kind</span><span class="p">:</span><span class="w"> </span><span class="l">HelmRepository</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l">k8s-at-home</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">namespace</span><span class="p">:</span><span class="w"> </span><span class="l">flux-system</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">interval</span><span class="p">:</span><span class="w"> </span><span class="l">1m</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">values</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">device</span><span class="p">:</span><span class="w"> </span><span class="l">/dev/zigbee2</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">tolerations</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span>- <span class="nt">effect</span><span class="p">:</span><span class="w"> </span><span class="l">NoSchedule</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">operator</span><span class="p">:</span><span class="w"> </span><span class="l">Exists</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span>- <span class="nt">key</span><span class="p">:</span><span class="w"> </span><span class="l">CriticalAddonsOnly</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">operator</span><span class="p">:</span><span class="w"> </span><span class="l">Exists</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span>- <span class="nt">effect</span><span class="p">:</span><span class="w"> </span><span class="l">NoExecute</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">operator</span><span class="p">:</span><span class="w"> </span><span class="l">Exists</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">affinity</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">nodeAffinity</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">requiredDuringSchedulingIgnoredDuringExecution</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">          </span><span class="nt">nodeSelectorTerms</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">          </span>- <span class="nt">matchExpressions</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">            </span>- <span class="nt">key</span><span class="p">:</span><span class="w"> </span><span class="l">doofnet.uk/device</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">              </span><span class="nt">operator</span><span class="p">:</span><span class="w"> </span><span class="l">In</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">              </span><span class="nt">values</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">              </span>- <span class="l">zigbee-controller</span><span class="w">
</span></span></span></code></pre></div><p>Another option I&rsquo;ve put in the release is the list of tolerations. My physical node is very underpowered and I don&rsquo;t want any other workloads on it, so I marked the node as <code>ScheduleDisable</code>. By adding the tolerations I can have this pod ignore the restrictions and still run on the node.</p>
<p>Once pushed to Flux and deployed by Helm you&rsquo;ll see a new pod appear called <code>zigbee2-ser2sock</code>, and if you check the logs you&rsquo;ll be able to see if it found the hardware and started correctly:</p>
<pre tabindex="0"><code>ser2sock Serial 2 Socket Relay version V1.5.5 starting
ser2sock Listening socket created on port 10000
ser2sock Start wait loop using ser2sock communication mode
ser2sock Opened com port at /dev/ttyUSB0
ser2sock Setting speed 115200
ser2sock Set speed successful
</code></pre><p>The pod definition, created by the Helm chart, maps the device name you provided (<code>/dev/zigbee2</code>) to <code>/dev/ttyUSB0</code> for ease of configuration. Initially this confused me; it&rsquo;s always worth checking the chart definitions to see exactly which value is being used where.</p>
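<p>On the Zigbee2MQTT side, a remote adapter is then referenced with a <code>tcp://</code> URL instead of a device path. A minimal sketch of the relevant configuration, assuming the in-cluster service name follows the release name and the default ser2sock port from the logs above:</p>
<pre tabindex="0"><code>serial:
  # hostname assumed from the release name and namespace; port
  # from ser2sock&#39;s &#34;Listening socket created on port 10000&#34;
  port: tcp://zigbee2-ser2sock.ha.svc.cluster.local:10000
</code></pre>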
<h3 id="zigbee2mqtt">Zigbee2MQTT</h3>
<p>Now that the adapter is available via TCP using <code>ser2sock</code>, I can deploy Zigbee2MQTT. K8s-at-home also has a chart ready to deploy the application, so I make use of it:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-yaml" data-lang="yaml"><span class="line"><span class="cl"><span class="nn">---</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">apiVersion</span><span class="p">:</span><span class="w"> </span><span class="l">helm.toolkit.fluxcd.io/v2beta1</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">kind</span><span class="p">:</span><span class="w"> </span><span class="l">HelmRelease</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">metadata</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l">zigbee2mqtt-zigbee2</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">namespace</span><span class="p">:</span><span class="w"> </span><span class="l">ha</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">spec</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">interval</span><span class="p">:</span><span class="w"> </span><span class="l">5m</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">chart</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">spec</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">chart</span><span class="p">:</span><span class="w"> </span><span class="l">zigbee2mqtt</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">version</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;6.2.1&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">sourceRef</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">kind</span><span class="p">:</span><span class="w"> </span><span class="l">HelmRepository</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">name</span><span class="p">:</span><span class="w"> </span><span class="l">k8s-at-home</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">namespace</span><span class="p">:</span><span class="w"> </span><span class="l">flux-system</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">interval</span><span class="p">:</span><span class="w"> </span><span class="l">1m</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">  </span><span class="nt">values</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">image</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">tag</span><span class="p">:</span><span class="w"> </span><span class="m">1.18.2</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">config</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">homeassistant</span><span class="p">:</span><span class="w"> </span><span class="kc">true</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">permit_join</span><span class="p">:</span><span class="w"> </span><span class="kc">true</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">mqtt</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">base_topic</span><span class="p">:</span><span class="w"> </span><span class="l">zigbee2</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">server</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;mqtt://mosquitto.monitoring&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">serial</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">port</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;tcp://zigbee2-ser2sock:10000&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">frontend</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">port</span><span class="p">:</span><span class="w"> </span><span class="m">8080</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">persistence</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">data</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">enabled</span><span class="p">:</span><span class="w"> </span><span class="kc">true</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">accessMode</span><span class="p">:</span><span class="w"> </span><span class="l">ReadWriteOnce</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span><span class="nt">size</span><span class="p">:</span><span class="w"> </span><span class="l">1Gi</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span><span class="nt">ingress</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">enabled</span><span class="p">:</span><span class="w"> </span><span class="kc">true</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">      </span><span class="nt">hosts</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">        </span>- <span class="nt">host</span><span class="p">:</span><span class="w"> </span><span class="l">zigbee2-dashboard.apps.doofnet.uk</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">          </span><span class="nt">paths</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">            </span>- <span class="nt">path</span><span class="p">:</span><span class="w"> </span><span class="l">/</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">              </span><span class="nt">pathType</span><span class="p">:</span><span class="w"> </span><span class="l">Prefix</span><span class="w">
</span></span></span></code></pre></div><p>Under the <code>values</code> section, you can see a key of <code>config</code>; any subkey of this value is used to create the Zigbee2MQTT configuration YAML file in a special location on the pod, so on the first boot it&rsquo;ll use these values as the defaults. I defined the URL of our <code>ser2sock</code> instance, the details of our MQTT server, and a base topic to use.</p>
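<p>For clarity, the <code>config</code> values above should produce a <code>configuration.yaml</code> on first boot roughly like this (a sketch assembled from the chart values, not a file copied from the pod):</p>

```yaml
homeassistant: true
permit_join: true
mqtt:
  base_topic: zigbee2
  server: "mqtt://mosquitto.monitoring"
serial:
  port: "tcp://zigbee2-ser2sock:10000"
frontend:
  port: 8080
```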
<p>Once done, push it to Flux, wait for it to deploy, and check the pod named <code>zigbee2mqtt-zigbee2</code>:</p>
<pre tabindex="0"><code>zigbee2mqtt-zigbee2 Using &#39;/data&#39; as data directory
zigbee2mqtt-zigbee2 &gt; zigbee2mqtt@1.18.2 start /app
zigbee2mqtt-zigbee2 &gt; node index.js
zigbee2mqtt-zigbee2 Logging to console only&#39;
zigbee2mqtt-zigbee2 Starting Zigbee2MQTT version 1.18.2 (commit #abd8a09)
zigbee2mqtt-zigbee2 Starting zigbee-herdsman (0.13.88)
zigbee2mqtt-zigbee2 zigbee-herdsman started
zigbee2mqtt-zigbee2 Coordinator firmware version: &#39;{&#34;meta&#34;:{&#34;maintrel&#34;:1,&#34;majorrel&#34;:2,&#34;minorrel&#34;:7,&#34;product&#34;:1,&#34;revision&#34;:20210120,&#34;transportrev&#34;:2},&#34;type&#34;:&#34;zStack3x0&#34;}&#39;
</code></pre><p>And I can check the dashboard ingress endpoint I defined in the configuration:</p>




  

<figure style="padding: 0.25rem; margin: 2rem 0;">
  <img style="max-width: 100%; width: auto; height: auto;" src="/posts/2021/zigbee2mqtt-on-kubernetes/z2m_hua72190e4a49f4110de824191195bcd5f_130163_900x0_resize_box_3.png" width="900" height="417">
  <figcaption>
  <small>
    
  </small>
  </figcaption>
</figure>
<p>Success! We have Zigbee2MQTT running in Kubernetes. This solution builds on the hard work of the Zigbee2MQTT team, K8s-at-home, and a few others, but with relatively little effort you can have a hardware-dependent application running quite quickly within Kubernetes.</p>
]]></content></item><item><title>CentOS 8 and the broken SELinux Policy</title><link>https://nikdoof.com/posts/2021/centos8-and-broken-selinux-policy/</link><pubDate>Mon, 22 Mar 2021 17:32:00 +0000</pubDate><guid>https://nikdoof.com/posts/2021/centos8-and-broken-selinux-policy/</guid><description>TASK [www : Enable httpd_read_user_content] ************************************ 04:03:45 459 fatal: [s1.dimension.sh]: FAILED! =&amp;gt; {&amp;#34;changed&amp;#34;: false, &amp;#34;msg&amp;#34;: &amp;#34;Failed to manage policy for boolean httpd_read_user_content: [Errno 0] Error&amp;#34;} The result of a standard weekly AWX run on a system was an error. Failed to manage policy for boolean ...: [Errno 0] Error, not exactly the most helpful error to be spat out by Ansible, but it&amp;rsquo;s all I had. I had recently completed patching on the VM, so the occurrence of the issue can be attributed to an updated package.</description><content type="html"><![CDATA[<pre tabindex="0"><code>TASK [www : Enable httpd_read_user_content] ************************************
04:03:45
459
fatal: [s1.dimension.sh]: FAILED! =&gt; {&#34;changed&#34;: false, &#34;msg&#34;: &#34;Failed to manage policy for boolean httpd_read_user_content: [Errno 0] Error&#34;}
</code></pre><p>The result of a standard weekly AWX run on a system was an error. <code>Failed to manage policy for boolean ...: [Errno 0] Error</code> is not exactly the most helpful error to be spat out by Ansible, but it&rsquo;s all I had. I had recently completed patching on the VM, so the issue could most likely be attributed to an updated package.</p>
<h2 id="the-issue">The Issue</h2>
<p>The Ansible error is a generic catch-all within the <code>seboolean</code> module. The module&rsquo;s Python tries to replicate what the management commands do under the hood, so while the code is complicated, the failure should be easy to reproduce with the CLI commands. Setting the boolean on the running system worked:</p>
<pre tabindex="0"><code># setsebool httpd_enable_homedirs=on
#
</code></pre><p>But applying the boolean permanently failed:</p>
<pre tabindex="0"><code># setsebool -P httpd_enable_homedirs=on
libsepol.context_from_record: type systemd_sleep_exec_t is not defined
libsepol.context_from_record: could not create context structure
libsepol.context_from_string: could not create context structure
libsepol.sepol_context_to_sid: could not convert system_u:object_r:systemd_sleep_exec_t:s0 to sid
invalid context system_u:object_r:systemd_sleep_exec_t:s0
#
</code></pre><p>The underlying error is that the type <code>systemd_sleep_exec_t</code> isn&rsquo;t defined in the SELinux policy, a fairly fundamental part of it, which points to something wrong with the policy files. Searching Red Hat&rsquo;s and CentOS&rsquo;s bug trackers turned up a few other people experiencing similar issues.</p>
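<p>If you hit the same thing, it&rsquo;s worth confirming the diagnosis first. The two commands below are a sketch (<code>seinfo</code> comes from the <code>setools-console</code> package, which may not be installed by default): the first prints the type if it exists in the loaded policy and errors if not, and the second should list exactly one version of each policy package; more than one points at the duplicate-install problem described below.</p>

```
# seinfo -t systemd_sleep_exec_t
# rpm -q selinux-policy selinux-policy-targeted
```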
<h2 id="the-resolution">The Resolution</h2>
<p>I&rsquo;ve previously had an issue where multiple versions of the SELinux policy packages were installed, and removing the old version and re-installing the current one resolved it. This time, only one version was installed, so I re-installed it:</p>
<pre tabindex="0"><code># dnf re-install &#34;selinux-policy*&#34;

...

Reinstalling:
 selinux-policy             noarch  3.14.3-54.el8_3.2 baseos 622 k
 selinux-policy-targeted    noarch  3.14.3-54.el8_3.2 baseos 15 M

...

Reinstalled:
  selinux-policy-3.14.3-54.el8_3.2.noarch
  selinux-policy-targeted-3.14.3-54.el8_3.2.noarch

Complete!
</code></pre><p>And now it works as expected.</p>
<pre tabindex="0"><code># setsebool -P httpd_enable_homedirs=on
#
</code></pre>]]></content></item><item><title>CalDigit TS3+ and a Logitech StreamCam</title><link>https://nikdoof.com/posts/2021/caldigit-ts3-plus-and-a-streamcam/</link><pubDate>Tue, 16 Mar 2021 17:33:14 +0000</pubDate><guid>https://nikdoof.com/posts/2021/caldigit-ts3-plus-and-a-streamcam/</guid><description>When I switched back to an M1 MacBook Air, I picked a CalDigit TS3+ as my desk dock. The M1 suffers from a distinct lack of &amp;lsquo;standard&amp;rsquo; ports, which is excellent for the form factor, but it isn&amp;rsquo;t beneficial for desk usage where you need a collection of accessories plugged in. Thankfully, the CalDigit TS3+ plugs that hole nicely by providing a wide selection of ports and DisplayPort output.
I use my M1 in &amp;lsquo;clamshell&amp;rsquo; mode when at my desk, so I had to look around for a webcam.</description><content type="html"><![CDATA[<p>When I switched back to an M1 MacBook Air, I picked a CalDigit TS3+ as my desk dock. The M1 suffers from a distinct lack of &lsquo;standard&rsquo; ports, which is excellent for the form factor, but it isn&rsquo;t beneficial for desk usage where you need a collection of accessories plugged in. Thankfully, the CalDigit TS3+ plugs that hole nicely by providing a wide selection of ports and DisplayPort output.</p>
<p>I use my M1 in &lsquo;clamshell&rsquo; mode when at my desk, so I had to look around for a webcam. I usually wouldn&rsquo;t bother, but with the current situation, all of our meetings are remote. Having a webcam available makes the meeting flow a little easier, especially when dealing with new people, which was precisely my situation due to starting a new job. When shopping, options were quite limited online, and the cheap camera I bought in the past had tripled in price over the last couple of months. I decided to suck up the cost and pay for a <em>real</em> webcam, something with decent features and image quality, and in the end, I picked up the Logitech StreamCam for an eye-watering £129.</p>




  

<figure style="padding: 0.25rem; margin: 2rem 0;">
  <img style="max-width: 100%; width: auto; height: auto;" src="/posts/2021/caldigit-ts3-plus-and-a-streamcam/streamcam_hu9d1c0878b45849543183f17ceca51867_336549_900x0_resize_q75_box.jpg" width="900" height="675">
  <figcaption>
  <small>
    
  </small>
  </figcaption>
</figure>
<p>The camera itself is jam-packed with features: 1080p 60fps, auto-focus and auto-exposure, USB-C, and even a 1/4&quot; screw mount to place it on a tripod if needed. As it turned out, I picked the wrong camera for my set of devices, and I am dogged by some compatibility problems:</p>
<ul>
<li>No Apple Silicon support in the tools to access the advanced features, and the x86 tools just crash under Rosetta.</li>
<li><a href="https://zerosleeps.com/blog/2021/01/23/caldigit-ts3-plus-logitech-streamcam-and-macos/">Incompatibilities with the StreamCam&rsquo;s 5gbps/10gbps support</a> with the Caldigit TS3+</li>
<li>Very hit-and-miss 60fps support.</li>
</ul>
<p>As for M1 support, I couldn&rsquo;t expect Logitech to have that ready just a month or two after the first actual hardware launch; after all, even <a href="https://helpx.adobe.com/uk/download-install/kb/apple-silicon-m1-chip.html">bigger companies</a> are still trying to support it.</p>
<p>The CalDigit incompatibility was a head-scratcher. I emailed CalDigit&rsquo;s support, and within 24 hours I had an extremely detailed response, which I deleted and so can&rsquo;t share here, but <a href="https://zerosleeps.com/blog/2021/01/23/caldigit-ts3-plus-logitech-streamcam-and-macos/">Scott Macpherson</a> received a similar reply. I&rsquo;ve yet to approach Logitech support, but I expect there won&rsquo;t be any real resolution as it looks more and more like a hardware issue. For the moment, the StreamCam works perfectly in one of the Thunderbolt ports, the front USB-C port, or with a C-to-A adapter in most of the other ports. The only ports that have issues are the 10Gbps group on the back of the TS3+.</p>
<p>As for the final item, I don&rsquo;t have any hard proof, but I suspect it is related to the lack of application support on M1 chips. I can&rsquo;t get any application to use the promised 60fps mode on the camera; maybe it needs the applications to enable it, but I&rsquo;ll have to wait and see if Logitech updates their application. Initially, I thought it was due to using a C-to-A adapter in a USB 3.1 port and that the bandwidth wasn&rsquo;t wide enough for 60fps (USB 2.0 is 480Mbps, something something, but I digress), but now it&rsquo;s plugged into a Thunderbolt port I don&rsquo;t see any marked improvement.</p>
<p>In summary, this was a quick warning about using a Logitech StreamCam with a CalDigit TS3+ or an M1 Mac. I&rsquo;d stay clear if you hit either of these situations at the moment. Maybe a hardware revision or a firmware update could resolve it shortly, but I&rsquo;m not holding my breath.</p>
<p>[Update 2021/03/17]</p>
<p>I spotted a <a href="https://www.reddit.com/r/MacOS/comments/k1ouss/does_logitechs_logi_capture_webcam_software_work/gqikori/?utm_source=reddit&amp;utm_medium=web2x&amp;context=3">comment from Swarm2099 on Reddit</a>: version 2.0.200 of Logitech Capture, designed for macOS 10.14, works on M1 Macs. The version is massively out of date, but at least it&rsquo;s <em>something</em>.</p>
<p>[Update 2021/05/05]</p>
<p>After a short and grumpy tweet on my account, Logitech <a href="https://twitter.com/Logitech/status/1387441855894220802">replied</a> stating that an M1 version of Logitech Capture is in the works, but with no release date defined.</p>
]]></content></item><item><title>Is an expensive keyboard a 'productivity tool'?</title><link>https://nikdoof.com/posts/2021/is-a-keyboard-a-productivity-tool/</link><pubDate>Sun, 14 Mar 2021 12:33:00 +0000</pubDate><guid>https://nikdoof.com/posts/2021/is-a-keyboard-a-productivity-tool/</guid><description>Sometime around 2015 I purchased an MX Master mouse, and I had a gift certificate for PC World (an electronics shop in the UK) which reduced the price to about £40. At the time, it felt like a frivolous purchase, but I had to buy a new mouse as my ageing Microsoft Explorer mouse had finally died after around ten years of service.
&amp;ldquo;Buy it for Life&amp;rdquo; is the idea that you purchase practical, durable, and quality-made products that could potentially last a lifetime.</description><content type="html"><![CDATA[<p>Sometime around 2015 I purchased an MX Master mouse, and I had a gift certificate for PC World (an electronics shop in the UK) which reduced the price to about £40. At the time, it felt like a frivolous purchase, but I had to buy a new mouse as my ageing <a href="https://www.amazon.com/Microsoft-B75-00001-IntelliMouse-Explorer/dp/B00002JXBI">Microsoft Explorer</a> mouse had finally died after around ten years of service.</p>
<p>&ldquo;Buy it for Life&rdquo; is the idea that you purchase practical, durable, and quality-made products that could potentially last a lifetime. You can&rsquo;t &lsquo;buy for life&rsquo; with electronics, but a quality item will readily give you ten years of life. I justified my expensive mouse purchase in terms of value for money. My Explorer had lasted ten years, and if the MX Master lasted another ten then that&rsquo;ll spread the cost to around £9/year.</p>
<p>Keyboards have always been this practical item, a tool to be used, abused, and disposed of when broken. From a £10 PS/2 keyboard I got with a new PC, to a <a href="https://www.amazon.com/Dell-RT7D50-Slim-104-key-Keyboard/dp/B007OY1K7M">Dell Slim 104-key USB keyboard</a> I picked up from eBay because it was the same as the keyboard I was using at work, I never really took any time to think about my primary input device.</p>
<p>Mechanical keyboards have taken off in a big way in the last ten years, driven by what I think is people&rsquo;s desire for something <em>better</em>. I was exposed quite early, around 1998, to an IBM Model M mechanical keyboard that was hidden away with an IBM PS/2 in a family cupboard. At the time I thought it was far too loud to use daily, and I stuck with my terrible rubber-dome keyboard. Roll on ten years and what is old is new again: people are demanding more from their keyboards, and the rise in popularity of the &lsquo;Das Keyboard&rsquo; showed that people still wanted the old clicky mechanisms.</p>
<p>I didn&rsquo;t investigate mechanical keyboards until 2016 when popularity was surging, and many pre-built keyboards were available for the average user. Even then, the choice of switches and price drove me away from the idea. Then in early 2020, when COVID-19 pushed most into the home office, I came to the realization I didn&rsquo;t have a home office desk that I could call my own; the equipment I needed just wasn&rsquo;t there. Like everyone else, I quickly ordered bits online; monitor, desk, headsets, my company provided the bulk of what I needed, but I was still missing a few key elements. When browsing for a keyboard on Amazon I was dissatisfied with the choices, and I couldn&rsquo;t find my favorite Dell keyboard on eBay. Only then did I think back to 2015 and my selection of mice; the MX Master is still chugging on without issues, a bit worn, but still working. I thought it&rsquo;d be nice if I had a keyboard like that.</p>
<p>My initial mech keyboard of choice was an <a href="https://en.akkogear.com/product/silent/">Akko 3068</a>, a 68% keyboard that covered the base keys in a small form-factor, pre-built, and available from China on a reasonable timescale. It was my main workhorse for most of the lockdown, bar the US layout, the loud Cherry MX Brown switches, and the echoing internals, it served me well. Six months later, I had already decided I needed an upgrade. Around that time, <a href="https://caps-unlocked.com">Caps Unlocked</a> was running a group buy (essentially, a pre-order) for their new <a href="https://caps-unlocked.com/cu65-r2/">CU65 keyboard</a>. It has a milled aluminum case, with a hot-swappable PCB so you can change your switches without getting the soldering iron out, and a USB-C interface. Perfect for my new home working desk setup.</p>
<p>March 2021, and I&rsquo;ve finally received my CU65. Like a child with a new toy, I had to get it open, built, and working. It took only 3-4 hours, which I think is good for my first ever build, and I&rsquo;m seriously impressed with the results.</p>
<ul>
<li>Keyboard and switches: £159</li>
<li>Stabilizers: £14.95</li>
<li>Key Caps: £39.99</li>
</ul>
<p>In total, it came to £213.94, which is an eye-watering amount for a keyboard. A well-constructed and repairable keyboard could potentially last ten years. Mechanical switches are rated for upwards of 50 million key presses and cost in the range of 50p to £1 each, so replacing broken switches won&rsquo;t blow the budget.</p>
<p>Let&rsquo;s get back on topic; why would an expensive keyboard be a productivity tool?</p>
<p>The first big point is comfort. Custom keyboards come in all shapes and sizes, from full-size to 65%, to <a href="https://ohkeycaps.com/products/built-to-order-dactyl-manuform-keyboard">Dactyl</a>, so depending on your usage style, you can find a keyboard that suits you. Remember that your keyboard is your primary interface to your computer, and most modern knowledge work is done in front of a computer, so much like investing in a good chair, a good keyboard is a must.</p>
<p>Most custom keyboards are configurable to some degree. <a href="https://qmk.fm">QMK</a> is a custom firmware that allows you to configure and remap keys exactly how you want. It allows you to select the features that you want rather than the choices being made by someone who will never meet you. For example, as a developer, I use certain keys and features on the keyboard; Page Up, Page Down, Insert, and Delete are required within easy reach. While they do come on standard on my 65% keyboard, I could easily remap them to a new location. You can modify the keyboard layout to suit you and keep <em>you</em> productive.</p>
<p>Finally, a longer lifespan means you won&rsquo;t be replacing this keyboard anytime soon, so familiarity will help a lot. People get used to how they use a keyboard, which is why I previously bought one of those Dell Slim keyboards: I was using one daily at work. Once you are familiar with your new device and layout, you&rsquo;ll find that your typing speed will increase, shaving time off any tasks. <a href="https://www.youtube.com/channel/UCoOae5nYA7VqaXzerajD0lg">Ali Abdaal</a> did a video purely on his <a href="https://www.youtube.com/watch?v=eGXbCUXAKYs">mechanical keyboard</a>, and how he can <a href="https://www.youtube.com/watch?v=1ArVtCQqQRE">type fast</a>, so I&rsquo;m not alone in thinking this.</p>
<p>Anyway, I&rsquo;m sure I&rsquo;ve written this post only to justify my purchase in some way. I&rsquo;ll be able to give a longer review of my CU65 in the near future. <em>*clack clack clack*</em></p>
]]></content></item><item><title>Managing My Writing Output</title><link>https://nikdoof.com/posts/2021/managing-my-writing-output/</link><pubDate>Fri, 12 Mar 2021 11:18:16 +0000</pubDate><guid>https://nikdoof.com/posts/2021/managing-my-writing-output/</guid><description>This morning I decided to look at my sites, and I counted seven domains where I create some form of written output. Only one, this site, has a clear-cut definition of what content to place on there. The remaining list of six domains have been sites created on a whim or created to fill a specific need, and splitting my time between them has reduced the amount of quality work I can create on others.</description><content type="html"><![CDATA[<p>This morning I decided to look at my sites, and I counted seven domains where I create some form of written output. Only one, this site, has a clear-cut definition of what content to place on there. The remaining list of six domains have been sites created on a whim or created to fill a specific need, and splitting my time between them has reduced the amount of quality work I can create on others. I had made a promise that I&rsquo;d <a href="https://nikdoof.com/posts/2020/setting-goals-for-a-blog/">create regular content</a> for this blog, and I feel like I am failing.</p>
<p>Finding the right avenue to post my work has always been a struggle for me. In the past, I took to creating new <em>blogs</em> (much like this one) to express a facet of my work and interests, but as I get older I feel that my interests are merging and the lines of separation are no longer there. For example, technology and my <a href="https://nikdoof.com/posts/2020/my-gtd-setup-over-the-years/">Getting Things Done</a> journey have collided in the past ten years, with my productivity system being a creation from the technical aspects of my life and work.</p>
<p>I have to grab hold of the situation and change something, as my diluted content is slowly slipping in quality and frequency. After a short process of mapping out my choices, I&rsquo;ve cut my outputs down to two: this blog for productivity-related output, and another (at a domain to be decided) for my combined technical writing.</p>
<p>I also need to take control of the time I set aside to write. My writing in the previous six months could be described as &lsquo;scattered&rsquo; at best: I&rsquo;d work for an hour or so when I could to create a post, but only if I had an idea of what to write. My new attempt to resolve this problem is to schedule time each week to write, even if the result is a few hundred words and half-cooked. A few hours on Sunday morning is, from now on, dedicated to writing.</p>
<p>Location and time are settled; all that is missing now is <em>what</em> to write about.</p>
<p>I spend every morning reviewing my RSS feeds and save anything I find interesting into Instapaper; I also use Readwise to consolidate my highlights from all my media into a single view. Finding an interesting snippet or a post to bring up in conversation is quite easy with this system, but I don&rsquo;t produce any output from gathering all this data. I&rsquo;m going to try something for a week or two to see if it helps:</p>
<blockquote>
<p>What did you read that was interesting?</p>
</blockquote>
<p>I&rsquo;ll add this to my Daily Tasks in my Journal. Hopefully, when I reach Sunday, I&rsquo;ll have a short list of interesting things I could write about, which should reduce the mental strain of trying to think of a topic.</p>
<p>With my first block of time only three days from now, let&rsquo;s see how it goes.</p>
]]></content></item><item><title>Is Readwise Worth It?</title><link>https://nikdoof.com/posts/2021/is-readwise-worth-it/</link><pubDate>Thu, 04 Feb 2021 09:18:00 +0000</pubDate><guid>https://nikdoof.com/posts/2021/is-readwise-worth-it/</guid><description>I have an idea of how I want to work. Sometimes when I come across an insightful article or book I like to keep note of useful information, interesting concepts, or just quotes. Currently, I&amp;rsquo;m using a collection of different tools that enable this; Instapaper for online media, Kindle highlights for my books, and LogSeq to capture any other media notes. The issue is that all these systems are separate and introduce friction in trying to rediscover interesting content you&amp;rsquo;ve previously digested.</description><content type="html"><![CDATA[<p>I have an idea of how I want to work. Sometimes when I come across an insightful article or book I like to keep note of useful information, interesting concepts, or just quotes. Currently, I&rsquo;m using a collection of different tools that enable this; <a href="https://instapaper.com">Instapaper</a> for online media, Kindle highlights for my books, and <a href="https://logseq.com">LogSeq</a> to capture any other media notes. The issue is that all these systems are separate and introduce <em>friction</em> in trying to rediscover interesting content you&rsquo;ve previously digested. Most of the &ldquo;output&rdquo; you have created is locked away behind UIs that present no easy export options.</p>
<p><a href="https://readwise.io/i/andrew080">Readwise</a> aims to solve this problem, by giving a single location for all your highlights and snippets to coalesce into a single, reviewable view. Their exporting tools allow for quickly moving your important highlights from Readwise to other PKM, such as Notion, and Roam.</p>
<p>Most people look at the price and wonder if it is really worth the $7.99/month price tag. When I first encountered Readwise, it only had a basic Roam export, and the price felt high for what was essentially a fancy Kindle sync. Reviewing the feature set now, the team has added more and more features, and I can start to see the benefit of the tool.</p>
<p>At the time of writing (2021-02-04), Readwise supports the following sources:</p>
<ul>
<li>Kindle</li>
<li>Instapaper</li>
<li>Hypothesis</li>
<li>Goodreads</li>
<li>Medium</li>
<li>Airr</li>
<li>Feedly</li>
<li>Pocket</li>
<li>Twitter</li>
<li>Apple Books</li>
<li>Their own web highlighter</li>
<li>Manual imports from email, CSV, free-form text, photos, and PDFs</li>
<li>Scribd</li>
<li>O&rsquo;Reilly Learning</li>
<li>Google Play Books</li>
<li>Command</li>
</ul>
<p>They&rsquo;re very quickly approaching IFTTT levels of integration for highlights! While I only use five of these sources, I can&rsquo;t wait to see what else the team introduces next.</p>
<p>Readwise posted on their blog <a href="https://blog.readwise.io/why-were-bootstrapping-readwise/">&ldquo;Why we&rsquo;re bootstrapping Readwise&rdquo;</a> which breaks down their decision. In short, they don&rsquo;t feel the product they want to build will appeal to the VCs, so they&rsquo;ve decided to let paying customers drive their future progress. Turning away free users will have an impact, but having a small and loyal customer base will push them in the right direction for the market they want to target.</p>
<p>It comes down to a few questions: would you prefer to pay a premium for a product that fulfills a niche? If so, how much of a premium would you consider paying? By not allowing their product to be driven by a VC, Readwise is concentrating on the core following who helped bootstrap the idea. Too many companies get sucked into the &ldquo;VC thinking&rdquo; of mass-market appeal, slowly transforming into something that doesn&rsquo;t represent their original goals.</p>
<p>Is it a good tool? Yes. Is it for you? Maybe, maybe not. Is it worth the price? Totally.</p>
]]></content></item><item><title>Macbook Air M1 - One Month On</title><link>https://nikdoof.com/posts/2021/macbook-air-m1-review/</link><pubDate>Mon, 25 Jan 2021 00:47:06 +0000</pubDate><guid>https://nikdoof.com/posts/2021/macbook-air-m1-review/</guid><description>Well, it&amp;rsquo;s not strictly one month later, more a month and a bit; pandemic time has the ability to stretch normal time periods out a tad longer than you expect. On the 10th of December 2020 I ordered one of the new MacBook Air M1s, and since the 16th it has been my daily driver laptop. Previously I had been using a Surface Pro 6, but surprisingly its performance started to drop off a cliff just a few months after purchase, mostly due to some minor SSD issues. That was just the start of the problems I would have, but I digress.</description><content type="html"><![CDATA[<p>Well, it&rsquo;s not strictly one month later, more a month and a bit; pandemic time has the ability to stretch normal time periods out a tad longer than you expect. On the 10th of December 2020 I ordered one of the new MacBook Air M1s, and since the 16th it has been my daily driver laptop. Previously I had been using a Surface Pro 6, but surprisingly its performance started to drop off a cliff just a few months after purchase, mostly due to some minor SSD issues. That was just the start of the problems I would have, but I digress.</p>
<p>The big selling point of the new Macbook Air is the M1 chipset; an ARM-based SoC (named &ldquo;Apple Silicon&rdquo;) derived from Apple&rsquo;s work on the A-series processors used in iPhones and iPads for quite a few years. By switching to ARM, Apple has been able to reduce heat and increase battery life to way beyond what is expected in a productivity laptop, while maintaining the processing speed of a good mid-range Intel system.</p>
<p>So, let&rsquo;s talk about performance. I can safely say that the M1 outperforms the Surface Pro 6 by a large margin, and while I expect people to note that the Surface Pro 6 is over 2 years old, I can&rsquo;t overstate how much of a night-and-day difference it is. My workload is mostly based around Visual Studio Code and other simple development applications; on the SP6 it would take about 3-4 seconds to open my VS Code setup, on the M1 it&rsquo;s under a second. Safari opens near-instantly, and I can run several Electron apps with no visible slowdown. The seconds saved here and there make for a more frictionless workflow than I ever experienced with my SP6. From what I&rsquo;ve heard, a similar jump can be felt even between the ARM-based Surface Pro X and the Surface Pro 7.</p>
<p>With all major architecture changes you do hit some snags; the biggest is the lack of &ldquo;Apple Silicon&rdquo; support in some applications, which requires using Rosetta 2 with Intel-based binaries. When you initially load an Intel binary on an M1 you can expect a wait of 5-10 seconds depending on the size of the application, but after that it&rsquo;s back to near instant. The translated Intel applications run slower than native, and it&rsquo;s most obvious in CPU-chewing applications like video rendering and gaming, but for day-to-day apps like Office they <em>feel</em> the same.</p>
<p>Adjusting my workflow has been a bit of a challenge; as it turns out I had switched to a heavily Microsoft Edge-based workflow using Memex and other tools, and having to rediscover tools for MacOS that work with Safari has taken time. While I could switch to using Edge on MacOS, or Chrome, I ideally want to keep in sync with my iOS-based devices now that I can take advantage of the extra features available to me. I&rsquo;m no stranger to MacOS, as I used a 2013 MacBook Pro before my Surface Pro 6, but it&rsquo;s something to consider if you want to switch to using a Mac full time.</p>
<p>As a relatively low-power user, even for my job role, I find the M1 an amazing workhorse for the price, and I&rsquo;m extremely happy to have this as my daily driver. I&rsquo;m hoping that I can make use of this device for a good 5-6 years before I have to consider upgrading again, much like my old MacBook Pro. Apple are pushing hard into ARM/AS and new devices are on the horizon. If you&rsquo;re considering upgrading then I&rsquo;d wait for the next generation of devices to come out, hopefully with an even crazier jump in power than the M1s had.</p>
]]></content></item><item><title>LogSeq for My Second Brain</title><link>https://nikdoof.com/posts/2020/logseq-for-my-second-brain/</link><pubDate>Wed, 14 Oct 2020 09:00:00 +0100</pubDate><guid>https://nikdoof.com/posts/2020/logseq-for-my-second-brain/</guid><description>Admitting something isn&amp;rsquo;t working for you is the first step, next is deciding on what to do next. Less than a month ago I posted how [Notion] was now the key part of my &amp;ldquo;Second Brain&amp;rdquo; and at that point I&amp;rsquo;d spent three weeks building my &amp;ldquo;LifeOS&amp;rdquo;, as its frequently called, in the application and integrating with my Getting Things Done workflow.
In comes LogSeq. LogSeq is an in-development, local-first, non-linear outliner notebook, much in the same thread as Roam Research, Obsidian, and several other tools that are available.</description><content type="html"><![CDATA[<p>Admitting something isn&rsquo;t working for you is the first step; next is deciding what to do. Less than a month ago I <a href="/post/my-gtd-setup-over-the-years/">posted</a> how Notion was now the key part of my &ldquo;Second Brain&rdquo;, and at that point I&rsquo;d spent three weeks building my &ldquo;LifeOS&rdquo;, as it&rsquo;s frequently called, in the application and integrating it with my <a href="/post/getting-things-done-2nd-reading/">Getting Things Done</a> workflow.</p>
<p>In comes <a href="https://logseq.com">LogSeq</a>. LogSeq is an in-development, local-first, non-linear outliner notebook, much in the same thread as <a href="https://roamreasearch.com">Roam Research</a>, <a href="https://obsidian.md">Obsidian</a>, and several other tools that are available. The key difference is that LogSeq will (eventually) be open source. LogSeq is a web application first, and it runs reasonably well on desktop, iPad, and iPhone, allowing for access in any location.</p>




  

<figure style="padding: 0.25rem; margin: 2rem 0;">
  <img style="max-width: 100%; width: auto; height: auto;" src="/posts/2020/logseq-for-my-second-brain/logseq-screenshot_hu172af5eb9c66027f8c7ba228daacb927_131691_900x0_resize_box_3.png" width="900" height="506">
  <figcaption>
  <small>
    
  </small>
  </figcaption>
</figure>
<p>So, why did I choose LogSeq for my Second Brain?</p>
<p>As it turned out, the fixed, formal layout of Notion stifled how I like to browse information; while Notion does now support backlinks, I was essentially looking at a list of notes much like I had them formatted in Evernote. For years the key tool keeping Evernote useable for me was the fully-featured search; deep inspection into PDFs and other file formats let you quickly pick the correct note for the job, and Notion lacks this feature. LogSeq also lacks &ldquo;deep searching&rdquo;, but the application itself dissuades you from just dumping files into it; instead you write concise notes which are empowered by backlinks and &ldquo;unlinked references&rdquo; to discover new relevant information.</p>
<p>Notion and Evernote are fundamentally different tools from LogSeq and its ilk, but as it turns out I was always trying to make a non-linear notebook in a tool not designed for it. Once I started using LogSeq I realized what I had been missing, and it finally allowed me to take notes in the form that I wanted while keeping them useful and searchable.</p>
<p>Of course, like any new application, it doesn&rsquo;t come without its issues. LogSeq has a lot of quirks and issues which will be resolved over time, but for the moment they can be quite troubling.</p>
<ul>
<li>You can&rsquo;t have it open on multiple devices at the same time. As it&rsquo;s essentially using Git under the hood, it is very easy for it to get into a situation where a merge is required to save your data, which it doesn&rsquo;t support at the moment.</li>
<li>Copying and pasting some information &ldquo;crashes&rdquo; the application. As it&rsquo;s all browser-based JavaScript it has a few issues with pasted data; thankfully a quick refresh resolves the problem.</li>
<li>While custom CSS is supported for theming, the base CSS for the site is very complicated and has a lot of overriding. I&rsquo;ve started creating a set of <a href="https://github.com/nikdoof/base16-logseq">base16 themes</a> for LogSeq but it&rsquo;s been quite a challenge.</li>
<li>Markdown isn&rsquo;t parsed into the LogSeq format; pasting in a list of bullet points won&rsquo;t create list blocks, but instead creates a single block with the list of bullets within it.</li>
</ul>
<p><em>These issues exist as of 2020-10-14, and may have been resolved by the time you read this.</em></p>
<p>It is still very early days for LogSeq, you can follow the development on <a href="https://github.com/logseq/logseq">GitHub</a> and the <a href="https://discord.gg/KpN4eHY">Discord community</a>. If you want a more feature-complete product right now I&rsquo;d suggest using Roam Research.</p>
]]></content></item><item><title>My 5 Biggest Problems With Notion</title><link>https://nikdoof.com/posts/2020/my-5-biggest-problems-with-notion/</link><pubDate>Sun, 11 Oct 2020 08:38:10 +0100</pubDate><guid>https://nikdoof.com/posts/2020/my-5-biggest-problems-with-notion/</guid><description>I migrated over to Notion within a few hours of opening an account; my discovery of it coincided with my attempt to re-implement my Getting Things Done workflow with P.A.R.A. style document storage. I had previously been a very heavy user of Evernote, but with the purchase of an Office365 tenant to handle my emails I converted over to OneNote, and then after a while I realized nothing was working for me and I had to think about re-implementing my reference storage.</description><content type="html"><![CDATA[<p>I migrated over to <a href="https://notion.so">Notion</a> within a few hours of opening an account; my discovery of it coincided with my attempt to re-implement my <em>Getting Things Done</em> workflow with <a href="https://fortelabs.co/blog/para/">P.A.R.A.</a> style document storage. I had previously been a very heavy user of Evernote, but with the purchase of an Office365 tenant to handle my emails I converted over to OneNote, and then after a while I realized nothing was working for me and I had to think about re-implementing my reference storage.</p>
<p>Notion sells itself as the &ldquo;all-in-one workspace tool&rdquo;, while primarily built for business and small teams it has found a home with individuals trying to manage their piles of information. The idea is simple, you have two types of data; pages and databases, and with them you can build complex structures much like a wiki.</p>
<p>I&rsquo;ve now been using Notion for a few weeks, and I&rsquo;ve hit on a few issues which I&rsquo;d like to see resolved. I understand that these issues could be trivial to other users, but these are the issues that are affecting me personally.</p>
<h3 id="global-search-for-properties">Global Search for Properties</h3>
<p>Global free text and content search works well. From anywhere in Notion you can hit the <code>Ctrl+P</code> shortcut to bring up a panel, much like Visual Studio Code, and search for freeform text in your pages and databases. My P.A.R.A. resources are stored in a database with a multi-select indicating the topic, but when you attempt to search for a topic you don&rsquo;t get any hits unless that text is in the contents of a page. If Notion allowed the text search to also look at Select and Multi-Select properties, I feel that would open up the usage of these fields. The workaround is to use the search on the database view, which does mean a few extra steps of discovering the database you need, then searching the values in there.</p>
<h3 id="page-locking">Page Locking</h3>
<p>Locking is Notion&rsquo;s way to stop editing on your pages and databases; once the lock is enabled you can&rsquo;t modify the page layout and properties, but sub-pages can still be created without issue. This works wonderfully for my Areas and Projects, where the fields will very rarely change, but for Resources I&rsquo;ve hit a problem that reduces its usefulness.</p>
<p>Again, Multi-selects: I use them for tagging topics, and when I create a new Resource I throw a few topics on the page to allow for easier searching, but when the database is locked you cannot add new items to the Multi-Select. A suggestion would be a mechanism where the lock is granular, say, only allowing additions to selects on chosen fields, rather than locking every element in the database.</p>
<h3 id="offline-mode">Offline Mode</h3>
<p>Offline mode has been one of the biggest issues for the Notion community, and I am by no means the only one with a problem on this subject. Honestly, this is one of my biggest Notion regrets. I moved from Evernote, which has a fully-featured offline mode; the access this gave me was amazing, and I was able to reference documents and notes on the go in any location from my phone. It&rsquo;s a feature I grew to depend on over time, and now that I don&rsquo;t have access to anything similar it is causing issues for my workflow.</p>
<p>Notion are currently working on an offline mode, which I am extremely pleased about, but I think my requirements differ from the majority of Notion users. Rarely do I need to create anything on the go, and I work around this gap by using <a href="https://getdrafts.com/">Drafts</a> on my phone and watch. So in reality I&rsquo;m looking for an &ldquo;offline read-only&rdquo; mode, which I feel would be a lot simpler for Notion to implement.</p>
<h3 id="opening-multiple-items">Opening Multiple Items</h3>
<p>If you wish to look at another page while writing one, you have two options:</p>
<ul>
<li>Click the link and take it full page, then switching back.</li>
<li>Ctrl+Click to open a new tab/instance of Notion and flick between the two running instances.</li>
</ul>
<p>The second option doesn&rsquo;t sound too bad, but it starts a full instance of the Notion UI again on your desktop. Ideally, what I&rsquo;d be looking for is how Evernote works: when you Ctrl+Click a note in Evernote it opens a new window showing just that note.</p>
<h3 id="data-portability">Data Portability</h3>
<p>Notion can export your data as a collection of Markdown and CSV files, but once you have that data you&rsquo;ll quickly realize that the Markdown files are not formatted the way you expect. When using a database to store pages you can assign properties for key information; for example, in my &ldquo;To Read&rdquo; database I have properties for the URL on Good Reads, the author, and the dates when I discovered, read, and finished the book. When you export this data out of Notion it is pasted into the Markdown as just text.</p>
<p>A very common format for Markdown documents, used by a lot of static website generators, is <a href="https://assemble.io/docs/YAML-front-matter.html">YAML front-matter</a>; this allows you to assign metadata in YAML format to your document, and I think this is how Notion should export its Markdown pages. By using YAML front-matter it would be painless to take a Notion export and copy it into a <a href="https://gohugo.io/">Hugo</a> setup to publish the pages outside of Notion&rsquo;s public hosting.</p>
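<p>To illustrate, here&rsquo;s a minimal sketch of the transform I mean: page properties rendered as YAML front-matter above the Markdown body. The helper and the example property values are my own invention, not anything Notion&rsquo;s exporter actually produces:</p>

```python
def to_front_matter(properties: dict, body: str) -> str:
    """Render page properties as YAML front-matter above the Markdown body.

    A minimal sketch: assumes flat string values, which covers simple
    database properties like an author name or a date.
    """
    lines = ["---"]
    for key, value in properties.items():
        # repr() quotes the value, so colons and dates stay valid YAML scalars
        lines.append(f"{key}: {value!r}")
    lines.append("---")
    return "\n".join(lines) + "\n\n" + body

# Hypothetical example in the spirit of my "To Read" database
page = to_front_matter(
    {"title": "The Pragmatic Programmer", "author": "Andrew Hunt"},
    "My reading notes go here.",
)
print(page)
```

<p>With properties emitted like this, a generator such as Hugo would pick the metadata up directly, instead of it being flattened into the body text.</p>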
<h3 id="conclusion">Conclusion</h3>
<p>These may feel like minor issues for Notion, and most of them could be easily resolved, but they can present some real annoyances in someone&rsquo;s workflow. Notion is a product designed for one usage but is being used for another, and I can understand that their primary goal is not to service the PKM crowd, but with a few modifications the product could be outstanding.</p>
]]></content></item><item><title>Clearing Distractions</title><link>https://nikdoof.com/posts/2020/clearing-distractions/</link><pubDate>Wed, 30 Sep 2020 18:20:00 +0100</pubDate><guid>https://nikdoof.com/posts/2020/clearing-distractions/</guid><description>bing&amp;hellip; bing&amp;hellip;
Everyone knows that sound. My phone, watch, tablet, all making noises to say &amp;ldquo;HEY LOOK AT ME&amp;rdquo; and rarely is it anything important. On a normal day, I have about 60-100 notifications on my devices; Each one of these pulls my attention away from my current focus, and while I&amp;rsquo;ve got better at managing these &amp;ldquo;micro interruptions&amp;rdquo; it is still just another interruption.
Over the past couple of years Microsoft, Apple, and Google have all worked on bringing distraction-free functions to their OSes and devices.</description><content type="html"><![CDATA[<p><em>bing</em>&hellip; <em>bing</em>&hellip;</p>
<p>Everyone knows that sound. My phone, watch, tablet, all making noises to say &ldquo;<strong>HEY LOOK AT ME</strong>&rdquo; and rarely is it anything important. On a normal day, I have about 60-100 notifications on my devices; Each one of these pulls my attention away from my current focus, and while I&rsquo;ve got better at managing these &ldquo;micro interruptions&rdquo; it is still just another interruption.</p>
<p>Over the past couple of years Microsoft, Apple, and Google have all worked on bringing distraction-free functions to their OSes and devices. Windows 10 introduced &ldquo;Focus Assist&rdquo; and Apple has &ldquo;Do Not Disturb&rdquo; settings, but even those only take away the burden for a period of time, and the notifications are still building up behind the scenes.</p>
<p>So I thought: what if I just do away with these notifications, and prune the list of applications that I allow to notify me down to a few essentials? I already do this with my Apple Watch, which was more out of absolute frustration at getting a wrist notification of an upcoming SpaceX launch from the <a href="https://www.kennedyspacecenter.com/">KSC app</a>. Maybe now it is time to do the same with my iPhone.</p>
<h2 id="notification-permissions">Notification Permissions</h2>
<p>First off I took my iPhone and mass purged notification privileges from as many apps as I could. The first issue I encountered was seemingly important applications with notifications enabled, but with no details about <em>what</em> would be sent.</p>
<ul>
<li>Health - Does that mean any heart rate notifications?</li>
<li>FaceTime - Does that affect when people call me?</li>
<li>Steam - I&rsquo;m guessing I can&rsquo;t use Steam Guard with it off?</li>
</ul>
<p>Some applications break down their notification types into groups; Photos, for example, breaks down its notifications into &ldquo;Memories&rdquo;, &ldquo;Shared Albums&rdquo; and &ldquo;Sharing Suggestions&rdquo;. It&rsquo;s useful to have that granularity, but I don&rsquo;t care about any of the options presented to me. If all applications had this option it&rsquo;d be very useful for reducing the noise.</p>
<p>After the initial cull I&rsquo;m left with 41 applications that have some level of notifications enabled.</p>
<h2 id="sounds">Sounds</h2>
<p>The pesky dings and vibrates, the big interrupter themselves. In almost all cases I don&rsquo;t want to be interrupted by a notification, so I disabled these on all applications except the critical applications I need to hear from:</p>
<ul>
<li>Phone - for obvious reasons</li>
<li>OpsGenie - for those impending call-outs from work</li>
<li>Health - if it detects something I need to be notified about, I want to know&hellip;</li>
<li>Telegram - my wife&rsquo;s preferential communication method</li>
</ul>
<p>This alone cut down my &ldquo;micro interruptions&rdquo; a lot, no more detecting every little vibration through the desk and checking to see what it is, if it&rsquo;s vibrating it&rsquo;s most likely important (or my wife sharing a <a href="https://catanacomics.com/">Catana comic</a> with me).</p>
<h2 id="banners">Banners</h2>
<p>Banner notifications are the next tier of notifications to be checked over. These are the on-screen prompts that you get, the main interrupters. By default notification pop on the screen, hang around on your lock screen, and also sit in the Notification Centre. iOS has an option to control all three of these elements, so the next questions for each of the 41 applications were:</p>
<ol>
<li>If I&rsquo;m using my phone, do I need to know about it <em>right now</em>?</li>
<li>If not, do I need to see it the second I look at my phone?</li>
<li>If not, then do I need to know about it at all?</li>
</ol>
<p>As it turns out, I don&rsquo;t need most applications&rsquo; notifications <em>right now</em>; Notification Centre is fine, and even the lock screen is a push for most of them. Wallet and banking notifications? Sure, I want to know about those as soon as I look at my phone, but I don&rsquo;t need to see the contents of my email inbox that second.</p>
<h2 id="badges">Badges</h2>
<p>Badges are useful, especially for messaging applications. In most cases a badge is all I need to see; for example, my emails are not massively important and don&rsquo;t need instant attention, but if the application has a notification then it&rsquo;ll appear as a badge on the application icon. After removing banners for most applications this seems to be the perfect balance: with the majority of my important applications on my home screen in groups, I can quickly see that I&rsquo;ve got notifications and decide to look at them <em>on my terms</em>. My home screen can look peppered with red &ldquo;8&rdquo;s and other numbers, but at least I can check them when I feel like it, rather than have the notifications flashed in front of my eyes.</p>
<h2 id="the-week-so-far">The Week So Far&hellip;</h2>
<p>It has been a good 48 hours now since I made the change, and I have to say it has been bliss. My phone now sits screen up on my desk, I don&rsquo;t have it dragging my eyes every minute or so when the screen lights up. Sounds and vibrations are mostly a thing of the past and reserved only for the most important items. Since implementing the changes on my iPhone I&rsquo;ve been able to switch back to &ldquo;Mirror my iPhone&rdquo; on my Apple Watch without any negative consequences.</p>
<p>Of course, it can&rsquo;t be all positives: I&rsquo;m missing my critical emails. When you log in to Netflix and other services online you&rsquo;ll get an email notification saying that a new device has logged in; with my lockdown of all notifications from Outlook I&rsquo;m also missing these. It&rsquo;d be useful if Outlook had the option to mute all notifications except for one folder in your account; while the option exists for the &ldquo;Focused Inbox&rdquo;, it doesn&rsquo;t for any other folder.</p>
<p>I expect I&rsquo;ll be writing a follow-up article in a month or so to discuss further issues I&rsquo;ve found, or just to rave about how much it has changed my life. One thing that stands out is that my battery usually lasts just about the full day, and at the moment it&rsquo;s 6 pm and I&rsquo;m at <em>72%</em>&hellip; wow.</p>
]]></content></item><item><title>I Regret Picking a Surface Pro</title><link>https://nikdoof.com/posts/2020/i-regret-picking-a-surface-pro/</link><pubDate>Fri, 25 Sep 2020 10:52:08 +0100</pubDate><guid>https://nikdoof.com/posts/2020/i-regret-picking-a-surface-pro/</guid><description>I walked away from Apple, my Macbook Pro, and OSX about a year ago. It was time for an upgrade, and I spent time looking across the market to decide what my next workhorse machine should be; driven mostly by specs, I ended up selecting a Surface Pro 6, and unfortunately, I made a mistake. The Surface Pro, even after six iterations of the product, feels half-cooked; it tries to do everything but masters none of it.</description><content type="html"><![CDATA[<p>I walked away from Apple, my Macbook Pro, and OSX about a year ago. It was time for an upgrade, and I spent time looking across the market to decide what my next workhorse machine should be; driven mostly by specs, I ended up selecting a Surface Pro 6, and unfortunately, I made a mistake. The Surface Pro, even after six iterations of the product, feels half-cooked; it tries to do everything but masters none of it.</p>
<h3 id="touch-interface">Touch Interface</h3>
<p>Windows&rsquo; touch interface is amazingly immature, and you come across interface issues in numerous apps where it is obvious the developer never thought about touch. It doesn&rsquo;t help that the mouse and keyboard are still the primary interface for Windows, so application developers rarely consider touch at all.</p>
<p>I suppose this is one area where Apple and iPadOS succeeded: they started with touch only and slowly progressed toward keyboard and mouse interfaces, though it feels incredibly unfair to compare products developed from such different viewpoints. I don&rsquo;t think Windows will ever be truly touch native to the level of iPadOS, and for future purchases I may actually avoid touch-screen devices and pick up a recent iPad for all my touch and writing needs.</p>
<h3 id="soft-keyboard">&ldquo;Soft&rdquo; Keyboard</h3>
<p>While the keyboard does indeed function as a keyboard, the lack of a hard hinge really restricts the usability of the Surface Pro; if you don&rsquo;t have enough space for that kickstand then it&rsquo;s not worth even trying to use the device. You can get third-party options to give it a hard keyboard, but this is really a mistake on my part: I never thought I was a lap user of a laptop, and it turns out I am&hellip;</p>
<h3 id="hardware-specs--quality">Hardware Specs &amp; Quality</h3>
<p>Originally, when I bought the Surface Pro, I was doing a spec-for-spec comparison between the latest Macbook and the Surface Pro, which I feel was my biggest mistake. Again, Apple works hard on that software and hardware integration, and manages to squeeze every little ounce of performance out of a relatively mid-spec machine. For the Surface, it seems that while Microsoft has done some performance tweaks and firmware modifications for speed, it fails terribly on the quality of the software itself.</p>
<p>The Surface range has a lot of firmware issues. One big one for me is that sometimes the charger just stops being detected; it won&rsquo;t light up and charge, and before you realise it you&rsquo;re on 10% battery in the middle of a project, worrying that your power brick is dead.</p>
<p>Another issue I&rsquo;ve experienced is the 400MHz clock issue: the CPU thinks it&rsquo;s overheating and the thermal throttle kicks in, totally expected for a system with no fans in it, but it never <em>stops</em>. The only fix is either to run a piece of software to force a flag off on the CPU, or a hard reset of the hardware itself. Thankfully this has now been fixed, but it took Microsoft over 6 months to acknowledge the problem and fix the firmware.</p>
<h3 id="what-next">What Next?</h3>
<p>Honestly, I may go back to a Mac. I really like my Windows environment, and I&rsquo;d love to have a Linux laptop, but my workflow is so dependent on Mac/Win tools that I can&rsquo;t step away at the moment. Maybe I&rsquo;ll look at Lenovo or Dell for my next device&hellip;</p>
]]></content></item><item><title>Getting Things Done - 2nd Reading</title><link>https://nikdoof.com/posts/2020/getting-things-done-2nd-reading/</link><pubDate>Wed, 23 Sep 2020 09:00:00 +0100</pubDate><guid>https://nikdoof.com/posts/2020/getting-things-done-2nd-reading/</guid><description>My first reading of Getting Things Done (GTD) by David Allen was back around 2007. I was inspired by Merlin Mann to pick up the book and give it a try, and while I never did finish that first full read-through, it changed how I thought of task management. For years and years I&amp;rsquo;d been a mess when it came to fulfilling my commitments; like the examples in GTD, everything was stored in my memory in a perpetual vortex of uncertainty and stress about what I had to complete next.</description><content type="html"><![CDATA[<p>My first reading of <em><a href="https://amzn.to/3jQXV78">Getting Things Done</a></em> (GTD) by David Allen was back around 2007. I was inspired by <a href="http://www.43folders.com/">Merlin Mann</a> to pick up the book and give it a try, and while I never did finish that first full read-through, it changed how I thought of task management. For years and years I&rsquo;d been a mess when it came to fulfilling my commitments; like the examples in GTD, everything was stored in my memory in a perpetual vortex of uncertainty and stress about what I had to complete next. At the age of 23 I decided to grab hold of what I had and make changes for the better, and GTD seemed like the key to solving my problems.</p>
<p>Fifteen years later and I&rsquo;m swiftly approaching my 39th birthday, so it&rsquo;s time to pick up the book again and refresh myself. This time it&rsquo;s with the 2015 edition of the book; the fundamentals are still there, but it has had small updates for the ever-moving state of technology.</p>
<h3 id="in-a-nutshell-for-newcomers">In a nutshell, for newcomers</h3>
<p>The idea of <em>Getting Things Done</em> is better living through lists: getting all those pesky tasks out of your head and into a <em>trusted system</em>. A trusted system could mean one of many things: your pocket notebook, a to-do list app, the back of a napkin, or post-it notes on your monitor. The system doesn&rsquo;t have to be something tried and tested, it only has to be something that <em>you</em> can trust to keep your lists. Once everything is out of your head you can start <em>doing</em> rather than worrying about what you&rsquo;re forgetting.</p>
<p>Other elements include the &ldquo;Inbox&rdquo;, a method to store items for later review that may be added to your task list; a reference filing system to keep important information filed away and easily accessible; and guidance on how to manage projects from start to finish.</p>
<p>By combining these techniques David hopes that you can attain <em>&ldquo;Mind Like Water&rdquo;</em>, in that you can react quickly and efficiently to new issues that arise without having that drowning feeling of hundreds of to-do list items building up on you.</p>
<p>Now, back to my reading.</p>
<h3 id="projects-reviewed">Projects, reviewed</h3>
<p>After my second reading it became obvious I have been doing projects wrong, very wrong. I&rsquo;m not sure where my initial misunderstandings came from, maybe it was worded differently in an older revision, but this time it clicked. The biggest takeaway was if you look at a next action and it <strong>needs more than one step to complete that action</strong> then it probably should be a project. It sounds stupid, but take these examples:</p>
<ul>
<li>Buy new household contents insurance.</li>
<li>Pack my suitcase for holiday.</li>
<li>Clean the house.</li>
</ul>
<p>I can hear you saying <em>&ldquo;Wait a minute, they&rsquo;re next actions!&rdquo;</em>, let me show why they&rsquo;re projects:</p>
<p><strong>Buy new household contents insurance.</strong></p>
<ul>
<li>Create a contents inventory and valuation for the house</li>
<li>Research which providers have the required cover</li>
<li>Use a price comparison website to find the cheapest</li>
<li>Purchase insurance</li>
</ul>
<p><strong>Pack my suitcase for holiday</strong></p>
<ul>
<li>Create a packing list of items</li>
<li>Wash any clothing that is needed</li>
<li>Pack items into packing cubes</li>
<li>Pack toiletries</li>
<li>Pack suitcase</li>
</ul>
<p><strong>Clean the house</strong></p>
<ul>
<li>Buy replacement bleach</li>
<li>Vacuum Living Room</li>
<li>Dust Shelves</li>
<li>Empty Trash from bathroom bins</li>
</ul>
<p>While it&rsquo;s much easier to bulk all these items into a single next action, you&rsquo;re not really breaking it down into atomic tasks that can be done individually in a short amount of time. When grouped together, &ldquo;Clean the house&rdquo; is a two-hour job, whereas &ldquo;Dust Shelves&rdquo; is a ten-minute task that can be done in isolation. By breaking up the tasks you reduce the amount of time required to complete each one; people generally only have a few hours of spare time to get things done, and these types of projects can be chipped away at in ten-minute chunks throughout the week.</p>
<h3 id="the-power-of-the-somedaymaybe">The power of the &lsquo;Someday/Maybe&rsquo;</h3>
<p>Another sticking point I had was the use of the Someday Maybe lists.</p>
<blockquote>
<p>Make an Inventory of Your Creative Imaginings: What are the things you really might want to do someday if you have the time, money, and inclination? Write them on your Someday/Maybe list.</p>
</blockquote>
<p>After reading this in the book, I suddenly looked over at my to-do list of hobby projects and realised I&rsquo;d been creating another Someday/Maybe list away from my main list. As for why, I have no idea. My main list seemed to be reserved for the big-picture goals: retire, progress my career, complete my bucket list.</p>
<p>Having a separate list is by no means a bad thing, even David recommends it, but for me it had a mental distinction from a Someday/Maybe list, when in reality it&rsquo;s exactly the same thing.</p>
<h3 id="the-horizons-of-focus">The &lsquo;Horizons of Focus&rsquo;</h3>
<p>This was another topic that confused me. Maybe it was due to my age, or maybe that I&rsquo;ve never had &ldquo;horizon 0&rdquo; under control, but I never really saw the point of the exercise.</p>
<blockquote>
<p>What are your key goals and objectives in your work? What should you have in place a year or three years from now? How is your career going? Is this the lifestyle that is most fulfilling to you? Are you doing what you really want or need to do, from a deeper and longer-term perspective?</p>
</blockquote>
<p>The higher levels are not really about day-to-day work, rather keeping a list of what will drive your future projects, plans, and your Someday/Maybe list. Introducing &ldquo;horizon 3&rdquo;, or &ldquo;Areas of Responsibility&rdquo;, has been a key driver for a lot of new actions and projects in my system. It may feel obvious to collect next actions into logical groups, and you may already do this by using contexts, but by keeping those groupings outside of your system you just have another thing in your mind to hold on to. My current list of areas is:</p>
<ul>
<li>Self</li>
<li>Finances</li>
<li>Home</li>
<li>Family</li>
<li>Job 1</li>
<li>Job 2</li>
<li>Hobby 1</li>
<li>Hobby 2</li>
</ul>
<p>Much like a trigger list, those areas drive my mind in the right direction and usually make it easier to identify new actions and items for my inbox.</p>
<h3 id="dont-be-strict-with-contexts">Don&rsquo;t be strict with contexts</h3>
<blockquote>
<p>Before I go on a long trip, I will create “Before Trip” as a temporary category into which I will move everything from any of my action lists that must be handled before I leave. That becomes the only list I need to review, until they’re all done.</p>
</blockquote>
<p>Contexts are a tool. I used to have them as a fixed list in Nirvana (Calls, Internet, Office) and spent time tagging all actions against only that list. That quote from David changed my thinking: your context list should adapt to what is coming up in the near future, and if you come across a task that would work better in a new context, why not create one?</p>
<p>A prime example of this: a few years ago I went to a wood turning symposium with a specific list of things I wanted to achieve while I was there; buy this, buy that, meet x, pick up a copy of y. I was building a shopping list of actions to complete at the hotel hosting it, when that could have just been a single context I created and solely focused on for the weekend I was there.</p>
<h3 id="conclusion">Conclusion</h3>
<p>For most people who have read <em><a href="https://amzn.to/3jQXV78">Getting Things Done</a></em> I&rsquo;m sure my points are nothing outstanding, but it&rsquo;s always good to grow and learn from previous mistakes. GTD is by no means a perfect system, nor is it a set of fixed rules to follow; everyone has their own implementations and quirks. It&rsquo;s always useful to re-read the source material from time to time, as looking at it from your current perspective can bring new insights, and maybe fix an issue or two you&rsquo;re currently having. I can safely say I have a lot to re-implement, as my personal system has drifted, for the worse, from the ideal.</p>
]]></content></item><item><title>My 'Getting Things Done' Setup Over the Years</title><link>https://nikdoof.com/posts/2020/my-gtd-setup-over-the-years/</link><pubDate>Sat, 19 Sep 2020 08:23:53 +0100</pubDate><guid>https://nikdoof.com/posts/2020/my-gtd-setup-over-the-years/</guid><description>I&amp;rsquo;ve been a half invested Getting Things Done (GTD) practitioner for the better part of ten years, probably even longer. Over time my workflow has changed from a basic Filofax and paper notes to numerous applications, Todoist and Evernote were the longest serving applications by far, but even they headed towards the chopping block once the subscription fees started ramping up.
For the longest time I ran a modified version of NextAction which did most of the heavy process lifting in Todoist, but when it came down to it the script was just a kludge to make a non-GTD system work with GTD ideals.</description><content type="html"><![CDATA[<p>I&rsquo;ve been a half-invested <a href="https://gettingthingsdone.com/">Getting Things Done</a> (GTD) practitioner for the better part of ten years, probably even longer. Over time my workflow has changed from a basic Filofax and paper notes to numerous applications; Todoist and Evernote were the longest-serving by far, but even they headed towards the chopping block once the subscription fees started ramping up.</p>
<p>For the longest time I ran a modified version of <a href="https://github.com/nikdoof/NextAction">NextAction</a> which did most of the heavy process lifting in <a href="https://todoist.com">Todoist</a>, but when it came down to it the script was just a kludge to make a non-GTD system work with GTD ideals. <a href="https://nirvanahq.com">NirvanaHQ</a> was designed from the ground up with GTD in mind, and with that a lot of the processes and ideas in GTD worked seamlessly out of the box without any modifications. Moving my workload was easy and after taking a day or two to trial the system I jumped in with a lifetime subscription.</p>
<p>My filing system started out as a basic <a href="https://evernote.com">Evernote</a> account, initially spurred by the good reviews from <a href="http://www.43folders.com/">Merlin Mann</a> and other names in the early GTD community. Evernote seemed to cover the essentials: somewhere to store your documents, with a well-thought-out organisation and search system. It worked; I gathered around 2,000 documents within a short time period, and most of my paper documentation was quickly scanned, stored, and shredded.</p>
<p>Within a few years, Evernote started ramping up their subscription fees. While I can&rsquo;t argue the service wasn&rsquo;t worth paying for, the issue I had was that I was paying for Office365 at the same time, which included <a href="https://www.onenote.com">OneNote</a> and a sizeable amount of storage. During that time I was trying to reduce my overall subscription costs and decided that Evernote was an easy tool to be rid of, so I migrated to OneNote as my filing system.</p>
<p>NirvanaHQ and OneNote have been the cornerstone of my system for a good three years, and yet again I&rsquo;ve recently been shown a new system: <a href="https://notion.so">Notion</a>.</p>
<p>Notion came along and really changed my view of what a filing system could be: it has the concept of Pages and Databases, a type of flexibility that OneNote or Evernote could never offer. While they both have the concept of tables, they were never truly databases that could interlink data.</p>
<p>About the same time I came across the <a href="https://fortelabs.co/blog/para/">PARA Method</a>, which works well with a GTD system to provide some structure and management around your filing system. In the first GTD book the filing system is referenced as a store that needs reviewing and cleaning out every so often; PARA splits the mass into &ldquo;Areas&rdquo; and &ldquo;Projects&rdquo;, much like GTD, and assigns the useful information storage to &ldquo;Resources&rdquo;.</p>
<p>Notion has allowed me to build my PARA storage system in a very structured way: I can define a database for areas, projects, and resources, while allowing the records within to be free-form and formatted however they need to be. PARA is by no means restricted to a certain set of tools, though it&rsquo;s worth noting that Tiago&rsquo;s blogs do use Evernote as the example data store.</p>
<p>The only item I&rsquo;m missing from Notion is something Evernote did really well: storage of actual files. I&rsquo;ve amassed a large collection of PDFs of scanned payslips, copies of instruction manuals, and receipts from that expensive but fragile item. OneNote had the same issue in that it never really liked the concept of storing a file; Notion is the same, but I&rsquo;m sure that over time it&rsquo;ll resolve itself. For the moment the raw files are filed away on OneDrive, in what seems to be the last bastion of chaos in my system.</p>
<p>My system is now comprised of the following:</p>
<ul>
<li>Task/List Management - <a href="https://stream.tensixtyone.com/#NirvanaHQ">NirvanaHQ</a></li>
<li>Reference System - <a href="https://notion.so">Notion</a></li>
<li>Calendar - <a href="https://office365.com">Office 365</a> Calendar</li>
<li>File Storage - <a href="https://office365.com">OneDrive</a></li>
</ul>
<p>I&rsquo;ve also added a <a href="/workflow">new item</a> to the header of the site to track my workflow tools, and I&rsquo;ll be keeping it up to date with what I&rsquo;m using.</p>
]]></content></item><item><title>Setting Goals for a Blog</title><link>https://nikdoof.com/posts/2020/setting-goals-for-a-blog/</link><pubDate>Tue, 15 Sep 2020 16:37:55 +0100</pubDate><guid>https://nikdoof.com/posts/2020/setting-goals-for-a-blog/</guid><description>It is not unusual to find many people jaded at best about the value of goal-setting, given the stress created by what are often perceived as artificial expectations decreed from on high. - David Allen
Nothing sums up my experiences with goals better than that quote. I&amp;rsquo;ve been a recurrent &amp;ldquo;goal breaker&amp;rdquo; for many years, in that every goal I set myself I either fail or ignore after a short period of time.</description><content type="html"><![CDATA[<blockquote>
<p>It is not unusual to find many people jaded at best about the value of goal-setting, given the stress created by what are often perceived as artificial expectations decreed from on high. - <a href="https://gettingthingsdone.com/2017/12/big-secret-goal-setting/">David Allen</a></p>
</blockquote>
<p>Nothing sums up my experiences with goals better than that quote. I&rsquo;ve been a recurrent &ldquo;goal breaker&rdquo; for many years, in that every goal I set myself I either fail or ignore after a short period of time. Getting the right balance in a goal can be difficult, almost a skill in its own right. Most people struggle with either defining the scope of something monumental and endless (e.g. <em>Progress my Career</em>), or keeping it too easy (e.g. <em>Don&rsquo;t dine out for a month</em>) and feeling unfulfilled when it&rsquo;s completed.</p>
<p>A lot of discussion about goal setting has been done by almost every productivity &ldquo;guru&rdquo; out there. You can find hundreds of articles talking about the best way to do it, but I&rsquo;m not here to discuss that, more to explain how I came to the goal I&rsquo;ve defined for this site.</p>
<p>Starting a project like this site is useless unless you have something to aim for. Writing out into the void is a worthy cause, but if you&rsquo;ve got nothing to drive you then your willpower to continue will drift away after a short time. Personally I want to write <em>something</em>, be it technical and informational or just airing my mental drool, but I want to do it consistently and with purpose.</p>
<h4 id="frequency">Frequency</h4>
<p>I want to post frequently, but not place such a strain on my free hours that it causes other obligations to slip, or that the site is pushed aside by other pressing issues. In the past I&rsquo;ve set myself a goal of posting daily, which was a huge mistake: within two weeks I was three days behind. This time I&rsquo;ll strike a sensible balance with my available time. I&rsquo;m lucky in that I can enjoy my weekends relatively interruption-free, so I&rsquo;m able to ensure I get at least an hour to put together an article.</p>
<h4 id="content">Content</h4>
<p>I want the articles to be a length worthy of posting, rather than short snippets. I&rsquo;ve previously fallen into the trap of producing content that was nothing more than a few lines of semi-related information thrown together to form some sort of post. What does that actually achieve? Yes, I&rsquo;m ticking off my goals, but essentially I was cheating myself of a satisfying result.</p>
<p>I want the site to have some consistent and meaningful content which people would like to read, keeping to the overall theme of the site. Again, referring back to some old blogs, I&rsquo;ve deviated off-topic too many times, potentially losing readers because of it. If I&rsquo;ve decided on a topic, I should stick to it.</p>
<h4 id="the-goal">The Goal</h4>
<p><em>Post at least one new article a week, and the content should be meaningful and keep with the overall topic and theme of the site.</em></p>
<p>With the goal set, I can start thinking about next actions and projects, defining the scope all with an aim to complete this <em>open-ended</em> goal.</p>
]]></content></item><item><title>ShopDisney's Problems in a Lockdown world</title><link>https://nikdoof.com/posts/2020/the-shopdisney-problem/</link><pubDate>Tue, 11 Aug 2020 17:33:12 +0100</pubDate><guid>https://nikdoof.com/posts/2020/the-shopdisney-problem/</guid><description>Note: the issues outlined below are resolved, this was originally posted 2020-06-09 on another site.
ShopDisney is suffering from a bot problem. With the parks and stores shut, Disney now sell limited edition items through the ShopDisney website, in the past, the stock has been equally split between the stores and online to avoid a single location being the only marketplace for the items. Since lockdown has begun, the technical collectors have worked out the best way to game the ShopDisney website to access limited edition items before everyone else.</description><content type="html"><![CDATA[<p><em>Note: the issues outlined below are now resolved; this was originally posted 2020-06-09 on another site.</em></p>
<p>ShopDisney is suffering from a bot problem. With the parks and stores shut, Disney now sell limited edition items through the ShopDisney website. In the past, stock was split equally between the stores and online, to avoid a single location being the only marketplace for the items. Since lockdown began, the technical collectors have worked out the best way to game the ShopDisney website to access limited edition items before everyone else.</p>
<p>Today, I aim to make this information a little more public to allow Disney to consider fixing it.</p>
<p>So we have the following facts of how ShopDisney works:</p>
<ul>
<li>Each item has a Product Code. This product code is used everywhere on the ShopDisney website: pages, image references, XHR calls, and so on.</li>
<li>Visiting <code>shopdisney.co.uk/&lt;product id&gt;</code> redirects you to the correct page for the product, even when not actively for sale.</li>
<li>The CDN for the ShopDisney website is hosted by Adobe Scene7.</li>
<li>Scene7 has no rate limiting or banning of frequent requests.</li>
<li>Images are available on the CDN even when the product is unavailable to purchase.</li>
</ul>
<p>Interested parties could for example:</p>
<ul>
<li>Take a Product ID of the same type of item you&rsquo;re interested in, say a pair of limited edition Ears.</li>
<li>Iterate the ID and hit the CDN until you get a 2xx response.</li>
<li>Hit <code>https://shopdisney.co.uk/&lt;id&gt;</code> and get redirected to the correct URL for the product.</li>
<li>Log the URL and image.</li>
</ul>
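<p>As a concrete illustration, here&rsquo;s a minimal sketch of that loop in Python. The product-code format and CDN URL shape are entirely hypothetical; the real Scene7 paths and ShopDisney codes differ:</p>

```python
# Hypothetical sketch of the enumeration described above. The code format and
# URL patterns are illustrative assumptions, not ShopDisney's actual scheme.

def candidate_codes(seed, count):
    """Generate product codes following a known seed, assuming a numeric suffix."""
    prefix, number = seed[:-6], int(seed[-6:])
    return [f"{prefix}{number + offset:06d}" for offset in range(1, count + 1)]

def cdn_image_url(code):
    # Assumed CDN URL shape, not Adobe Scene7's real path layout.
    return f"https://cdn.example.com/is/image/{code}"

# A scraper would HEAD each cdn_image_url(code); any 2xx response means the
# product exists, so it then fetches shopdisney.co.uk/<code> and logs the
# redirect target and image for the unreleased item.
```

<p>Basic rate limiting on the CDN, or returning 404s for assets whose products aren&rsquo;t yet live, would break the first step of this loop.</p>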
<p>After a few hours, it is possible to discover a slew of items that will shortly be available on ShopDisney. Others have reported that the XHR-based shopping basket system trusts the client when adding items to the basket. You won&rsquo;t be able to check out the basket items, but with popular items, the few seconds you&rsquo;ll save by having the items in your basket already will give you a significant advantage.</p>
]]></content></item><item><title>Recovering The Past</title><link>https://nikdoof.com/posts/2020/recovering-the-past/</link><pubDate>Thu, 09 Jul 2020 18:17:00 +0100</pubDate><guid>https://nikdoof.com/posts/2020/recovering-the-past/</guid><description>Over the past two weeks i&amp;rsquo;ve been hit by somewhat of a nostalgia trip, after coming across a copy of my old blog on archive.org and not having a suitable backup of the content itself i&amp;rsquo;ve decided to try and preserve my posts in a useful format for the future.
The Format I spent some time on what format I would ideally like to keep these posts in, text with some simple markup for links and formatting.</description><content type="html"><![CDATA[<p>Over the past two weeks I&rsquo;ve been hit by somewhat of a nostalgia trip. After coming across a copy of my old blog on <a href="https://archive.org">archive.org</a>, and not having a suitable backup of the content itself, I&rsquo;ve decided to try and preserve my posts in a useful format for the future.</p>
<h3 id="the-format">The Format</h3>
<p>I spent some time on what format I would ideally like to keep these posts in: text with some simple markup for links and formatting. The obvious choice was <strong>Markdown</strong>, sticking to the true core of the markup and not using any of the extensions or plugins available for most parsers these days.</p>
<p>Of course not everything should be embedded in the post, so I took pointers from Hugo and Jekyll and worked the posts into a YAML/Markdown hybrid. The YAML document delimiters make it easy to parse these documents with a few lines of Python:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-python" data-lang="python"><span class="line"><span class="cl"><span class="kn">import</span> <span class="nn">yaml</span>
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl"><span class="k">with</span> <span class="nb">open</span><span class="p">(</span><span class="n">args</span><span class="o">.</span><span class="n">file</span><span class="p">)</span> <span class="k">as</span> <span class="n">fobj</span><span class="p">:</span>
</span></span><span class="line"><span class="cl">    <span class="n">raw_doc</span> <span class="o">=</span> <span class="n">fobj</span><span class="o">.</span><span class="n">read</span><span class="p">()</span>
</span></span><span class="line"><span class="cl">
</span></span><span class="line"><span class="cl"><span class="n">_</span><span class="p">,</span> <span class="n">header</span><span class="p">,</span> <span class="n">text</span> <span class="o">=</span> <span class="n">raw_doc</span><span class="o">.</span><span class="n">split</span><span class="p">(</span><span class="s1">&#39;---&#39;</span><span class="p">,</span> <span class="mi">2</span><span class="p">)</span>
</span></span><span class="line"><span class="cl"><span class="n">docs</span> <span class="o">=</span> <span class="p">[</span><span class="n">yaml</span><span class="o">.</span><span class="n">safe_load</span><span class="p">(</span><span class="n">header</span><span class="p">),</span> <span class="n">text</span><span class="p">]</span>
</span></span></code></pre></div><p>So an example file would look like:</p>
<div class="highlight"><pre tabindex="0" class="chroma"><code class="language-yaml" data-lang="yaml"><span class="line"><span class="cl"><span class="nn">---</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">title</span><span class="p">:</span><span class="w"> </span><span class="s2">&#34;Post Title&#34;</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">date</span><span class="p">:</span><span class="w"> </span><span class="ld">2020-07-09</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nt">tags</span><span class="p">:</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span>- <span class="l">tag1</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w">    </span>- <span class="l">tag2</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="nn">---</span><span class="w">
</span></span></span><span class="line"><span class="cl"><span class="w"></span><span class="l">Post *content* **goes** [here](#)</span><span class="w">
</span></span></span></code></pre></div><h3 id="gathering-the-corpus">Gathering The Corpus</h3>
<p>As it turns out, I wrote a lot of posts. While I don&rsquo;t think I&rsquo;m at the level of a daily blogger, I was a very directed technical person who only took time to write a blog post when I felt the need to broadcast something to the world. So having a collection of over 300 posts feels impressive to me; sure, a vast majority of them are small and pointless posts, but good for me.</p>
<p>The next issue is that a lot of these posts were in various formats: B2, B2evolution, Mephisto, WordPress, and archive.org stored only one type of output: HTML. Converting these posts is all manual labour: find the post on archive.org, copy and paste the formatted text, and correct the formatting in Visual Studio Code.</p>
<p>So far this process is ongoing; I&rsquo;ve done 3 years of posts and I&rsquo;ve still got 10 years more&hellip;</p>
<h3 id="reviewing-my-old-posts">Reviewing My Old Posts</h3>
<p>The next job would be to review the posts and see what is applicable and acceptable for the modern internet. 2003-2007 was a strange and weird time online: most of the big social media sites weren&rsquo;t around, and people felt safe posting far too much information. Everyone had their guard down, and blogs were usually only seen by close friends and family in a person&rsquo;s &ldquo;bubble&rdquo;.</p>
<p>For example, I purchased an Apple PowerBook in 2003, and for some reason I felt the need to share the tracking number of my delivery online for people to see where it was up to. Today that&rsquo;d be stupid, really stupid. I think if you did that now on social media someone would call up the courier and re-route your parcel.</p>
<p>Another &ldquo;elephant in the room&rdquo; is the direct posting of my inane mental drivel from the angst-filled years of my late teens. I was stupid, young, and not really thinking about what I posted and why. In the end I actually lost my job due to a post I made on that blog, and today some of those posts would be classified as hateful. Thankfully I&rsquo;ve grown and I&rsquo;m not the same person as back then, so these posts will be confined to this post archive and archive.org forever, never to be published again, stored there as a reminder to myself.</p>
<h3 id="epilogue">Epilogue</h3>
<p>Gathering my posts has been an interesting exercise. While it&rsquo;s a lot of up-front work, I&rsquo;ll hopefully be able to avoid repeating it by sticking to the Hugo/Jekyll mixed YAML/Markdown format for all future posts. At least now I&rsquo;ll have a <em>corpus</em> of my output in one place that I can take to any new re-design or site that takes my fancy.</p>
<p>Also, from a <a href="https://gettingthingsdone.com/">GTD</a> perspective this feels like one long review of my past: it has allowed me to see some of my actions in a new light, rediscover old hobbies I&rsquo;d long since forgotten, and it gave me some new aims for the future.</p>
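<p>Writing a post back out in the same hybrid format is just as short. A minimal sketch (the <code>write_post</code> helper is my own illustration, not part of any existing tooling):</p>

```python
import yaml

def write_post(path, metadata, body):
    """Write a post as a YAML header followed by Markdown content."""
    with open(path, 'w') as fobj:
        fobj.write('---\n')
        # default_flow_style=False keeps the block style shown in the example above
        yaml.safe_dump(metadata, fobj, default_flow_style=False)
        fobj.write('---\n')
        fobj.write(body)

write_post('example.md',
           {'title': 'Post Title', 'date': '2020-07-09', 'tags': ['tag1', 'tag2']},
           'Post *content* **goes** [here](#)\n')
```

<p>Round-tripping is then just the earlier parsing snippet in reverse.</p>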
]]></content></item><item><title>Maildir vs Mbox</title><link>https://nikdoof.com/posts/2020/maildir-vs-mbox/</link><pubDate>Mon, 06 Jul 2020 23:20:38 +0100</pubDate><guid>https://nikdoof.com/posts/2020/maildir-vs-mbox/</guid><description>Initially I had setup dimension.sh to use Maildir for its user mail store, as this was a preference of mine for all my dedicated servers over the years. In my view mbox was the legacy format doomed to be left behind while the superior Maildir ate its lunch and got all the newest, hottest tools wrote for it.
The problem is when you&amp;rsquo;re trying to setup a pubnix system with a legacy feel, it doesn&amp;rsquo;t really feel right excluding some tools that people want to use.</description><content type="html"><![CDATA[<p>Initially I had set up <a href="https://dimension.sh">dimension.sh</a> to use Maildir for its user mail store, as this had been my preference for all my dedicated servers over the years. In my view <code>mbox</code> was the legacy format, doomed to be left behind while the superior <code>Maildir</code> ate its lunch and got all the newest, hottest tools written for it.</p>
<p>The problem is that when you&rsquo;re trying to set up a pubnix system with a legacy feel, it doesn&rsquo;t really feel right excluding some tools that people want to use. For example, using <code>Maildir</code> restricted the usable mail clients essentially down to <code>mutt</code>, which can be a bit of a beast to use at the best of times. <a href="https://dimension.sh/~gohan">~gohan</a> asked for <code>alpine</code> to be installed as it was his preferred client, and the question of why I&rsquo;d set up <code>Maildir</code> came to the forefront again.</p>
<p>Thankfully a patched version of <code>alpine</code> is available that supports <code>Maildir</code>, so I&rsquo;ve compiled it and placed it on the Dimension RPMs repository as <code>alpine-maildir</code>, for anyone interested.</p>
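<p>The practical difference between the two formats is easy to sketch: <code>mbox</code> keeps a whole mailbox in one file, with messages separated by &ldquo;From &rdquo; lines, while <code>Maildir</code> keeps one file per message across <code>tmp/</code>, <code>new/</code>, and <code>cur/</code> directories. A minimal illustration of Maildir-style delivery (a sketch only, not dimension.sh&rsquo;s actual mail setup):</p>

```python
import os
import tempfile

def deliver(maildir, name, message):
    """Deliver a message the Maildir way: write into tmp/, then rename into new/."""
    for sub in ('tmp', 'new', 'cur'):
        os.makedirs(os.path.join(maildir, sub), exist_ok=True)
    tmp_path = os.path.join(maildir, 'tmp', name)
    with open(tmp_path, 'w') as fobj:
        fobj.write(message)
    new_path = os.path.join(maildir, 'new', name)
    # rename() is atomic on the same filesystem, so no file locking is needed
    os.rename(tmp_path, new_path)
    return new_path

maildir = os.path.join(tempfile.mkdtemp(), 'Maildir')
deliver(maildir, '1.msg', 'Subject: hello\n\nHi!\n')
```

<p>That lock-free delivery is a large part of why <code>Maildir</code> gained favour over <code>mbox</code> on busy servers, even if some classic clients never learned to read it.</p>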
]]></content></item><item><title>Dimension RPMs Repository</title><link>https://nikdoof.com/posts/2020/dimension-rpms-repository/</link><pubDate>Sat, 04 Jul 2020 17:43:17 +0100</pubDate><guid>https://nikdoof.com/posts/2020/dimension-rpms-repository/</guid><description>While I was in the progress of setting up dimension.sh I slowly came to the realisation that the distribution I select (CentOS 8) didn&amp;rsquo;t have coverage for a lot of the popular utilities that are required on a pubnix system. Thankfully the EPEL exists and while most packages were not built for CentOS 8 the source RPMs were available to download and build.
But, we did encounter a few issues with some other packages, for example efingerd and gophernicus, these are available on most Debian distributions but not for RHEL base ones.</description><content type="html"><![CDATA[<p>While I was in the process of setting up <a href="https://dimension.sh">dimension.sh</a> I slowly came to the realisation that the distribution I selected (CentOS 8) didn&rsquo;t have coverage for a lot of the popular utilities that are required on a pubnix system. Thankfully EPEL exists, and while most packages were not built for CentOS 8, the source RPMs were available to download and build.</p>
<p>But we did encounter a few issues with some other packages, for example <code>efingerd</code> and <code>gophernicus</code>; these are available in most Debian distributions but not for RHEL-based ones. So I took the time to create some package SPEC files and build the RPMs for CentOS 8.</p>
<p>These SPECs are now available on <a href="https://github.com/dimension-sh/dimension-rpms/">GitHub</a>, and the resulting RPMs are available from a YUM repository hosted on <a href="https://dimension-sh.github.io/dimension-rpms">GitHub Pages</a>. To use the YUM repo, all you need to do is either manually download the <a href="http://dimension-sh.github.io/dimension-rpms/dimension-rpms.repo">.repo file</a> or run the following:</p>
<pre tabindex="0"><code>yum-config-manager --add-repo http://dimension-sh.github.io/dimension-rpms/dimension-rpms.repo
</code></pre><p>At the moment I&rsquo;m not planning on building for RHEL/CentOS 7, but the SPECs should work for them and it&rsquo;s relatively easy to build the repository manually.</p>
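<p>For reference, a <code>.repo</code> file of this kind is just a small INI fragment along these lines (hypothetical contents; the actual file served from the repository is authoritative):</p>

```ini
[dimension-rpms]
name=Dimension RPMs
baseurl=https://dimension-sh.github.io/dimension-rpms/
enabled=1
gpgcheck=0
```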
]]></content></item><item><title>Fixing VMware ESXi 6.5 Upgrade Issues</title><link>https://nikdoof.com/posts/2016/vmware-65-upgrade-issues/</link><pubDate>Mon, 21 Nov 2016 00:00:00 +0000</pubDate><guid>https://nikdoof.com/posts/2016/vmware-65-upgrade-issues/</guid><description>A few days ago VMWare released vSphere 6.5, and in it came a raft of improvements to vCenter and other fluffy features that everyone loves. I&amp;rsquo;ve been running a small dual host setup at home for a while now, and while not in anyway a &amp;ldquo;real production environment&amp;rdquo; its been the host of a lot of household services, most notably (for my other half) a Plex server. Unfortunately everything didn&amp;rsquo;t go to plan, my lab host (Obelisk) took the update without issue being managed by the now embedded update manager in vCenter, my other host (Anshar) didn&amp;rsquo;t take to it at all.</description><content type="html"><![CDATA[<p>A few days ago VMware released vSphere 6.5, and with it came a raft of improvements to vCenter and other fluffy features that everyone loves. I&rsquo;ve been running a small dual-host setup at home for a while now, and while not in any way a &ldquo;real production environment&rdquo;, it&rsquo;s been the host of a lot of household services, most notably (for my other half) a Plex server. Unfortunately, not everything went to plan: my lab host (Obelisk) took the update without issue, being managed by the update manager now embedded in vCenter, but my other host (Anshar) didn&rsquo;t take to it at all.</p>
<p>The error I encountered was &ldquo;<strong>Cannot run upgrade script on host</strong>&rdquo;, a lovely generic error which had me scrabbling around inside the ESXi logs to find the solution. It turns out that at one time or another I had put the USB stick with the ESXi install into my Mac, which in turn sprayed a collection of &ldquo;.Spotlight-V100&rdquo;, &ldquo;.fseventd&rdquo;, and various other Mac-specific files into the local datastore and various critical folders in the filesystem. Thankfully the host still booted, so I was able to resolve it.</p>
<ul>
<li>Enable SSH on your ESXi host</li>
<li>Run <code>find -name &quot;.Spotlight-V100&quot; -type d -exec rm -rf {} \;</code></li>
<li>Run <code>find -name &quot;.Trashes&quot; -type d -exec rm -rf {} \;</code></li>
<li>Run <code>find -name &quot;.fseventd&quot; -type d -exec rm -rf {} \;</code></li>
<li>Re-run the Upgrade</li>
</ul>
<p>Now this should have all worked, and the logs indicated it wasn&rsquo;t failing on any silly parts, but again I was hit with the &ldquo;Unable to run update script&rdquo; error. Further digging was required.</p>
<p>It turns out that VUM writes a very detailed log of exactly what it is doing to <code>/var/log/vua.log</code>, and this should be your first port of call for debugging any issues. My log indicated that it expected 6.5 to already be installed, and when comparing the list of VIBs to update it was extremely confused as to why everything was out of date. It seems that ESXi depends on a system called &ldquo;locker&rdquo; to store all its package information, and one of the first VIBs to be updated includes the updated locker files. Somehow I had to revert these files back to the 6.0 versions. VMware itself seems to recommend copying over the files from a working host, which wasn&rsquo;t possible in my case as the other host was already on 6.5. So I held my breath and did the following:</p>
<ul>
<li>Remove tools-light using <code>esxcli software vib remove -n tools-light</code></li>
<li>Install the 6.0 version using <code>esxcli software profile update -p ESXi-6.0.0-20161004001-standard -d https://hostupdate.vmware.com/software/VUM/PRODUCTION/main/vmw-depot-index.xml</code></li>
</ul>
<p>&hellip;and thankfully it completed without issue. Running VUM from then on updated to 6.5 without a problem. Every situation is different, but looking at the logs can really give you some insight into what is going on, or you can run the installer ISO directly and watch it spit out the specific issue it&rsquo;s hitting. Here are some reference links for working out what is wrong:</p>
<ul>
<li><a href="https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&amp;cmd=displayKC&amp;externalId=2007163">VMware KB - Upgrading a VMware ESXI host fails with the error &ldquo;Cannot run upgrade script on host&rdquo;</a></li>
<li><a href="https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&amp;cmd=displayKC&amp;externalId=2030665">VMware KB - &ldquo;The host returns esxupdate error code:15&rdquo;</a></li>
<li><a href="https://www.viktorious.nl/2012/09/18/esxi-5-1-upgrade-fails-with-the-error-cannot-execute-upgrade-script-on-host/">viktorious.nl - ESXi 5.1 upgrade fails with the error &ldquo;Cannot run upgrade script on host&rdquo;</a></li>
<li><a href="http://virtuallyhyper.com/2012/09/esxi-host-fails-to-upgrade-to-5-1-with-update-manager/">virtuallyhyper - ESXi host fails to upgrade to 5.1 with Update Manager</a></li>
</ul>
]]></content></item><item><title>Installing VMware vSphere CLI 6.0 on Debian</title><link>https://nikdoof.com/posts/2016/vmware-cli-on-debian-8/</link><pubDate>Sun, 03 Apr 2016 00:00:00 +0000</pubDate><guid>https://nikdoof.com/posts/2016/vmware-cli-on-debian-8/</guid><description>In an attempt to improve monitoring on my ESX system, I&amp;rsquo;ve started to poke around with a few Munin plugins which look interesting. The biggest roadblock was the requirement to have the VMware vSphere CLI installed. Unfortunately it doesn&amp;rsquo;t seem to be a simple install-and-forget task, as, like most commercial software companies, they&amp;rsquo;re yet to sign up to the RPM/dpkg route for distributing their software.</description><content type="html"><![CDATA[<p>In an attempt to improve monitoring on my ESX system, I&rsquo;ve started to poke around with a few Munin plugins which look interesting. The biggest roadblock was the requirement to have the VMware vSphere CLI installed. Unfortunately it doesn&rsquo;t seem to be a simple install-and-forget task, as, like most commercial software companies, they&rsquo;re yet to sign up to the RPM/dpkg route for distributing their software.</p>
<p>Thankfully, after a while of Googling and a little experimentation I found the following magic bullet:</p>
<pre><code># apt-get install libxml-libxml-perl perl-doc libssl-dev e2fsprogs libarchive-zip-perl libcrypt-ssleay-perl libclass-methodmaker-perl libdata-dump-perl libsoap-lite-perl libdatetime-format-iso8601-perl

# echo &quot;ubuntu&quot; &gt; /etc/tmp-release
# export http_proxy=
# export ftp_proxy=
</code></pre>
<p>This works for Debian 8 (Jessie), and it&rsquo;s been reported to work on Debian 7 as well.</p>
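<p>If the installer still refuses to run, it&rsquo;s usually one of the Perl modules failing to load. A quick hedged sketch to check which of the prerequisites are actually available; <code>has_perl_mod</code> is just a helper name made up for this post, and the module list is taken from the <code>apt-get</code> line above:</p>
<pre><code>has_perl_mod() { perl -M"$1" -e 1 2>/dev/null; }

for mod in XML::LibXML Crypt::SSLeay Class::MethodMaker Data::Dump SOAP::Lite DateTime::Format::ISO8601; do
  if has_perl_mod "$mod"; then
    echo "ok      $mod"
  else
    echo "MISSING $mod"
  fi
done
</code></pre>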
<p><em>P.S. VMware: no, /usr/bin isn&rsquo;t a sane default for installing your software into!</em></p>
]]></content></item><item><title>Upgrading a Google GB-7007 / U1 firmware</title><link>https://nikdoof.com/posts/2016/flashing-google-gb-7007/</link><pubDate>Sun, 03 Apr 2016 00:00:00 +0000</pubDate><guid>https://nikdoof.com/posts/2016/flashing-google-gb-7007/</guid><description>For many years Google has produced a range of &amp;ldquo;Search Appliances&amp;rdquo;; the idea is that you have a miniature Google search engine within your business that is able to index all your internal files and make them available in a nice interface that everyone is used to. Over the years they&amp;rsquo;ve produced many iterations of the product, with the most recent ones being essentially rebadged Dell hardware.
The &amp;ldquo;current&amp;rdquo; generation (and I use that loosely) is a rebadged Dell R710 with the bare minimum of fitted options:</description><content type="html"><![CDATA[<p>For many years Google has produced a range of &ldquo;Search Appliances&rdquo;; the idea is that you have a miniature Google search engine within your business that is able to index all your internal files and make them available in a nice interface that everyone is used to. Over the years they&rsquo;ve produced many iterations of the product, with the most recent ones being essentially rebadged Dell hardware.</p>
<p>The &ldquo;current&rdquo; generation (and I use that loosely) is a rebadged Dell R710 with the bare minimum of fitted options:</p>
<ul>
<li>2 x Intel Xeon E5620</li>
<li>48GB ECC RAM</li>
<li>PERC H700 with 8 x 2.5&quot; SAS drives</li>
<li>iDRAC Express</li>
</ul>
<p>Extra niceties have been left out to cut costs, so no internal SD card reader and no CD/DVD drive. The hardware was generally given away free with a license, so you can find these devices popping up on the market every so often when people&rsquo;s licenses expire and they don&rsquo;t want the hardware filling up their racks.</p>
<p>Google actually publishes some quick guidelines on <a href="https://support.google.com/gsa/answer/6055109?hl=en">repurposing the hardware after the end of the license</a>, which should be good enough for the vast majority of people, but the BIOS is out of date, it&rsquo;s still branded with the Google Search Appliance boot screen, and finding updates for the BIOS is near impossible. If you want to run ESXi with all the fluffy bits it can be a bit of a pain.</p>
<p>But this is an R710; can&rsquo;t we just use Dell&rsquo;s version?</p>
<p>Actually, yes you can. On the Dell R710 <a href="http://www.dell.com/support/home/us/en/04/product-support/product/poweredge-r710/drivers">support page</a>, grab the <a href="http://www.dell.com/support/home/us/en/04/Drivers/DriversDetails?driverId=4HKX2&amp;fileId=3287625347&amp;osCode=WS8R2&amp;productCode=poweredge-r710&amp;languageCode=en&amp;categoryId=BI">latest BIOS package</a> in the &ldquo;Non-Packaged&rdquo; format and put it on a bootable USB stick with FreeDOS on it. Then just follow these steps:</p>
<ol>
<li>Pop the lid off your GB-7007 and disable the BIOS password (check on the back of the lid for details)</li>
<li>Boot the system and enter the BIOS, change the boot order to USB first.</li>
<li>Reboot with the USB stick in one of the ports, wait until you hit the FreeDOS prompt</li>
<li>From the prompt run <code>R710-060400C.exe /forcetype</code></li>
</ol>
<p>OK, that last item may look a little scary. The update process has a check to see whether the system you&rsquo;re flashing the BIOS onto is the intended target system; this appliance identifies itself as a Google Search Appliance, so it will always fail that check even though the hardware is identical to the R710. The <code>/forcetype</code> option disables the check and forces the BIOS to install.</p>
<p>After a minute or two your system will reboot and you&rsquo;ll get the normal Dell boot logo and options. Congratulations: your Google Search Appliance is now a Dell R710 sporting a lovely yellow case.</p>
]]></content></item><item><title>Upgrading the firmware on a HP ProCurve 2824</title><link>https://nikdoof.com/posts/2016/upgrading-hp-procurve-2824/</link><pubDate>Sat, 27 Feb 2016 00:00:00 +0000</pubDate><guid>https://nikdoof.com/posts/2016/upgrading-hp-procurve-2824/</guid><description>As it turns out, the &amp;ldquo;new&amp;rdquo; switch I&amp;rsquo;ve acquired was very out of date with regards to firmware. A few bugs have been fixed and some silly Java problems have been resolved in the Web UI, so it&amp;rsquo;s worth taking the time to update it.
First of all, check what firmware and boot ROM your switch is running using the show flash command on the CLI:
sw3# show flash Image Size(Bytes) Date Version ----- ---------- -------- ------- Primary Image : 3003952 12/21/05 I.</description><content type="html"><![CDATA[<p>As it turns out, the &ldquo;new&rdquo; switch I&rsquo;ve acquired was very out of date with regards to firmware. A few bugs have been fixed and some silly Java problems have been resolved in the Web UI, so it&rsquo;s worth taking the time to update it.</p>
<p>First of all, check what firmware and boot ROM your switch is running using the <code>show flash</code> command on the CLI:</p>
<pre><code>sw3# show flash
Image           Size(Bytes)   Date   Version
-----           ----------  -------- -------
Primary Image   : 3003952   12/21/05 I.08.87
Secondary Image : 3003952   12/21/05 I.08.87
Boot Rom Version: I.08.07
Current Boot    : Primary
</code></pre>
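<p>If you&rsquo;re doing this across a few switches, it can be handy to scrape the version fields out of the captured output. A sketch against the output above; the <code>awk</code> patterns are assumptions based purely on that layout:</p>
<pre><code>show_flash=$(printf '%s\n' \
  'Primary Image   : 3003952   12/21/05 I.08.87' \
  'Secondary Image : 3003952   12/21/05 I.08.87' \
  'Boot Rom Version: I.08.07' \
  'Current Boot    : Primary')

# the version is always the last field on its line
primary=$(printf '%s\n' "$show_flash" | awk '/^Primary Image/ {print $NF}')
bootrom=$(printf '%s\n' "$show_flash" | awk '/^Boot Rom Version/ {print $NF}')
echo "primary=$primary bootrom=$bootrom"
</code></pre>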
<p>All firmware versions from I.08.07 onwards need the I.08.07 Boot ROM, so you&rsquo;ll need to flash up to this version first. Thankfully HP provide that specific version on their website to download. Follow the exact same steps as below for I.08.07, then repeat for whatever version you&rsquo;re upgrading to.</p>
<p>To get the firmware onto the switch we use a TFTP server. Setting one up is a little out of scope for this article, but there are plenty of free and open source options; I&rsquo;m using my local pfSense gateway&rsquo;s TFTP server. I&rsquo;ve uploaded the <code>I_10_107.swi</code> firmware file to the TFTP server, and from the switch&rsquo;s CLI I run the following:</p>
<pre><code>sw3# copy tftp flash 10.1.1.1 I_10_107.swi secondary
The Secondary OS Image will be deleted, continue [y/n]?  y
03261K
</code></pre>
<p>After a few seconds you&rsquo;ll be back at the prompt. To check everything has worked as expected, run <code>show flash</code> again:</p>
<pre><code>sw3# show flash
Image           Size(Bytes)   Date   Version
-----           ----------  -------- -------
Primary Image   : 3003952   12/21/05 I.08.87
Secondary Image : 3428242   08/24/15 I.10.107
Boot Rom Version: I.08.07
Current Boot    : Primary
</code></pre>
<p>All you need to do now is reboot the switch into the new firmware, check everything works, and then flash the image over to the primary flash storage:</p>
<pre><code>sw3# boot system flash secondary
Device will be rebooted, do you want to continue [y/n]?
</code></pre>
<p>Once the system is up and working, use <code>show flash</code> again to check it&rsquo;s booted from the secondary area.</p>
<pre><code>sw3# show flash
Image           Size(Bytes)   Date   Version
-----           ----------  -------- -------
Primary Image   : 3003952   12/21/05 I.08.87
Secondary Image : 3428242   08/24/15 I.10.107
Boot Rom Version: I.08.07
Current Boot    : Secondary
</code></pre>
<p>And if everything is working as expected, flash the firmware over to the primary image in exactly the same way as before:</p>
<pre><code>sw3# copy tftp flash 10.1.1.1 I_10_107.swi primary
The Primary OS Image will be deleted, continue [y/n]?  y
03261K
</code></pre>
<p>For the final (optional) step, switch back to the primary image:</p>
<pre><code>sw3# boot system flash primary
Device will be rebooted, do you want to continue [y/n]?
</code></pre>
<p>And you&rsquo;re all done.</p>
]]></content></item><item><title>Resetting a HP ProCurve 2824</title><link>https://nikdoof.com/posts/2016/resetting-hp-procurve-2824/</link><pubDate>Fri, 26 Feb 2016 00:00:00 +0000</pubDate><guid>https://nikdoof.com/posts/2016/resetting-hp-procurve-2824/</guid><description>Another day, another switch. This time I&amp;rsquo;ve bought a second HP ProCurve 2824; they&amp;rsquo;re solid and reliable, and with a quick replacement of the fans they&amp;rsquo;re damn near silent. Throw in the full Layer 2 feature set and a basic Layer 3 (named by HP as L3-lite) and it&amp;rsquo;s a workhorse of a switch suited to small environments or edge duty on larger networks.
The main problem is that most of these ex-corporate switches come pre-configured with some setup you neither know nor care about; thankfully, resetting this switch is amazingly easy.</description><content type="html"><![CDATA[<p>Another day, another switch. This time I&rsquo;ve bought a second HP ProCurve 2824; they&rsquo;re solid and reliable, and with a quick replacement of the fans they&rsquo;re damn near silent. Throw in the full Layer 2 feature set and a basic Layer 3 (named by HP as L3-lite) and it&rsquo;s a workhorse of a switch suited to small environments or edge duty on larger networks.</p>
<p>The main problem is that most of these ex-corporate switches come pre-configured with some setup you neither know nor care about; thankfully, resetting this switch is amazingly easy.</p>
<ol>
<li>With the power on, poke the <code>Reset</code> and <code>Clear</code> buttons at the same time with whatever pokey devices you can find.</li>
<li>Release the <code>Reset</code> button</li>
<li>Wait until the Test LED starts blinking</li>
<li>Release the <code>Clear</code> button</li>
</ol>
<p>Within a few seconds you&rsquo;ll have a factory-default switch; grab your straight-through serial cable and have a play with the CLI.</p>
<p>It&rsquo;s worth taking the time to get the firmware up to date; remember to check the change logs and the documentation, as some interim steps may be needed to bring it up to the current version. The software for this switch has progressed quite a bit: it&rsquo;s still the same horrible Java-based Web UI, but little features introduced here and there really help out.</p>
]]></content></item><item><title>Broken OpenVPN IPv4 routing with iOS9 and IPv6</title><link>https://nikdoof.com/posts/2016/fixing-openvpn-ipv6/</link><pubDate>Wed, 10 Feb 2016 00:00:00 +0000</pubDate><guid>https://nikdoof.com/posts/2016/fixing-openvpn-ipv6/</guid><description>After finally taking the time to get tunnelled IPv6 into the homelab via Hurricane Electric I thought it would be nice to extend out the routing to my VPN clients; after all, they connect in and appear like local devices to the rest of the network, so why not?
What I thought was a simple configuration change has been puzzling me for the last few days; what I didn&amp;rsquo;t realise is that after switching on IPv6 in the OpenVPN server, IPv4 traffic wasn&amp;rsquo;t being correctly routed via the VPN.</description><content type="html"><![CDATA[<p>After finally taking the time to get tunnelled IPv6 into the homelab via <a href="https://tunnelbroker.net">Hurricane Electric</a> I thought it would be nice to extend out the routing to my VPN clients; after all, they connect in and appear like local devices to the rest of the network, so why not?</p>
<p>What I thought was a simple configuration change has been puzzling me for the last few days; what I didn&rsquo;t realise is that after switching on IPv6 in the OpenVPN server, IPv4 traffic wasn&rsquo;t being correctly routed via the VPN. It turns out a small issue in either the OpenVPN client, iOS, or something in between has broken the configuration, but thankfully it only needs a small fix.</p>
<p>The solution finally came from the OpenVPN bug tracker, ticket <a href="http://community.openvpn.net/openvpn/ticket/614">614</a>:</p>
<blockquote>
<p>IPv4 routing on iOS 9 is broken if IPv6 is enabled inside the tunnel.
The tests were done with tun-ipv6 and redirect-gateway activated and all the IPv4 traffic bypasses VPN gateway, while IPv6 works fine.
Works as expected without tun-ipv6. Doesn&rsquo;t work with tun-ipv6 but no IPv6 address.</p>
</blockquote>
<p>Exactly what I was experiencing. Thankfully <code>fkooman</code> came across an entry in the <a href="https://docs.openvpn.net/docs/openvpn-connect/openvpn-connect-ios-faq.html">FAQ</a> which mentioned an undocumented option called <code>redirect-gateway ipv6</code>. Injecting this option into the OpenVPN server configuration resolves the routing issues.</p>
<p>On pfSense you just need to add <code>push &quot;redirect-gateway ipv6&quot;</code> into the &ldquo;Advanced Options&rdquo; section of the OpenVPN server configuration.</p>
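<p>For a plain OpenVPN server config (rather than pfSense), the same fix means pushing the option alongside the existing redirect. A minimal sketch of the relevant server lines, assuming a routed tun setup; the <code>2001:db8:</code> prefix is just a documentation placeholder:</p>
<pre><code># hand out IPv6 addresses inside the tunnel
server-ipv6 2001:db8:1:2::/64

# push both redirect-gateway flavours; without the ipv6 variant,
# iOS 9 clients stop routing IPv4 through the tunnel
push "redirect-gateway def1"
push "redirect-gateway ipv6"
</code></pre>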
]]></content></item><item><title>Miniflux - Easy, self-hosted RSS</title><link>https://nikdoof.com/posts/2016/miniflux-easy-self-hosted-rss/</link><pubDate>Mon, 08 Feb 2016 00:00:00 +0000</pubDate><guid>https://nikdoof.com/posts/2016/miniflux-easy-self-hosted-rss/</guid><description>Since the demise of Google Reader a lot of new tools and sites have tried to take over the mantle as the de-facto RSS reader for the masses. The biggest (to my understanding) is Feedly, which used the shutdown to push their product. Unfortunately, over time the investment in the &amp;ldquo;free&amp;rdquo; Feedly seems to have slowly slipped away in favour of their Pro offering, which isn&amp;rsquo;t surprising for any company wanting to turn a profit.</description><content type="html"><![CDATA[<p>Since the demise of Google Reader a lot of new tools and sites have tried to take over the mantle as the de-facto RSS reader for the masses. The biggest (to my understanding) is Feedly, which used the shutdown to push their product. Unfortunately, over time the investment in the &ldquo;free&rdquo; Feedly seems to have slowly slipped away in favour of their Pro offering, which isn&rsquo;t surprising for any company wanting to turn a profit. This issue seems to be replicated across all the hosted providers who are trying to make a profit out of a service Google had supplied for free, and old stalwarts like me still struggle with the idea of paying $3-$7 a month for aggregating RSS.</p>
<p>With the aim of taking matters into my own hands I decided to hunt around for an open source solution that I could self-host; I&rsquo;m already paying for a dedicated server, so why not use that to host it?</p>
<p>Thankfully, it seems that a lot of other people had the same issue, and a large list of <a href="https://github.com/Kickball/awesome-selfhosted#feed-readers">open source solutions</a> has popped up. The interesting ones support the &ldquo;Fever&rdquo; API, a simple method of exposing these feed readers to mobile and desktop clients without any quirky reader-dependent applications. My favourite RSS application, <a href="http://reederapp.com/mac/">Reeder</a>, supports this API, which really helped with the decision of what solution I needed.</p>
<p><a href="https://miniflux.net">Miniflux</a> seems to be the perfect balance between function and simplicity: it can be installed damn near anywhere as it only uses PHP and a few standard modules, and it supports importing and exporting OPML files plus the Fever API, allowing my desktop and mobile clients to keep in sync with no extra work.</p>
<p>Installation couldn&rsquo;t be <a href="https://miniflux.net/documentation/installation">simpler</a>: check out the repo, move it to a folder of your choice, and throw in an Nginx configuration:</p>
<pre><code>server {
  listen 80;
  server_name rss.domain.com;
  root /home/user/www/rss.domain.com/;
  index index.php index.html index.htm;

  # the following line is responsible for clean URLs
  try_files $uri $uri/ /index.php?$args;

  # serve static files directly
  location ~* ^.+\.(jpg|jpeg|gif|css|png|js|ico|html|xml|txt)$ {
    access_log        off;
    expires           max;
  }

  location ^~ /data/ {
    deny all;
  }

  location ~ \.php$ {
    # Security: must set cgi.fix_pathinfo to 0 in php.ini!
    fastcgi_split_path_info ^(.+\.php)(/.+)$;
    fastcgi_pass 127.0.0.1:8812;
    fastcgi_index index.php;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    fastcgi_param PATH_INFO $fastcgi_path_info;
    include /etc/nginx/fastcgi_params;
  }
}
</code></pre>
<p>Done. Your Fever API endpoint is available at <code>/fever/</code>, and the username and password can be configured in the application&rsquo;s UI. Everything is stored in SQLite, so it&rsquo;s easy to back up and move around.</p>
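<p>One thing worth knowing when a client won&rsquo;t connect: Fever-compatible clients authenticate with an <code>api_key</code> that is the MD5 hash of <code>email:password</code>. A sketch for computing it by hand; the credentials and the curl URL below are made up for illustration:</p>
<pre><code># MD5 of "email:password", as Fever clients send it
fever_key() { printf '%s:%s' "$1" "$2" | md5sum | awk '{print $1}'; }

key=$(fever_key 'user@example.com' 'secret')
echo "$key"
# then poke the endpoint with something like:
#   curl -d "api_key=$key" 'https://rss.domain.com/fever/?api'
</code></pre>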
<p>If you&rsquo;re looking for something that&rsquo;s simple and works, I&rsquo;d recommend giving it a try!</p>
]]></content></item><item><title>Homelab Puppet</title><link>https://nikdoof.com/posts/2016/homelab-puppet/</link><pubDate>Wed, 03 Feb 2016 00:00:00 +0000</pubDate><guid>https://nikdoof.com/posts/2016/homelab-puppet/</guid><description>It might sound like using a nuclear weapon to swat a fly, but when you&amp;rsquo;re working with Puppet in your day job it can be really useful to have a test bench to fiddle with new ideas at home. After all, that&amp;rsquo;s why homelabs exist, right?
Puppet Enterprise comes with a free 10-node license as stock; for a small homelab it&amp;rsquo;s perfect for managing the configuration that applies to all systems: DNS, routing, SSH keys, you get the idea.</description><content type="html"><![CDATA[<p>It might sound like using a nuclear weapon to swat a fly, but when you&rsquo;re working with Puppet in your day job it can be really useful to have a test bench to fiddle with new ideas at home. After all, that&rsquo;s why homelabs exist, right?</p>
<p><a href="https://puppetlabs.com/download-puppet-enterprise">Puppet Enterprise</a> comes with a free 10-node license as stock; for a small homelab it&rsquo;s perfect for managing the configuration that applies to all systems: DNS, routing, SSH keys, you get the idea. Also, as my day job runs Puppet Open Source, it&rsquo;s great to test out the commercial version and get to know it before the inevitable upgrade where a lot more is at stake.</p>
<p>For my installation I went with CentOS 7 and a single-node setup; I use Code Manager to automatically deploy my configuration from a git repository I have stored in <a href="https://gogs.io">Gogs</a>, which, if you&rsquo;ve not seen it already, I highly suggest checking out. Agents are mostly Debian 8 with a sprinkle of CentOS 7 and RHEL 7 for my learning needs.</p>
<p>Here are some handy hints from my Puppet usage, both at work and at home:</p>
<h3 id="use-puppet-enterprise">Use Puppet Enterprise</h3>
<p>10 free nodes! Take advantage of it if you can. While open source Puppet is great, the installer and Console make Enterprise worth the $100/year/node just for the time saved fiddling with config.</p>
<h3 id="use-puppet-forge">Use Puppet Forge</h3>
<p>It might seem obvious, but a lot of places suffer from <a href="https://en.wikipedia.org/wiki/Not_invented_here">NIH</a> when it comes to Puppet and decide to re-write from scratch instead of expanding on an existing open source module. While the vast majority of modules I use are public on the Forge, I have slipped into a habit of quickly hacking together a profile for an application rather than writing a full module to share. In general, using the Forge will save you time, so take advantage of it.</p>
<h3 id="use-distro-packages">Use distro packages</h3>
<p>While you can grab x .tar.gz from y website, extract, run, copy files and such, save yourself the pain and use distribution packages whenever possible. Not only does it make for much easier installation and management, it also saves you a lot of time when it comes to upgrading.</p>
<h3 id="dont-aim-for-100-coverage">Don&rsquo;t aim for 100% coverage</h3>
<p>Trying to configure every part of a system with Puppet will burn you out quickly; cover the required elements and tick them off first. In my opinion Puppet shouldn&rsquo;t be handling assigning IPs to devices or managing file systems, but setting DNS, firewall rules, and package repositories is right up its street.</p>
<h3 id="things-break-so-check-your-config-first">Things break, so check your config first</h3>
<p>The <code>--noop</code> option is your friend. Make use of it to check that your shiny new config won&rsquo;t blow a hole in the side of your system due to a dodgy Hiera YAML file. In Puppet Enterprise you can even run this from the console.</p>
<p>If you have the infrastructure to spare, get a <a href="">Jenkins</a> system set up and lint/test that config before it hits a live system. If you want to get really fancy, have Jenkins auto-push to your production branch after testing, for that continuous-deployment feeling.</p>
<h3 id="puppet-enterprise-has-application-orchestration">Puppet Enterprise has Application Orchestration!</h3>
<p>While it&rsquo;s a recent development, I highly suggest you read the documentation and have a play with this. Hand-holding multi-system deployments is no longer needed!</p>
<p>I&rsquo;m sure people have a hundred and one other things to say, but I&rsquo;ll leave that for the experts&hellip;</p>
]]></content></item><item><title>The strange case of an OCZ Petrol SSD</title><link>https://nikdoof.com/posts/2016/the-strange-case-of-an-ocz-petrol-ssd/</link><pubDate>Sat, 30 Jan 2016 00:00:00 +0000</pubDate><guid>https://nikdoof.com/posts/2016/the-strange-case-of-an-ocz-petrol-ssd/</guid><description>A few years ago I took the risk and installed an SSD into my father&amp;rsquo;s PC. At the time his 300GB Seagate drive had failed in his stock Dell PC, just a touch outside of the warranty period, and in an attempt to keep the costs low I ended up picking a cheap SSD for him. The cheapest at the time was an OCZ Petrol 64GB. Only after a year or so did the horror stories about OCZ SSDs start appearing, with a lot of people experiencing failures after just a few weeks to months.</description><content type="html"><![CDATA[<p>A few years ago I took the risk and installed an SSD into my father&rsquo;s PC. At the time his 300GB Seagate drive had failed in his stock Dell PC, just a touch outside of the warranty period, and in an attempt to keep the costs low I ended up picking a cheap SSD for him. The cheapest at the time was an OCZ Petrol 64GB. Only after a year or so did the <a href="http://windowssecrets.com/forums/showthread.php/141651-High-failure-rate-of-OCZ-SSDs-yields-answers-that-raise-troubling-questions">horror stories</a> about OCZ SSDs start appearing, with a lot of people experiencing failures after just a few weeks to months. My father&rsquo;s SSD carried on chugging for a good few years, and died just a few weeks ago; not bad for a cursed brand&hellip;</p>
<p>The strange part was how it failed. Usually these SSDs just stopped working in every way and wouldn&rsquo;t even appear to the BIOS. In this instance it was still there, it still booted, and it got about half way through the Windows XP boot sequence before dying with an IO error BSOD. At the time I wrote the disk off as a complete failure: plugging it into another PC didn&rsquo;t work, a USB to SATA connector didn&rsquo;t work, and even when I did manage to get it recognised on a system it reported around 95% of the blocks on the device as bad. A new SSD was purchased and this one was forgotten about on my desk until I picked up a new <a href="http://www.amazon.co.uk/gp/product/B00HJZJI84/ref=as_li_tl?ie=UTF8&amp;camp=1634&amp;creative=6738&amp;creativeASIN=B00HJZJI84&amp;linkCode=as2&amp;tag=nikdoofnet-21">USB 3.0 to SATA cable</a> from Amazon today.</p>
<p>On a whim I decided to plug the cable into the drive, then into my Mac. OS X by default doesn&rsquo;t write to NTFS but can read it, and it turns out this reveals something very weird about this device. When operated in read-only mode, with no writes attempted, it works perfectly. This also matches what I was seeing in the PC: the boot loader and initial stages of Windows XP worked fine, but when it came to actually checking the disk and doing a write, the device locked solid.</p>
<p>So, if you have an OCZ Petrol that you need to recover data from, try getting a device that supports write blocking and give it a go.</p>
]]></content></item><item><title>Fixing CIFS/Samba Browse Speed on OSX</title><link>https://nikdoof.com/posts/2015/slow-samba-browsing-on-osx/</link><pubDate>Sun, 08 Nov 2015 00:00:00 +0000</pubDate><guid>https://nikdoof.com/posts/2015/slow-samba-browsing-on-osx/</guid><description>One thing that has always frustrated me on Mac OS X is the impossibly slow directory listing and browsing speed on CIFS/SMB shares. Apple&amp;rsquo;s devices, such as the Time Capsule, and OS X shares work perfectly, but anything running Samba has this amazingly slow response on any folder with more than 200 files.
Today I&amp;rsquo;ve finally been configuring my FreeNAS installation on my HP Gen8 Microserver, and after a good twenty or so minutes researching the issue I found a small post on the FreeNAS forums suggesting the following settings:</description><content type="html"><![CDATA[<p>One thing that has always frustrated me on Mac OS X is the impossibly slow directory listing and browsing speed on CIFS/SMB shares. Apple&rsquo;s devices, such as the Time Capsule, and OS X shares work perfectly, but anything running Samba has this amazingly slow response on any folder with more than 200 files.</p>
<p>Today I&rsquo;ve finally been configuring my FreeNAS installation on my HP Gen8 Microserver, and after a good twenty or so minutes researching the issue I found a small post on the FreeNAS forums suggesting the following settings:</p>
<pre><code>ea support = no
store dos attributes = no
</code></pre>
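<p>For anyone not on FreeNAS: these are global Samba options, so in a plain <code>smb.conf</code> they&rsquo;d sit under the <code>[global]</code> section (on FreeNAS they can go in the service&rsquo;s auxiliary parameters). A sketch:</p>
<pre><code>[global]
  ea support = no
  store dos attributes = no
</code></pre>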
<p>Boom: quickly added to the configuration files, and browsing now flies. Next is to try to push the overall transfer speed above 25MiB/sec.</p>
]]></content></item><item><title>Introducing the Home Lab</title><link>https://nikdoof.com/posts/2015/introducing-the-home-lab/</link><pubDate>Sun, 18 Oct 2015 00:00:00 +0000</pubDate><guid>https://nikdoof.com/posts/2015/introducing-the-home-lab/</guid><description>Driven by some techie mental disability and the thirst to understand more, I&amp;rsquo;ve slowly expanded my home network into a &amp;ldquo;homelab&amp;rdquo;. A few months ago I picked up a cheap HP ProCurve 2824 from eBay; it&amp;rsquo;s a great gigabit switch with Layer 2 and basic Layer 3 capabilities, and after a quick retrofit of the fans with some nice quiet Sunon Maglevs it&amp;rsquo;s been ticking over nicely as the core switch of my network.</description><content type="html"><![CDATA[<p>Driven by some techie mental disability and the thirst to understand more, I&rsquo;ve slowly expanded my home network into a &ldquo;homelab&rdquo;. A few months ago I picked up a cheap HP ProCurve 2824 from eBay; it&rsquo;s a great gigabit switch with Layer 2 and basic Layer 3 capabilities, and after a quick retrofit of the fans with some nice quiet Sunon Maglevs it&rsquo;s been ticking over nicely as the core switch of my network.</p>
<p>In addition to the new switch, I &ldquo;acquired&rdquo; Jo&rsquo;s PC to use as a VM host; with some extra network cards and a bit more memory it&rsquo;s now serving as a multifunction machine: pfSense, various lab VMs, and monitoring systems.</p>
<p>From time to time I&rsquo;ll be posting about my latest experiments, what I&rsquo;m learning, and how it&rsquo;s now presenting an even larger drain on the electricity than it was previously.</p>
]]></content></item><item><title>Resetting the TP-Link TL-SG3210</title><link>https://nikdoof.com/posts/2015/resetting-tp-link-tl-sg3210/</link><pubDate>Mon, 12 Oct 2015 00:00:00 +0000</pubDate><guid>https://nikdoof.com/posts/2015/resetting-tp-link-tl-sg3210/</guid><description>In the hunt to introduce VLANs across all segments of my home network I managed to pick up an L2 managed switch for a lot cheaper than I expected; the TL-SG3210 only offers the bare basics, but it&amp;rsquo;s enough to get some control over the last remnant of the unmanaged network hidden behind a powerline ethernet adapter. At £36 I couldn&amp;rsquo;t say no to it.
As expected, it all came pre-configured for the last user&amp;rsquo;s network, and this time I had no helpful IP sticker like the HP 2824 (I have no idea how I managed to reset that).</description><content type="html"><![CDATA[<p>In the hunt to introduce VLANs across all segments of my home network I managed to pick up an L2 managed switch for a lot cheaper than I expected; the TL-SG3210 only offers the bare basics, but it&rsquo;s enough to get some control over the last remnant of the unmanaged network hidden behind a powerline ethernet adapter. At £36 I couldn&rsquo;t say no to it.</p>
<p>As expected, it all came pre-configured for the last user&rsquo;s network, and this time I had no helpful IP sticker like the HP 2824 (I have no idea how I managed to reset that). This switch, however, had an RJ45 console port, which luckily responded to a standard Cisco rollover cable. Once you&rsquo;re all plugged in, all you need to do is:</p>
<ul>
<li>Set the port to 38400 baud, 8 data bits, No parity, 1 stop bit (8/N/1)</li>
<li>Reboot the device and, when prompted, hit CTRL+B</li>
<li>At the prompt, type <code>reset</code>, then <code>reset</code> again to confirm, and the device should reboot.</li>
</ul>
<p>Once alive again the switch should be available on <code>192.168.0.1</code> with <code>admin</code> as both the username and the password. I&rsquo;ve yet to really delve into the CLI of this device, but I&rsquo;d expect some nicer features to be tucked away in there; for the moment it&rsquo;s doing a fine job of splitting the dirty Murdoch device (Sky TV box) and the snooping LG TV off the main network.</p>
]]></content></item><item><title>DVB-T and SDR with the RTL2632</title><link>https://nikdoof.com/posts/2015/dvbt_and_sdr/</link><pubDate>Sun, 30 Aug 2015 21:24:00 +0100</pubDate><guid>https://nikdoof.com/posts/2015/dvbt_and_sdr/</guid><description>I spotted a Youtube video the other day that talked quickly about SDRs (Software Defined Radios) and how you can pick up one for $20, which is a massive difference from the £300-400 devices I spotted a few years ago. Of course, I decided there and then that i&amp;rsquo;d grab one to experiment with and searched Google for the mystical device. As it turns out its the Realtek RT26832 based devices which allow SDR type functionality, and while a lot of devices out there are higher than the magical $20 due to them being advertised as a SDR it was quite easy to find one of these generic DVB-T tuners with the right chipset on Amazon for a grand total of £9.</description><content type="html"><![CDATA[<p>I spotted a <a href="https://www.youtube.com/watch?v=3J7WoyKpMT4">Youtube video</a> the other day that talked quickly about SDRs (Software Defined Radios) and how you can pick up one for $20, which is a massive difference from the £300-400 devices I spotted a few years ago. Of course, I decided there and then that i&rsquo;d grab one to experiment with and searched Google for the mystical device. 
As it turns out, it&rsquo;s the Realtek RTL2832U-based devices that allow SDR-type functionality, and while a lot of devices out there cost more than the magical $20 due to <a href="http://www.amazon.co.uk/gp/product/B00P2UOU72/ref=as_li_tl?ie=UTF8&amp;camp=1634&amp;creative=6738&amp;creativeASIN=B00P2UOU72&amp;linkCode=as2&amp;tag=nikdoofnet-21">being advertised as an SDR</a>, it was quite easy to find one of these generic DVB-T tuners with the right chipset on <a href="http://www.amazon.co.uk/gp/product/B009VBUYA0/ref=as_li_tl?ie=UTF8&amp;camp=1634&amp;creative=6738&amp;creativeASIN=B009VBUYA0&amp;linkCode=as2&amp;tag=nikdoofnet-21">Amazon for a grand total of £9</a>. With the order being eligible under Amazon Prime, I ordered the item yesterday (Saturday) and it was delivered today (Sunday).</p>
<p>So, straight out of the box and into my Debian Jessie test system, and everything worked: no tweaking or hassle, within seconds I had a working DVB adapter, and I used the standard DVB tools to scan and create a channels.conf within a minute or so. My last experience with the LinuxDVB stack was around 2005-2006ish with MythTV, when the drivers &ldquo;sort of&rdquo; worked and everything was a little rough round the edges; it seems the last 10 years have really cleaned up the stack. With that in mind it&rsquo;s not really worth posting about getting the DVB-T tuner to work, because it just did&hellip;</p>
<p>SDR required a little extra work. I&rsquo;ve not spent a large amount of time trying to get the full toolset to work on Debian; the <code>rtl-sdr</code> toolset is available as a package in Jessie and can be easily installed. The biggest problem was that, because I was using my test system, I didn&rsquo;t have an X session running to run anything on. I got everything installed and spun up a <code>rtl_tcp</code> instance without much incident; the biggest roadblock was that you can&rsquo;t have the DVB kmod inserted at the same time as using the <code>rtl-sdr</code> package tools, but a quick <code>rmmod</code> and blacklist sorted that out, and the tools are very quick to point out exactly what needs to be done.</p>
<p>Instead of working on Linux I got everything up and running on my Macbook Pro running OS X Yosemite; while OS X doesn&rsquo;t have the full suite of tools available, a few good ones have been developed for the platform. I found that <a href="http://cubicsdr.com">CubicSDR</a> was by far the easiest to get rolling with: no messing with MacPorts or any other third-party packaging tools, just a DMG and a pre-packaged application. While it isn&rsquo;t as feature-complete as some of the other packages out there, it does cover the basics for poking around. Their <a href="https://github.com/cjcliffe/CubicSDR/blob/master/README.md">todo list</a> does look interesting, especially with the target of having digital demodulation built in.</p>
<p>Quick overview done, and I&rsquo;m now looking for a better antenna. While not being used as an SDR the stick itself will be happily serving as a DVB-T source for my Plex system using TVHeadEnd, and with a quick MCX-to-coax adapter you can have it plumbed into the household aerial without much issue.</p>
<p>[Update - 2015/08/31]</p>
<p>Regarding the links earlier in this post, it turns out that the two tuners are actually different: the Nooelec includes an improved tuner chip (the R820T2). It is worth investing the few pounds more for that version, as it&rsquo;s more sensitive and also includes a more stable crystal that won&rsquo;t require much adjustment.</p>
]]></content></item><item><title>Heroku and NextAction</title><link>https://nikdoof.com/posts/2015/heroku_and_nextaction/</link><pubDate>Sat, 29 Aug 2015 19:32:00 +0100</pubDate><guid>https://nikdoof.com/posts/2015/heroku_and_nextaction/</guid><description>A while ago it was announced that Heroku would be changing its price structure, after a few minutes with a calculator I worked out that it&amp;rsquo;ll be essentially better for my bigger apps, and not so much for the small stuff I run. At the time I totally forgot about the NextAction app i&amp;rsquo;ve been running for a very long time to manage my Todoist instance. For what the application does and how it runs its really not worth the $7 to host on a full Heroku instance&amp;hellip;</description><content type="html"><![CDATA[<p>A while ago it was announced that <a href="http://heroku.com">Heroku</a> would be changing its price structure, after a few minutes with a calculator I worked out that it&rsquo;ll be essentially better for my bigger apps, and not so much for the small stuff I run. At the time I totally forgot about the <a href="https://github.com/nikdoof/NextAction">NextAction app</a> i&rsquo;ve been running for a very long time to manage my <a href="http://todoist.com">Todoist</a> instance. For what the application does and how it runs its really not worth the $7 to host on a full Heroku instance&hellip;</p>
<p>So with this in mind I put a few hours into the tool today; it now has a proper <code>setup.py</code> and a basic CLI interface, just enough to get it running on my VM host without much issue or many changes.</p>
<p>If you&rsquo;re interested in my branch of the tool, check it out <a href="https://github.com/nikdoof/NextAction">on Github</a>.</p>
]]></content></item><item><title>Flask, EVE, and no persistence</title><link>https://nikdoof.com/posts/2014/flask-eve-and-no-persistence/</link><pubDate>Fri, 27 Jun 2014 00:00:00 +0000</pubDate><guid>https://nikdoof.com/posts/2014/flask-eve-and-no-persistence/</guid><description>Recently, another EVE related web-app idea popped into my mind and due to the generally low impact nature of the application I didn&amp;rsquo;t require a backend data store. For a long time i&amp;rsquo;ve used Django for nearly anything and everything due to the batteries included nature of the framework, but with this application I could throw it all away and start working with Flask; something i&amp;rsquo;ve been meaning to get my teeth into properly since I started my large Django based projects.</description><content type="html"><![CDATA[<p>Recently, another EVE related web-app idea popped into my mind and due to the generally low impact nature of the application I didn&rsquo;t require a backend data store. For a long time i&rsquo;ve used Django for nearly anything and everything due to the batteries included nature of the framework, but with this application I could throw it all away and start working with Flask; something i&rsquo;ve been meaning to get my teeth into properly since I started my large Django based projects.</p>
<p>My tool is already a solved problem, but as is the way of development and EVE I&rsquo;ve set about re-inventing the wheel for the sake of &ldquo;security&rdquo; and &ldquo;counter-intelligence&rdquo;; well, I spin it that way, but really I just wanted to try it for myself. In the last few years EVE has had a small UI overhaul which now allows almost anything to be copied and pasted out of the game; the bonus is that once-inaccessible scans, inventory lists, and channel member lists are now sources of information to be parsed and worked with. A common tool to come out of all this is a &ldquo;D-Scan&rdquo; tool that allows quick parsing and an overview of the results from your directional scanner; over the last few years a good scan parser has become an essential tool for any FC and scout.</p>
<p>In my app I&rsquo;m taking a new twist on the tool, trying out a few new views and consolidating some of the loved features from other tools into one that I can use. In the process of developing this I&rsquo;ve set myself a goal of not having the tool depend on a database in any way, instead using Redis as a caching backend for the various APIs and data stores needed.</p>
<p>The first big problem you need to work with is the EVE SDE (Static Data Extract) and its &ldquo;Inventory Types&rdquo;; this table of around 50,000 rows is something the tool needs in order to categorize a scan correctly. The positive here is that the SDE doesn&rsquo;t update that often: only with content releases will the SDE be updated by CCP, and even then the world isn&rsquo;t going to end by not having the latest and greatest SDE to work with. So my solution was to ship a package data file populated with a JSON extract of the data I need; when the data is needed it&rsquo;s loaded into memory, and the relative increase of 1-2MB of RAM is nothing in the overall scheme of the application.</p>
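<p>The load-on-first-use scheme can be sketched like this. This is an illustration, not the app&rsquo;s actual code, and the embedded two-row extract stands in for the real ~50,000-row file:</p>

```python
import json

# Hypothetical miniature of the packaged JSON extract; the real file holds
# around 50,000 inventory types in the same shape.
SDE_JSON = '[{"typeID": 587, "typeName": "Rifter"}, {"typeID": 602, "typeName": "Kestrel"}]'

_types = None  # module-level cache, populated on first lookup


def get_type(type_id):
    """Return the inventory type row for type_id, loading the extract once."""
    global _types
    if _types is None:
        _types = {row["typeID"]: row for row in json.loads(SDE_JSON)}
    return _types.get(type_id)
```

<p>The first call pays the JSON parse cost; every later lookup is a dictionary hit in memory.</p>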
<p>So what about the actual scans and results? Parsing the d-scan data is relatively quick, as it&rsquo;s essentially a tab-delimited file of a fixed format; combined with a few quick lookups of reference data, all held in an in-memory dictionary, even a taxing Jita d-scan gets processed in a few milliseconds without any major optimization. Once the initial parse is done, the results are dumped to JSON, compressed with zlib, then written to a unique key in Redis with an expiry of an hour. The view that shows the scan results does nothing more than take the key from the URL, attempt to grab the results from Redis, decompress them, and pass the resulting parsed JSON to the template.</p>
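<p>The parse-compress-store pipeline looks roughly like this. A plain dict stands in for Redis so the sketch is self-contained, and the column names are illustrative of the fixed format:</p>

```python
import json
import uuid
import zlib

store = {}  # stand-in for Redis; real code would call setex(key, 3600, blob)


def parse_dscan(text):
    """Parse a pasted d-scan: one tab-delimited entry per line."""
    return [dict(zip(("name", "type", "distance"), line.split("\t")))
            for line in text.strip().splitlines()]


def save_scan(rows):
    """Dump parsed rows to JSON, zlib-compress, and store under a unique key."""
    key = uuid.uuid4().hex
    store[key] = zlib.compress(json.dumps(rows).encode("utf-8"))
    return key


def load_scan(key):
    """Fetch, decompress, and parse a stored scan; None if missing/expired."""
    blob = store.get(key)
    return json.loads(zlib.decompress(blob).decode("utf-8")) if blob else None
```

<p>The view only ever needs <code>load_scan</code> with the key from the URL.</p>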
<p>The deployment target is Heroku, ideally the Heroku free tier, and this has dictated some of the design. For example, the zlib compression of the resulting scan is there to shave off as many bytes as possible and get the maximum use out of the 25MB Redis service available; with these requests we&rsquo;re CPU rich but storage poor, so the trade-off works quite well. So how would this cope with a DoS? If one person keeps spamming large d-scans into the system, would the Redis server fill up and stop working for everyone? Well, no, as the config will be set to expire the oldest keys when memory runs low, which works perfectly for our tool.</p>
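<p>That eviction behaviour maps onto two redis.conf directives; a sketch of the relevant fragment, assuming the 25MB cap mentioned above (<code>volatile-lru</code> evicts the least-recently-used keys that carry an expiry once the cap is hit):</p>

```
maxmemory 25mb
maxmemory-policy volatile-lru
```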
]]></content></item><item><title>Opensourcing Past Projects</title><link>https://nikdoof.com/posts/2014/open-sourcing-past-projects/</link><pubDate>Mon, 14 Apr 2014 00:00:00 +0000</pubDate><guid>https://nikdoof.com/posts/2014/open-sourcing-past-projects/</guid><description>Over a year ago now I stepped back from being a system administrator and developer for the EVE Online alliance Test Alliance Please Ignore, part of my role there I spent hundreds of hours developing their internal authentication system and other small applications that hang off it. At the time it was quite unique and only a handful of other alliances had that level of technical setup. So as you would expect like a small company with something to lose the code was buried away on private servers and rarely looked over by new people.</description><content type="html"><![CDATA[<p>Over a year ago now I stepped back from being a system administrator and developer for the EVE Online alliance Test Alliance Please Ignore, part of my role there I spent hundreds of hours developing their internal authentication system and other small applications that hang off it. At the time it was quite unique and only a handful of other alliances had that level of technical setup. So as you would expect like a small company with something to lose the code was buried away on private servers and rarely looked over by new people.</p>
<p>Today the landscape is very different. Auth was created at the start of an open-source revolution for EVE Online applications, and over time more and more have become open, with specific projects now being spun up (such as ECM) to create tools, and large alliances (Brave Newbies) opening their backends for everyone to use.</p>
<p>The repository copies of the code I have are quite out of date, and I&rsquo;m the sole copyright holder of them, which gives me the power to license and open them as I see fit. Now that it&rsquo;s been a good amount of time since I left, I feel I can safely release these into the public domain without doing any disservice to TEST and the current sysadmins.</p>
<p>So over the next few days I&rsquo;ll be looking to move the following repositories from my private Bitbucket over onto GitHub:</p>
<ul>
<li>nikdoof / cynomap</li>
<li>nikdoof / django-testauth</li>
<li>nikdoof / limetime</li>
<li>nikdoof / pacmanager</li>
<li>nikdoof / posmaster</li>
<li>nikdoof / test-auth</li>
</ul>
<p>All in various states, but hopefully useful for someone.</p>
<p>[Update - 2014/03/13]</p>
<p>I&rsquo;ll update this post with links once they&rsquo;re over.</p>
]]></content></item><item><title>Python packaging the right way</title><link>https://nikdoof.com/posts/2014/python-packaging-the-right-way/</link><pubDate>Thu, 20 Feb 2014 00:00:00 +0000</pubDate><guid>https://nikdoof.com/posts/2014/python-packaging-the-right-way/</guid><description>Last night I spent a hour or so packaging up some Python I made to scratch an itch into a distributable module. Packaging has never been my strong point and I always ended up making a fiddly setup.py that had some minor problems or didn&amp;rsquo;t work as expected. This time was especially noteworthy as I had a product that was both Python 2.7 and Python 3.3 compatible.
Thankfully Jeff Knupp posted about open sourcing a python project the right way, which covers getting your project setup right, making it easily testable, and getting it working on TravisCI.</description><content type="html"><![CDATA[<p>Last night I spent a hour or so packaging up some Python I made to scratch an itch into a distributable module. Packaging has never been my strong point and I always ended up making a fiddly <code>setup.py</code> that had some minor problems or didn&rsquo;t work as expected. This time was especially noteworthy as I had a product that was both Python 2.7 and Python 3.3 compatible.</p>
<p>Thankfully Jeff Knupp posted about <a href="http://www.jeffknupp.com/blog/2013/08/16/open-sourcing-a-python-project-the-right-way/">open sourcing a Python project the right way</a>, which covers getting your project set up right, making it easily testable, and getting it working on TravisCI.</p>
<p>So, my project is <a href="https://github.com/nikdoof/businesshours">live on GitHub</a>. Is it useful? Probably not, but at least I&rsquo;ve put it out there in case people want to use it or improve on it. Next on the todo list is some better documentation and more tests.</p>
]]></content></item><item><title>Always Catch Errors</title><link>https://nikdoof.com/posts/2014/always-catch-errors/</link><pubDate>Sat, 08 Feb 2014 00:00:00 +0000</pubDate><guid>https://nikdoof.com/posts/2014/always-catch-errors/</guid><description>On the 31st of Jan my NAS stopped responding, no idea what was going on and with zero response to the power button I did a hard reset, I spent the next few hours double checking all my config to find out what the hell had happened. I couldn’t find a solid reason, but at least none of the hardware was failing which give me some good news, I marked it down as an odd issue and carried on.</description><content type="html"><![CDATA[<p>On the 31st of Jan my NAS stopped responding, no idea what was going on and with zero response to the power button I did a hard reset, I spent the next few hours double checking all my config to find out what the hell had happened. I couldn’t find a solid reason, but at least none of the hardware was failing which give me some good news, I marked it down as an odd issue and carried on.</p>
<p>The same happened tonight, the exact same result, but this time I was prepared to some extent. After attempting to log in on the console and seeing memory allocation errors, then SSH dying on its arse, I checked my Munin install and noticed the machine was swapping heavily. This machine has about 8GB of RAM but at any time it’s using about 600MB; at first I thought it was a memory leak in something, but usually the OOM killer does a good job of smiting any unruly processes. Then I checked my process list and noticed there were well over 4,000 sleeping processes; something had obviously gone wrong.</p>
<p>On my Deluge setup, due to the instability of a few of the trackers I use, I have a small Python script that checks the current state of the torrents and, if they’re “red”, restarts them. Deluge’s API uses the Twisted framework to make everything async and accordingly a lot easier to work with. This was my first venture into the land of Twisted and it seems I made an error: I didn’t catch the “unable to connect” error. So after it failed to connect, the Twisted reactor sat there running constantly, and as this job runs every 5 minutes it stacked up over 24 hours and killed the machine.</p>
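<p>The fix, in spirit, is just to catch the connection failure so the job tears itself down instead of hanging around. A framework-agnostic sketch; the function and method names are hypothetical, with a plain callable standing in for the Deluge RPC connection:</p>

```python
import sys


def check_torrents(connect):
    """Run one pass of the watchdog cron job.

    `connect` stands in for the Deluge RPC connection call. The original
    bug was letting its failure go uncaught, which left a reactor running;
    every five minutes another stuck process joined the pile.
    """
    try:
        client = connect()
    except ConnectionError as exc:
        # Fail fast and let the next scheduled run retry.
        print("deluged unreachable, giving up: %s" % exc, file=sys.stderr)
        return False
    client.restart_red_torrents()
    return True
```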
<p>So, it’s always worth checking for errors, and not assuming that it’ll sort itself out. Lesson learned.</p>
]]></content></item><item><title>New Project Woes</title><link>https://nikdoof.com/posts/2014/new-project-woes/</link><pubDate>Sun, 02 Feb 2014 00:00:00 +0000</pubDate><guid>https://nikdoof.com/posts/2014/new-project-woes/</guid><description>Finding a new project is hard. Very hard. Since i&amp;rsquo;ve stopped playing EVE Online i&amp;rsquo;ve found it very difficult to come with new development ideas. In recent months i&amp;rsquo;ve padded my time with some smaller projects and ideas but usually lost my drive behind them within a few weeks. So what to do?
This weekend i&amp;rsquo;ve been thinking of a few things that slow down my workflow, either time tracking, task planning or something to that nature.</description><content type="html"><![CDATA[<p>Finding a new project is hard. Very hard.
Since I stopped playing <a href="http://eveonline.com">EVE Online</a> I&rsquo;ve found it very difficult to come up with new development ideas. In recent months I&rsquo;ve padded my time with some smaller projects and ideas, but usually lost my drive within a few weeks. So what to do?</p>
<p>This weekend I&rsquo;ve been thinking about a few things that slow down my workflow: time tracking, task planning, or something of that nature. For a long time I&rsquo;ve used <a href="http://davidseah.com">Dave Seah&rsquo;s</a> <a href="http://davidseah.com/blog/node/the-emergent-task-timer/">Emergent Task Timer</a> to track my time usage at work; for a while I tried the Flash app, but it didn&rsquo;t really work how I wanted. After a while I ended up going offline with Dave&rsquo;s excellent <a href="http://davidseah.com/productivity-tools/">templates</a> and I&rsquo;ve been a happy user for well over three months.</p>
<p>So the idea popped into my head to make a persistent, online ETT that allows for some quick exporting into a format I can use for my timesheets at work, but after a while I asked myself: how much time would I have to spend to improve on the paper system that Dave has honed over such a long time? Especially now that he has expanded onto Amazon, selling <a href="http://www.amazon.com/gp/browse.html?me=A3CYU3IEF50TTE">pre-printed notepads</a> of his ETP sheet.</p>
<p>The next idea was to create a CLI for <a href="http://trello.com">Trello</a>, but after about five minutes I realised it&rsquo;s already a heavily <a href="https://github.com/search?q=trello+cli&amp;ref=cmdform">covered problem</a>; fine, there&rsquo;s no Go or Python client, but it&rsquo;s still been done quite a bit.
So here I am, twiddling my thumbs, waiting for the next idea to pop into my head.</p>
]]></content></item><item><title>Deluge Web Interface and Nginx</title><link>https://nikdoof.com/posts/2014/deluge-web-interface-and-ngnix/</link><pubDate>Fri, 17 Jan 2014 00:00:00 +0000</pubDate><guid>https://nikdoof.com/posts/2014/deluge-web-interface-and-ngnix/</guid><description>After a short, frustrating time, i&amp;rsquo;ve finally got the Deluge WebUI to proxy through Nginx without any errors. The revelation came when digging through the Deluge forums I found a little nugget of information which solved it all, a small header call X-Deluge-Base that when passed will prefix any media calls made in the page with that text. So instead of setting up weird aliases and fiddling around with Nginx&amp;rsquo;s options to get it to work I could just specify that and use a very basic server config.</description><content type="html"><![CDATA[<p>After a short, frustrating time, i&rsquo;ve finally got the <a href="http://deluge-torrent.org/">Deluge</a> WebUI to proxy through Nginx without any errors. The revelation came when digging through the Deluge forums I found a <a href="http://forum.deluge-torrent.org/viewtopic.php?p=178145#p178145">little nugget of information</a> which solved it all, a small header call <code>X-Deluge-Base</code> that when passed will prefix any media calls made in the page with that text. So instead of setting up weird aliases and fiddling around with Nginx&rsquo;s options to get it to work I could just specify that and use a very basic server config.</p>
<pre><code>upstream deluge  {
  server localhost:8112;
}

server {
  server_name  deluge.home;

  location / {
    proxy_pass  http://deluge;
    proxy_next_upstream error timeout invalid_header http_500 http_502 http_503 http_504;
    proxy_redirect off;
    proxy_buffering off;
    proxy_set_header        Host            $host;
    proxy_set_header        X-Real-IP       $remote_addr;
    proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header        X-Deluge-Base   &quot;/&quot;;
  }
}</code></pre>
]]></content></item><item><title>Brother QL-570 and Linux</title><link>https://nikdoof.com/posts/2013/brother-ql-570-and-linux/</link><pubDate>Thu, 04 Apr 2013 00:00:00 +0000</pubDate><guid>https://nikdoof.com/posts/2013/brother-ql-570-and-linux/</guid><description>A few days ago I picked up a Brother QL-570 cheap on Amazon for the other half, as shes about to setup her own online shop and needs to throw out a few address labels. While I did check up online that it was supported on Linux I didn&amp;rsquo;t really look into how well its supported, and unfortunantly its not good.
Brother have released a driver set for the device, but it does have quite a few issues that present massive stumbling blocks.</description><content type="html"><![CDATA[<p>A few days ago I picked up a <a href="http://www.brother.co.uk/g3.cfm/s_page/215760/s_level/38980/s_product/QL570ZU1">Brother QL-570</a> cheap on Amazon for the other half, as shes about to setup her own online shop and needs to throw out a few address labels. While I did check up online that it was supported on Linux I didn&rsquo;t really look into how well its supported, and unfortunantly its not good.</p>
<p>Brother have released a driver set for the device, but it has quite a few issues that present massive stumbling blocks. The deb packages seem to be set up for Ubuntu only, and die horribly when installed on Debian due to CUPS using an init file named &ldquo;cups&rdquo; instead of &ldquo;cupsys&rdquo;. A hacky <a href="http://installit.googlecode.com/hg/hardware/install.brother-ql-500.sh">way around it</a> does exist, but honestly this is more poking than should be needed for a simple deb package.</p>
<p>It seems that the driver also has issues with configuration settings: define too many in CUPS and the processing tool Brother includes segfaults. A <a href="https://bugs.launchpad.net/ubuntu/+source/brother-cups-wrapper-common/+bug/423817">fix</a> does exist, but a general lack of interest seems to perpetuate the error even though a patch is available to resolve it; the driver has gone unchanged in Ubuntu and Debian for many years now.</p>
<p>Thankfully, it seems that Linux isn&rsquo;t the only platform affected by this issue. After several hours of frustration trying to get the device to work I gave up and set the printer up in CUPS as a raw printer, allowing my Windows PCs to use it with the official drivers, only for it to happen again&hellip;</p>
<p>So, why the post? Well, if you&rsquo;re looking for a Linux-compatible label printer I&rsquo;d advise staying away from Brother&rsquo;s offering. Mine works at the moment, but it&rsquo;s only just doing what I really wanted it to do.</p>
]]></content></item><item><title>Django MultiDB to the rescue!</title><link>https://nikdoof.com/posts/2012/django-multidb-to-the-rescue/</link><pubDate>Fri, 12 Oct 2012 00:00:00 +0000</pubDate><guid>https://nikdoof.com/posts/2012/django-multidb-to-the-rescue/</guid><description>One of the key components of Auth is the ability to communicate with numerous other systems and manage their user authentication systems in situations were we can&amp;rsquo;t modify the application to use our authentication API. To achieve this we have the Services API, which is a generic interface designed for basic operations such as creating a user, disabling an account as so on. If we wanted to support a new system we write up a simple Python module with these functions and the API does all the required importing and abstraction out for Auth.</description><content type="html"><![CDATA[<p>One of the key components of Auth is the ability to communicate with numerous other systems and manage their user authentication systems in situations were we can&rsquo;t modify the application to use our authentication API. To achieve this we have the Services API, which is a generic interface designed for basic operations such as creating a user, disabling an account as so on. If we wanted to support a new system we write up a simple Python module with these functions and the API does all the required importing and abstraction out for Auth.</p>
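<p>To illustrate the shape of that interface, here is a sketch using a plain registry rather than the module-importing machinery Auth actually used; every name in it is hypothetical. Each backend exposes the same small set of operations, and callers dispatch by service name:</p>

```python
# Hypothetical sketch of a Services-API-style dispatch layer: each backend
# registers the same operations, and callers never touch backend details.
REGISTRY = {}


def register(name, **operations):
    """Register a backend's operations under a service name."""
    REGISTRY[name] = operations


def call(service, operation, *args):
    """Dispatch e.g. call('forum', 'add_user', 'bob') to the right backend."""
    return REGISTRY[service][operation](*args)


# A toy backend illustrating the interface described above.
register(
    "forum",
    add_user=lambda username: "created %s" % username,
    disable_user=lambda username: "disabled %s" % username,
)
```

<p>Adding support for a new system then means writing one more backend with the same operations and registering it.</p>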
<p>In previous versions of the API we handled databases in a very weird way, either by writing our own SQLAlchemy queries or bodging the Django ORM to give us a basic cursor to work with; while it was far from perfect, it allowed us to edit the databases of other applications without much hassle.</p>
<p>Recently Django was updated to version 1.2, and with it came the MultiDB functionality, which allows you to access multiple databases natively in the ORM. For our database layer this presents some new options that weren&rsquo;t available in the old versions.</p>
<p>Using Django&rsquo;s database introspection you are able to generate a model from an existing database schema; for this example we&rsquo;re working with Mediawiki&rsquo;s native database in MySQL. First of all we need to define the database in our settings.</p>
<pre><code>DATABASES = {
    'wiki': {
        'NAME': 'wiki',
        'ENGINE': 'django.db.backends.mysql',
        'USER': 'wiki',
        'PASSWORD': 'passwordgoeshere',
    }
}
</code></pre>
<p>Next we fire off the <code>inspectdb</code> command to produce our database layout:</p>
<pre><code>./manage.py inspectdb --database=wiki &gt; wikimodels.py
</code></pre>
<p>After a short time you&rsquo;ll have a fresh Python module with all your database models nearly ready to go. The first thing to do is edit this file and change any foreign key columns to the required Django <code>ForeignKey()</code> field; while <code>inspectdb</code> does as much as it can, it can&rsquo;t detect foreign keys.</p>
<p>Once you&rsquo;re ready to rock it&rsquo;s a simple case of getting your shell out and giving it a test run.</p>
<pre><code>./manage.py shell
&gt;&gt;&gt; from wiki.wikimodels import User
&gt;&gt;&gt; User.objects.using('wiki').get(user_id=1).user_name
'Matalok'
</code></pre>
<p>Simple! No more wrestling with DB cursors, just ORM access without the hassle. The next big leap is defining the database connections at runtime, injecting them into the <code>DATABASES</code> variable; by doing this I can remove the problem of having to manage each service&rsquo;s database connection in settings.py and instead have them defined on a per-service basis.</p>
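<p>A sketch of what that per-service definition might look like; the helper is hypothetical, and in a running project the returned dict would be merged into Django&rsquo;s connection registry at runtime rather than living in settings.py:</p>

```python
def service_db_settings(alias, name, user, password, host="localhost"):
    """Build a DATABASES-style entry for one service's MySQL schema.

    Hypothetical helper: at runtime the returned dict would be injected
    into Django's connection registry instead of being hard-coded in
    settings.py, so each service can carry its own connection details.
    """
    return {
        alias: {
            "ENGINE": "django.db.backends.mysql",
            "NAME": name,
            "USER": user,
            "PASSWORD": password,
            "HOST": host,
        }
    }
```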
]]></content></item><item><title>Unneeded Dependencies</title><link>https://nikdoof.com/posts/2009/unneeded-dependencies/</link><pubDate>Wed, 18 Nov 2009 00:00:00 +0000</pubDate><guid>https://nikdoof.com/posts/2009/unneeded-dependencies/</guid><description>$ sudo apt-get install bzr Reading package lists... Done Building dependency tree Reading state information... Done The following extra packages will be installed: bzrtools defoma file fontconfig fontconfig-config graphviz libcairo2 libdatrie0 libdirectfb-1.0-0 libfontconfig1 libfontenc1 libgraphviz4 libice6 libpango1.0-0 libpango1.0-common libpixman-1-0 libsm6 libsysfs2 libthai-data libthai0 libts-0.0-0 libxaw7 libxcb-render-util0 libxcb-render0 libxext6 libxfont1 libxft2 libxmu6 libxpm4 libxrender1 libxt6 python-paramiko ttf-dejavu ttf-dejavu-core ttf-dejavu-extra ttf-liberation x-ttcidfont-conf x11-common xfonts-encodings xfonts-utils Suggested packages: bzr-gtk bzr-svn python-pycurl xdg-utils pybaz librsvg2-bin defoma-doc dfontmgr psfontmgr gsfonts graphviz-doc ttf-kochi-gothic ttf-kochi-mincho ttf-thryomanes ttf-baekmuk ttf-arphic-gbsn00lp ttf-arphic-bsmi00lp ttf-arphic-gkai00mp ttf-arphic-bkai00mp Recommended packages: libft-perl The following NEW packages will be installed bzr bzrtools defoma file fontconfig fontconfig-config graphviz libcairo2 libdatrie0 libdirectfb-1.</description><content type="html"><![CDATA[<pre><code>$ sudo apt-get install bzr
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following extra packages will be installed:
  bzrtools defoma file fontconfig fontconfig-config graphviz libcairo2 libdatrie0
  libdirectfb-1.0-0 libfontconfig1 libfontenc1 libgraphviz4 libice6 libpango1.0-0
  libpango1.0-common libpixman-1-0 libsm6 libsysfs2 libthai-data libthai0
  libts-0.0-0 libxaw7 libxcb-render-util0 libxcb-render0 libxext6 libxfont1
  libxft2 libxmu6 libxpm4 libxrender1 libxt6 python-paramiko ttf-dejavu
  ttf-dejavu-core ttf-dejavu-extra ttf-liberation x-ttcidfont-conf x11-common
  xfonts-encodings xfonts-utils 
Suggested packages:
  bzr-gtk bzr-svn python-pycurl xdg-utils pybaz librsvg2-bin defoma-doc dfontmgr
  psfontmgr gsfonts graphviz-doc ttf-kochi-gothic ttf-kochi-mincho ttf-thryomanes
  ttf-baekmuk ttf-arphic-gbsn00lp ttf-arphic-bsmi00lp ttf-arphic-gkai00mp
  ttf-arphic-bkai00mp
Recommended packages:
  libft-perl
The following NEW packages will be installed
  bzr bzrtools defoma file fontconfig fontconfig-config graphviz libcairo2
  libdatrie0 libdirectfb-1.0-0 libfontconfig1 libfontenc1 libgraphviz4 libice6
  libpango1.0-0 libpango1.0-common libpixman-1-0 libsm6 libsysfs2 libthai-data
  libthai0 libts-0.0-0 libxaw7 libxcb-render-util0 libxcb-render0 libxext6 libxfont1
  libxft2 libxmu6 libxpm4 libxrender1 libxt6 python-paramiko ttf-dejavu
  ttf-dejavu-core ttf-dejavu-extra ttf-liberation x-ttcidfont-conf x11-common
  xfonts-encodings xfonts-utils
0 upgraded, 41 newly installed, 0 to remove and 7 not upgraded.
Need to get 16.1MB of archives.
After this operation, 38.9MB of additional disk space will be used.
Do you want to continue [Y/n]?
</code></pre>
<p>That is why it pays to have the following settings in your APT configuration if you want to keep installs to a minimum:</p>
<pre><code>APT::Install-Recommends &quot;false&quot;;
APT::Install-Suggests &quot;false&quot;;
</code></pre>
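<p>To make this permanent you can drop those two lines into a file under /etc/apt/apt.conf.d/; a sketch (the 99-no-extras file name is arbitrary, and the real one-off --no-install-recommends flag is shown as an alternative):</p>

```shell
# Write the two directives to an apt.conf.d drop-in (file name is arbitrary).
# Shown writing to the current directory; in practice use
# /etc/apt/apt.conf.d/99-no-extras (as root).
cat > 99-no-extras <<'EOF'
APT::Install-Recommends "false";
APT::Install-Suggests "false";
EOF

# One-off alternative for a single install, without any config change:
#   apt-get install --no-install-recommends bzr
cat 99-no-extras
```

<p>The drop-in applies to every apt-get run, whereas the flag only affects a single invocation.</p>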
]]></content></item><item><title>Working with SPARC</title><link>https://nikdoof.com/posts/2009/working-with-sparc/</link><pubDate>Fri, 17 Jul 2009 00:00:00 +0000</pubDate><guid>https://nikdoof.com/posts/2009/working-with-sparc/</guid><description>Nearly two weeks ago I replied to a email offering up some old hardware on the Manchester Linux Users list, Tim was offering up a few SPARC machines amongst other things, and due to my interest in different CPU architectures I bit his hand off.
I’m now the lucky owner of two amazing little workhorses, a SPARCStation 20 and a UltraSPARC IV, both true workhorses of the Sun/Solaris era. The SPARCStation is a old world SPARC machine, in its pizza-box format and the interesting MBUS and SBUS connection interfaces with a sprinkle of SCSI, the Ultra on the other hand is more modern PCI and IDE so I was able to get fast network cards and big disks without much issue.</description><content type="html"><![CDATA[<p>Nearly two weeks ago I replied to an email offering up some old hardware on the Manchester Linux Users list; Tim was offering a few SPARC machines amongst other things, and due to my interest in different CPU architectures I bit his hand off.</p>
<p>I’m now the lucky owner of two amazing little machines, a SPARCStation 20 and an UltraSPARC IV, both true workhorses of the Sun/Solaris era. The SPARCStation is an old-world SPARC machine, with its pizza-box format and the interesting MBUS and SBUS connection interfaces plus a sprinkle of SCSI; the Ultra, on the other hand, is more modern PCI and IDE, so I was able to get fast network cards and big disks without much issue. Due to the age of the SPARCStation hardware I’ve not really invested much time into it; I’ll have to poke at it sometime this weekend and get the old thing up and running.</p>
<p>For the moment I was far more interested in getting the Ultra running as an OpenBSD firewall and router. The Ultra is relatively small (though large compared to the SPARCStation) and could be tucked under a table to run, much to the disappointment of my other half. The existing DD-WRT based Linksys was starting to show its age and had become flaky after a few years of running overclocked without added cooling. The SPARC machine presented an excellent opportunity thanks to OpenBSD’s good support for the platform and the amazing pf packet filter it includes.</p>
<p>The next few days were spent faffing with the hardware and re-installing OpenBSD 4.5. I had numerous small issues that were all down to a faulty network card, or possibly a faulty PCI slot, but I’ve not had the time to push it further. The PCI issue was quite difficult to diagnose for someone who’s had no experience of the platform beforehand, and I’ve now had a day’s crash course in OpenBoot. I have to say that OpenBoot is a fantastic platform, and it aided me a lot in diagnosing the strange issues.</p>
<p>Unfortunately, there’s always one problem you can never get to the bottom of. I updated the box with a new PCI network card, an old CD-ROM drive and a fresh HDD. A day or so passed and no matter what I tried I couldn’t get the machine to boot from CD-ROM; it refused to detect media, and sometimes even the drive itself. Thankfully, the previous owner had left a basic OpenBSD install on the machine, which allowed me to download the install image and write it to the swap partition, allowing for a quick and simple reinstall using the swap partition as the boot media. The CD-ROM works perfectly in OpenBSD, so I don’t have the energy to chase down this bug any further.</p>
<p>Finally, after what feels like a week of work, I have a small-footprint firewall that kicks the arse out of my existing DD-WRT box. While it may not be an amazing icon to show off, like Chris and his SGI coffee table, it gives me warm fuzzies that technology that would generally be disposable by modern standards has its use somewhere. Now the box is up and working it’ll slowly disappear from my radar, and the experiences I had with this individual bit of hardware will slip away; that is, until OpenBSD 4.6 or another hardware failure.</p>
<p>Fingers crossed eh?</p>
]]></content></item><item><title>Hacking the ZTE MF627</title><link>https://nikdoof.com/posts/2009/hacking-the-zte-mf627/</link><pubDate>Thu, 11 Jun 2009 00:00:00 +0000</pubDate><guid>https://nikdoof.com/posts/2009/hacking-the-zte-mf627/</guid><description>Its been a while since I’ve done a good hack article. so again I’m back onto my favourite topic of 3G modems. Thanks to the generous promotions at 3dongles4free I’ve been able to pickup a new Three dongle for next to nothing. As I’ve already got my E160G I didn’t really need this to be on the Three network. After a quick search around and a few suggestions from existing mailing lists I’ve found out that a hacked firmware exists and these cheap and cheerful dongles can be flashed to allow any SIM card to be used.</description><content type="html"><![CDATA[<p>It’s been a while since I’ve done a good hack article, so again I’m back on my favourite topic of 3G modems. Thanks to the generous promotions at <a href="https://web.archive.org/web/20121114232822/http://www.3dongle4free.co.uk/">3dongles4free</a> I’ve been able to pick up a new Three dongle for next to nothing. As I’ve already got my E160G I didn’t really need this one to be on the Three network. After a quick search around and a few suggestions from existing mailing lists I found out that a hacked firmware exists and that these cheap and cheerful dongles can be flashed to accept any SIM card. This should be a simple job of updating the software and using the new SIM card.</p>
<p>First of all, grab the <a href="https://web.archive.org/web/20121114232822/http://www.google.com/search?q=ZTE2.rar">software pack</a> from Rapidshare; due to the questionable nature of this copy of the firmware no one has been willing to host it themselves, and I’ll keep to that idea. Extract the files from the RAR and you should have a firmware upgrade and an installation folder for the connection software. As the existing Three connection software is very limited, the package includes the Telstra version, which allows you to define your own settings. Before you attempt the upgrade, remove any existing Three software, install the Telstra version and take your SIM card out of the dongle, then simply plug it in and run the firmware upgrade. The process will take around 15-25 minutes and will give you a prompt once it’s done. During the upgrade do not power off your PC or remove the dongle from the USB socket; this will brick your dongle, rendering it completely useless. Now put in your non-Three SIM card and plug it back into your PC. The Telstra software should start up and try to detect the device; you’ll need to configure the software with your provider’s APN settings, but the PDF document included with the package gives you all the details you need. Remember, I take no responsibility for people bricking their equipment; you have been warned.</p>
]]></content></item><item><title>A few days with Android</title><link>https://nikdoof.com/posts/2009/a-few-days-with-android/</link><pubDate>Tue, 24 Feb 2009 00:00:00 +0000</pubDate><guid>https://nikdoof.com/posts/2009/a-few-days-with-android/</guid><description>Last week I finally got round to ordering a T-Mobile G1 and I got accepted, and to be honest I was expecting my contract application to bounce like a rubber ball. I guess I left T-Mobile on good terms last time so they’re one of the last networks in the UK that will actually accept me for a new account. I’m quite a aficionado of mobile phones and I do like “smart phones”, so I’ve decided to write this brief overview of the handset and of Android in general.</description><content type="html"><![CDATA[<p>Last week I finally got round to ordering a T-Mobile G1 and I got accepted; to be honest, I was expecting my contract application to bounce like a rubber ball. I guess I left T-Mobile on good terms last time, so they’re one of the last networks in the UK that will actually accept me for a new account. I’m quite an aficionado of mobile phones and I do like “smart phones”, so I’ve decided to write this brief overview of the handset and of Android in general.</p>
<h2 id="out-of-box-experience">Out of Box Experience</h2>
<p>I ordered my handset online so I had the joys of Royal Mail to contend with; that alone is a separate post, and UK followers will know the usual pain everyone goes through in regards to getting anything shipped by them. The first issue that tripped me up was that the SIM card was loose in the packaging, not even slid into the handset box. In a rush I nearly binned the SIM and scuppered my chances of actually using the device for another week.</p>
<p>The G1 is very nicely boxed, almost in the same way Apple devices usually are. Since the release of the iPod and its over-the-top fancy packaging, a lot of device makers have been scrambling to match that “out of box” experience: the joy of opening the packaging and having your device presented to you in soft black foam padding. The first major gripe struck at this early stage; while unpacking the device and its related accessories I noted that my white phone has some very nasty black accessories. While the contrast of black and white may work well in some people’s minds, I’d personally like the accessories to be a matching colour.</p>
<p>The box included the standard extras, a quick start guide, a small manual, USB cable, charger and a wired headset. Nothing really to write home about. The manual is the usual introduction spiel, which I refuse to read. I decided to get the phone up and working and to have a play around.</p>
<p>What surprised me next was my first battle with the phone: trying to get the SIM card into the actual device. It turns out you have to use the little pull tab at the top of the phone to remove the back; of course, neither the device nor the documentation made any mention of how to do this. Jo can vouch that I spent a frustrated five minutes trying to tear the back off without destroying the phone. A simple plastic pull tab would have sorted this, but I guess it’s the last thing on the manufacturer’s mind.</p>
<p>Next came the activation. I thought that, being a non-tied phone, I wouldn’t have to jump through as many hoops as with the iPhone; while this is true, the actual procedure can be a little frustrating. Anyone who has visited my house can attest to the near Faraday cage properties it has for some networks, and unfortunately T-Mobile is one of them; the signal levels in the local area are great, just not in my house. This presented a major issue when I was asked to log in to my Google account to sync over various details, as the phone then spent the next ten minutes trying to establish a GPRS connection to the outside world. After the tenth or so try it managed to get all the details it needed. I understand that the activation sequence can also work over WiFi, but I didn’t see any mention of this during the set-up, and I think it’s reserved for people who have “rooted” their phones already, something I’ll want to avoid wherever possible.</p>
<p>So after much hissing, cursing, and a few cups of tea, I was ready to roll.</p>
<h2 id="applications">Applications</h2>
<p>Almost everything on Android is written in Java, and I’m quite amazed that it runs as well as it does. While Google/OHA are still polishing the edges, the OS seems very stable and easy to use; once you’ve worked out the basics of navigating around you’ll be flying through the applications in no time. The standard “tool set” included with Android will cover 90% of users, with the usual host of tools included: SMS, email, web browser, call manager, along with a few others you might not usually see, like IM.</p>
<p>As this is the “Google Phone” the standard software includes the usual Google mobile applications: Gmail, Maps, and YouTube. I’ve recently moved away from using Gmail, so I can’t really comment on how that application works, and the rest of them operate just as you would expect on the N95 or any other Series 60 handset. I’m not going to spend all day digging into specifics, as anyone who has had a go with these apps will know what to expect.</p>
<p>The biggest seller is the Market; the Android team broke away from the strict market model you see on the iPhone and went with a more open process. This has allowed developers to create a wide range of apps in a very short period of time, including replacements for the built-in clients. One great example is K9Mail, which expands on the existing email client with better IMAP support and a few added features. I’m sure the Market will grow over time, and with the introduction of the paid market we’ll see some of the big players start developing apps for it.</p>
<h2 id="overview">Overview</h2>
<p>OK, I’ve warbled on for a while about my usage of the phone; it’s still early days and I’m still not 100% up to speed with the handset. It’ll take time, and I’m sure I’ll have more posts in the future. I’m starting to get my feet wet with the SDK and I’ve got my first “Hello World” application currently installed on my handset. So, would I recommend the handset to anyone else? A resounding yes; it’s got a lot of potential, and anyone slightly technically inclined will love using it.</p>
]]></content></item><item><title>CrunchBang Linux - A day’s usage review</title><link>https://nikdoof.com/posts/2008/crunchbang-linux-a-days-usage-review/</link><pubDate>Mon, 15 Dec 2008 14:32:00 +0000</pubDate><guid>https://nikdoof.com/posts/2008/crunchbang-linux-a-days-usage-review/</guid><description>A while ago I spotted a post about a new Ubuntu based distribution that had been released, called CrunchBang Linux, as i’m not a great fan of Ubuntu distros anymore I passed this one up and never look at it again. A few weeks had passed until I heard mention of it again, Dan from Linux Outlaws, mentioned that he is trying out the recent version for a review on the show and that Fab is a massive fan.</description><content type="html"><![CDATA[<p>A while ago I spotted a post about a new Ubuntu-based distribution that had been released, called <a href="http://crunchbanglinux.org/">CrunchBang Linux</a>; as I’m not a great fan of Ubuntu distros anymore I passed this one up and never looked at it again. A few weeks passed until I heard mention of it again: <a href="http://adventuresinopensource.blogspot.com/">Dan</a> from <a href="http://linuxoutlaws.com/">Linux Outlaws</a> mentioned that he was trying out the recent version for a review on the show and that Fab is a massive fan. I decided to take a second look at it, trying my hardest not to be critical due to its Ubuntu base.</p>
<p>I’ve now got CrunchBang installed on my main desktop machine and I’ve been using it for a day. Maybe that’s a short length of time to review a distribution, but I feel my past experience with numerous distros helps me get to grips with a new one quite quickly. Some of you may know that after being an Ubuntu user for well over a year I decided to move back to Debian and became quite critical of Ubuntu for its rash decisions regarding design and key choices. My dislike is not centred purely on Ubuntu; I remember one time where I had a near fit at using an <a href="http://www.opensuse.org/en/">OpenSUSE</a> KDE 4.0 Live CD because I couldn’t switch off the default sound scheme, but that’s for another post. Back to the review&hellip;</p>
<p>CrunchBang Linux promotes itself as a lightweight version of Ubuntu, unlike <a href="http://xubuntu.com/">Xubuntu</a>’s XFCE desktop they’ve decided on using <a href="http://icculus.org/openbox/index.php/Main_Page">OpenBox</a> and a few key programs from other desktop environments, like <a href="http://thunar.xfce.org/index.html">Thunar</a> and <a href="http://lxde.org/">Lxpanel</a>.</p>
<p>My previous experience of the *box window managers has been with Blackbox during the very early days, when Enlightenment was all the rage and <a href="http://toastytech.com/guis/x.html">most distros used FVWM95</a>, so checking out Openbox would hopefully be a refreshing blast from the past. My main concern was compatibility, as a lot of applications out there depend on certain features of the desktop environment. I left all my expectations at the door and decided to grab the Live CD and have a 10-15 minute play to see if everything worked as expected on my slightly quirky setup.</p>
<p>The Live CD / installation media is mirrored on a few sites; as it’s only a “baby” distro it’s not been picked up by the mainstream mirrors, but thankfully a few people in the community have offered some space to the project, and finding a local, fast mirror isn’t that difficult. As with all Ubuntu-style Live CDs, it was a simple case of burning the ISO to a disc and rebooting the machine. I’m not sure if this is a feature of all new Ubuntu discs now, but the ISOLINUX menu had an option to check the installation media for errors, which could save you quite a bit of time if you suspect dodgy media.</p>
<p>The boot was quick, quicker than I expected. Usually with Ubuntu CDs I pop the disc into the drive, slip off to make a cup of tea, and head back in time to catch the last second or so of the desktop booting. This wasn’t the case with CrunchBang: after returning from a delightful brew-making trip I noticed that the desktop was loaded and the default conky panel on the right side informed me that it had been booted for about 5 minutes. So, boot speed, even from the CD, is nice and quick.</p>
<p>To a user who has been brought up on the GNOME or KDE environments the initial desktop may take a second to sink in; by default it comes with a minimal panel and a system information pane on the right side of the screen and nothing more: no desktop icons or fluffy applications menu, just a basic desktop. Right clicking anywhere on the desktop brings up the system menu and the list of applications. The default install gives you a nice range of applications, some you’ll never use, others dire essentials. The defaults include a few key applications:</p>
<ul>
<li>Firefox 3.0.4</li>
<li>Pidgin 2.5.2</li>
<li>Rhythmbox 0.11.3</li>
<li>Skype 2.0</li>
<li>Gwibber 0.7.2</li>
<li>GIMP 2.6</li>
</ul>
<p>A few more are available, and a full list can be found on the <a href="http://crunchbanglinux.org/wiki/applications">CrunchBang Wiki</a>. Needless to say I was impressed; not only had they selected reasonable defaults, but as the distribution is based on Intrepid it had the latest and greatest versions available. Skype is an interesting nugget in my opinion, possibly being the only Qt application in the default installation. I do understand that lots of people use Skype for VoIP, but maybe they should consider including another application like Ekiga.</p>
<p>So, I had my desktop running as a Live CD; time to see how it fared in real-world usage. I can happily say that after a good hour or so of usage I didn’t feel restricted by the choice of desktop environment; Openbox is low key but quick and powerful. I decided after just a few hours of usage to commit to this distro, ditching my current Debian Lenny install.</p>
<p>The installation of CrunchBang was nothing really spectacular; it’s a standard <a href="https://wiki.ubuntu.com/Ubiquity">Ubiquity</a> installer which does its job very quickly. A few quick selections and the dreaded disc partitioner screens and you’re on your way. Installation took about 10 minutes on my machine and felt a little quicker than previous Ubuntu installs, but I put this down to a little bias on my part. Rebooting the machine brought up a standard GRUB menu, and I happily noticed that it had detected my existing Windows installation and added the relevant entry. Again, the boot was quick and my machine boots to the desktop in under a minute.</p>
<p>So, here come the negatives. A few minor issues have bugged me since I’ve started using CrunchBang, but nothing show stopping. To save time I’ll just put them down as bullet points:</p>
<ul>
<li>xcompmgr seems to have a “dicky fit” after a few hours of use, making all window focus go out of the window. Disabling and re-enabling compositing fixes that.</li>
<li>Tray icons are hit and miss as to what colour they use for their background. In my case, with the “Fawn” GTK theme, you get either a brown or beige background, which looks a little messy. Not really a distribution problem, but still annoying.</li>
<li>Restarting Conky seems to paint over the entire desktop for no reason, causing the Windows-esque issue where you have to use an existing window to get the desktop to repaint.</li>
<li>By default, the X server won’t detect 1280×1024. It’s a simple fix of modifying the Xorg configuration, but the initial boot of the Live CD can be annoying with a mishmash resolution.</li>
</ul>
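<p>For reference, the 1280×1024 fix was typically a Modes line in the Display subsection of xorg.conf; a sketch (the identifier, driver and depth will vary by setup):</p>

```
Section "Screen"
    Identifier "Default Screen"
    SubSection "Display"
        Depth 24
        Modes "1280x1024" "1024x768"
    EndSubSection
EndSection
```

<p>With no usable Modes line the server falls back to whatever the monitor probe returns, hence the odd resolution on first boot.</p>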
<p>As I said, the negatives are MINOR. Really, really minor. CrunchBang was designed as a “2nd or 3rd” distribution for users, so it targets those who are more than happy to have a twiddle with the system configuration and aren’t fazed by the thought of text-only configuration. If you fall into this category and you’re looking for a lightweight desktop distribution, then I’d suggest you grab a copy of CrunchBang and give it a whirl.</p>
]]></content></item><item><title>Liverpool LUG Talk</title><link>https://nikdoof.com/posts/2008/liverpool-lug-talk/</link><pubDate>Thu, 04 Dec 2008 00:19:00 +0000</pubDate><guid>https://nikdoof.com/posts/2008/liverpool-lug-talk/</guid><description>So, I finally got round to giving a talk at LivLUG, anyone who knows me will know i’m not the best public speaker in the world and I get quite nervous at the thought. It was time to grab the bull by the horns and actually do it!
My first talk was on the usage of the Wiimote within Linux, The Wiimote are very simple Bluetooth devices that can be accessed over the standard APIs with an additional library called CWiid.</description><content type="html"><![CDATA[<p>So, I finally got round to giving a talk at <a href="http://livlug.org.uk/">LivLUG</a>; anyone who knows me will know I’m not the best public speaker in the world and I get quite nervous at the thought. It was time to grab the bull by the horns and actually do it!</p>
<p>My first talk was on the usage of the Wiimote within Linux. The Wiimote is a very simple Bluetooth device that can be accessed over the standard APIs with an additional library called CWiid. This allows it to be used as an input device or as a general I/O device.</p>
<p>It’s quite hard to explain in text alone, so I’ve put my presentation on the <a href="http://livlug.org.uk/">LivLUG wiki</a> for everyone to have a look at. I recommend you grab it and give it a try yourself.</p>
<p>EDIT: Yes, it’s on the wiki now, but here’s the <a href="http://tensixtyone.com/other/wiimote-linux.odp">direct link</a>.</p>
]]></content></item><item><title>Manchester Open Street Map Party</title><link>https://nikdoof.com/posts/2008/manchester-open-street-map-party/</link><pubDate>Thu, 23 Oct 2008 11:16:00 +0000</pubDate><guid>https://nikdoof.com/posts/2008/manchester-open-street-map-party/</guid><description>This weekend (25th to 26th of October) CloudMade will be hosting a Open Street Map mapping party in Manchester at the Marbella Cafe. The weekend will consist of walking the streets on the east side of Manchester, friendly banter and a few pints. Everyone is welcome from hardcore experts to the complete newbies.
The day starts at 10:00am with a introduction and a simple overview of how to map your data, then we’ll head out and meet back at the Marbella Cafe for lunch, then head out again for the afternoon and concluding in drinks in a local watering hole.</description><content type="html"><![CDATA[<p>This weekend (25th to 26th of October) <a href="http://cloudmade.com">CloudMade</a> will be hosting an <a href="http://openstreetmap.org">Open Street Map</a> mapping party in Manchester at the <a href="http://marbellacupcakes.com/">Marbella Cafe</a>. The weekend will consist of walking the streets on the east side of Manchester, friendly banter and a few pints. Everyone is welcome, from hardcore experts to complete newbies.</p>
<p>The day starts at 10:00am with an introduction and a simple overview of how to map your data; then we’ll head out, meet back at the Marbella Cafe for lunch, head out again for the afternoon, and conclude with drinks in a local watering hole.</p>
<p>Check out the <a href="http://wiki.openstreetmap.org/index.php/Manchester/Mapping_Party">wiki</a> and <a href="http://upcoming.yahoo.com/event/1140700/">Upcoming</a> for more details. Hope to see you all there!</p>
<p>Marbella Café<!-- raw HTML omitted -->
2nd Floor<!-- raw HTML omitted -->
Sunshine Studios<!-- raw HTML omitted -->
52-54 Newton St<!-- raw HTML omitted -->
Manchester<!-- raw HTML omitted -->
M1 1ED</p>
]]></content></item><item><title>Howto: Send SMS using a Huawei E160G and Debian</title><link>https://nikdoof.com/posts/2008/howto-send-sms-using-a-huawei-e160g-and-debian/</link><pubDate>Fri, 17 Oct 2008 11:53:00 +0000</pubDate><guid>https://nikdoof.com/posts/2008/howto-send-sms-using-a-huawei-e160g-and-debian/</guid><description>People who use their Huawei E160G on Three will know that in the Windows client you can send and receive SMS, this will come at some minor cost of £0.10 per SMS, and you can add bundles onto your mobile broadband account to make this cheaper.
Similar functionality can be achieved in Linux, and it’s very useful if your like me and want to drop someone a message when you don’t have your phone around.</description><content type="html"><![CDATA[<p>People who use their Huawei E160G on <a href="http://www.three.co.uk/">Three</a> will know that in the Windows client you can send and receive SMS. This comes at a minor cost of £0.10 per SMS, and you can add bundles onto your mobile broadband account to make it cheaper.</p>
<p>Similar functionality can be achieved in Linux, and it’s very useful if you’re like me and want to drop someone a message when you don’t have your phone around.</p>
<p>For this we’ll be using <a href="http://www.gammu.org/">Gammu</a>, a toolset for managing phones via the AT GSM command set. It was originally forked from <a href="http://www.gnokii.org/">Gnokii</a>, a similar toolset for Nokia handsets. As the E160G opens a serial port with access to the AT command set, this is a relatively easy tool to set up.</p>
<p>First of all, we need to grab the packages. As these are standard Debian packages you should have no issues.</p>
<pre tabindex="0"><code># sudo apt-get install gammu
</code></pre><p>Next, we need to configure Gammu to pick up the correct device. Check your dmesg for the serial port:</p>
<pre tabindex="0"><code>$ dmesg|grep tty
[12321.308078] usb 5-3: GSM modem (1-port) converter now attached to ttyUSB0
[12321.308275] usb 5-3: GSM modem (1-port) converter now attached to ttyUSB1
</code></pre><p>Edit ~/.gammurc, or run gammu-config to change the device settings. Your ~/.gammurc file should look similar to:</p>
<pre tabindex="0"><code>[gammu]
port = /dev/ttyUSB0
model =
connection = at19200
synchronizetime = yes
logfile =
logformat = nothing
use_locking =
gammuloc =
</code></pre><p>Give it a test by getting all the SMS from the device:</p>
<pre tabindex="0"><code># gammu getallsms
</code></pre><p>This should bring back all the SMS currently stored on the stick, which should include your login details for the Three website (unless you’ve deleted them). To send an SMS use the “sendsms” command:</p>
<pre tabindex="0"><code>$ gammu sendsms text 07874454543
Enter message text and press ^D:
Test Message!!!!!1!
Sending SMS 1/1....waiting for network answer..OK, message reference=2
</code></pre><p>Gammu has a lot more tools and options to explore. Now you have the basic config you can set up <a href="http://www.gammu.org/wiki/index.php?title=Gammu:SMSD">SMSD</a>, which can expose the ability to send SMS to a network. Gammu also has a Python interface, so you could build your own frontend client for sending SMS. For more details explore the <a href="http://www.gammu.org/wiki/index.php?title=Main_Page">Gammu Wiki</a>.</p>
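<p>As a starting point for scripting, the sendsms invocation above wraps neatly into a small shell function; a sketch (send_sms is a made-up name, and it assumes Gammu is already configured as described above):</p>

```shell
# send_sms: tiny wrapper around "gammu sendsms text" (hypothetical helper).
# $1 is the destination number; the remaining arguments form the message,
# which is fed to gammu on stdin (the same way the interactive prompt works).
send_sms() {
  number="$1"
  shift
  printf '%s' "$*" | gammu sendsms text "$number"
}

# Usage: send_sms 07874454543 "Test message"
```

<p>From here it’s a short step to cron jobs or alert scripts that text you when something needs attention.</p>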
]]></content></item><item><title>Experimentation Failure</title><link>https://nikdoof.com/posts/2008/experimentation-failure/</link><pubDate>Wed, 15 Oct 2008 11:06:00 +0000</pubDate><guid>https://nikdoof.com/posts/2008/experimentation-failure/</guid><description>My grand idea of experimenting with various distributions for the EeePC went wrong, by quite a bit as well. As mentioned in the last post I decided to have a play around with some of the distributions specifically built for the Asus EeePC 701, I was wondering if something out there can beat Debian on this little work horse.
Oh boy was I wrong.
First I attempted to install Zeee (Zenwalk for the EeePC), the installation “media” came as a compressed disk image, nothing that unusual as most of the distros come in their own little installation media package.</description><content type="html"><![CDATA[<p>My grand idea of experimenting with various distributions for the EeePC went wrong, by quite a bit as well. As mentioned in the last post I decided to have a play around with some of the <a href="http://wiki.eeeuser.com/overview.html">distributions</a> specifically built for the <a href="http://eeepc.asus.com/global/">Asus EeePC 701</a>, I was wondering if something out there can beat Debian on this little work horse.</p>
<p>Oh boy was I wrong.</p>
<p>First I attempted to install <a href="http://wiki.eeeuser.com/zeee">Zeee</a> (<a href="http://www.zenwalk.org/">Zenwalk</a> for the EeePC); the installation “media” came as a compressed disk image, nothing that unusual, as most of the distros come in their own little installation media package. It turns out that this image is a raw dump of a file system, so I had to create the installation media on a USB stick with the various handy tools, mke2fs, grub, you get the idea. After about 45 minutes of fiddling I called it a day; for some reason the GRUB installation wasn’t detecting the ext2 partition on the USB stick, and couldn’t find the menu.lst file. While this is probably a simple issue, it’s a bit more than I could be arsed with. The Zeee guys are doing well, but the installation method needs a little work, maybe a prepackaged ext2 dump.</p>
<p>After the kerfuffle with Zeee I moved onto the latest <a href="http://www.foresightlinux.org/mobile.html">Foresight Linux Mobile Edition</a>. I’ve heard Dan &amp; Fab mention Foresight on the <a href="http://linuxoutlaws.com/">Linux Outlaws</a> podcast and I had downloaded a live CD previously, so I decided to get the image and have a go. This installation went a lot more smoothly; the image was a precompiled USB installation, so no hassle there. The installation took time, but I put that down to the quality of the USB stick I was using. After about 30 minutes I had a working Foresight Linux install, and everything seemed to work out of the box, including the WiFi (which is the usual sticking point for most distros).</p>
<p>Foresight Mobile uses the Clutter-based launcher you can also find in the <a href="http://www.canonical.com/projects/ubuntu/nbr">Ubuntu Netbook Remix</a>; the mainstream applications are pre-installed and usable. Within a few minutes I hit my all-time pet hate: touchpad clicking. Ever since I’ve owned a laptop I’ve never been able to use touchpad click with any degree of success, and I don’t see any reason why it should be enabled by default. In previous distributions the way to fix this issue is to simply change the settings in <a href="http://gsynaptics.sourceforge.jp/">gsynaptics</a> or modify the Xorg config; as I was trying to operate from a user perspective I went the simple route of using gsynaptics. It wasn’t installed. I went digging around in the package manager (conary) and didn’t find a related package. After about ten minutes searching I found the “synaptics” package, which proved useless as I had no idea what it did.</p>
<p>Three hours in and my experiment with Foresight was over. People may complain that it’s a simple issue, but having the option enabled by default and then hiding the configuration in a non-standard package doesn’t help matters. I have to give Foresight kudos for being one of the first distributions to have a full “netbook” version, but it still needs a little refinement.</p>
<p>So, now I’m back on Debian, tried and tested. This time I installed using the updated <a href="http://wiki.debian.org/DebianEeePC/HowTo/Install">Lenny installation media</a> for the EeePC and it was a breeze, and since this “fresh install” a lot more of the features work consistently. In the process of configuring my machine again I’ve noticed that the older guide for the <a href="http://tensixtyone.com/perma/howto-debian-lenny-huawei-e160g">E160G using Network Manager</a> is a little wrong, so I’ll have to update that sometime. For now I’ll be sticking with ol’ faithful. Maybe when the “next big distro” gets released I’ll give it a try.</p>
]]></content></item><item><title>EeePC Experimentation</title><link>https://nikdoof.com/posts/2008/eeepc-experimentation/</link><pubDate>Sat, 11 Oct 2008 21:19:00 +0000</pubDate><guid>https://nikdoof.com/posts/2008/eeepc-experimentation/</guid><description>I’ve been using Debian on my EeePC 701 since I got it; I think the original Xandros lasted a whole two hours or so. Over the last few days I’ve been bugged by 2.6.26 issues and various XServer issues, so it’s time for a change.
Over the next few weeks it’s my plan to experiment with a few EeePC-tailored distributions, much in the same way Dan did. My first distro of choice is Zeee, which is a customised version of Zenwalk.</description><content type="html"><![CDATA[<p>I’ve been using Debian on my EeePC 701 since I got it; I think the original Xandros lasted a whole two hours or so. Over the last few days I’ve been bugged by 2.6.26 issues and various XServer issues, so it’s time for a change.</p>
<p>Over the next few weeks it’s my plan to experiment with a few EeePC-tailored distributions, much in the same way Dan did. My first distro of choice is Zeee, which is a customised version of Zenwalk. I’ve heard good things about Zenwalk, so now is my chance to experience it.</p>
<p>So, I’ll post a follow-up tomorrow.</p>
]]></content></item><item><title>SQL Server Last Full Week</title><link>https://nikdoof.com/posts/2008/sql-server-last-full-week/</link><pubDate>Wed, 17 Sep 2008 14:12:00 +0000</pubDate><guid>https://nikdoof.com/posts/2008/sql-server-last-full-week/</guid><description>While writing a new report today I was driven mad creating a dynamic query that selects the last full week. So here is the code for SQL Server, to save anyone else the pain:
dateadd(dd,0, datediff(dd,0, dateadd(day,-1*datepart(weekday,getdate())+1,dateadd(week,-1,getdate())) )) dateadd(dd,0, datediff(dd,0, dateadd(day,7,dateadd(day,-1*datepart(weekday,getdate()),dateadd(week,-1,getdate()))) ))</description><content type="html"><![CDATA[<p>While writing a new report today I was driven mad creating a dynamic query that selects the last full week. So here is the code for SQL Server, to save anyone else the pain:</p>
<pre tabindex="0"><code>dateadd(dd,0, datediff(dd,0,
   dateadd(day,-1*datepart(weekday,getdate())+1,dateadd(week,-1,getdate()))
))
dateadd(dd,0, datediff(dd,0,
   dateadd(day,7,dateadd(day,-1*datepart(weekday,getdate()),dateadd(week,-1,getdate())))
))
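-- The first expression above returns midnight on the first day of the
-- previous full week; the second returns midnight on its last day (both
-- depend on the session&#39;s DATEFIRST setting). A hedged usage sketch,
-- with a hypothetical OrderDate column:
--   WHERE OrderDate &gt;= &lt;first expression&gt;
--     AND OrderDate &lt; dateadd(day, 1, &lt;second expression&gt;)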
</code></pre>]]></content></item><item><title>Dropbox on Debian</title><link>https://nikdoof.com/posts/2008/dropbox-on-debian/</link><pubDate>Sat, 13 Sep 2008 00:00:00 +0000</pubDate><guid>https://nikdoof.com/posts/2008/dropbox-on-debian/</guid><description>Dropbox is out of closed beta and is open for the public, but what is more interesting is that they now have a client for Linux. I’ve had a beta email sat in my inbox for about 3-4 months, but I’ve never got round to signing up as I couldn’t get a client for Linux.
Dropbox is one of the new wave of online storage, sort of a cross between WebDAV and SVN, in fact, I’d say its almost exactly like SVN, just with a nice GUI.</description><content type="html"><![CDATA[<p><a href="https://www.getdropbox.com">Dropbox</a> is out of closed beta and is open for the public, but what is more interesting is that they now have a <a href="http://www.getdropbox.com/install?os=linux">client for Linux</a>. I’ve had a beta email sat in my inbox for about 3-4 months, but I’ve never got round to signing up as I couldn’t get a client for Linux.</p>
<p>Dropbox is one of the new wave of online storage, sort of a cross between <a href="http://www.webdav.org/">WebDAV</a> and <a href="http://subversion.tigris.org/">SVN</a>; in fact, I’d say it’s almost exactly like SVN, just with a nice GUI. I guess all it would take to replicate the “value-added” part of this product is for someone to develop a nice front end for <a href="http://aws.amazon.com/s3">Amazon S3</a>, and by looking at their future prices it could be cheaper.</p>
<p>Anyway, picking out the bits of the service is not what I’m here to do. At the moment I run a <a href="http://www.debian.org/">Debian</a> Testing/Unstable desktop machine, and I was quite disappointed not to see a specific Debian package for their software on the website. I realised after a few dumb minutes that I could use the Ubuntu packages.</p>
<p>In sources.list, I referenced their Gutsy archive:</p>
<pre tabindex="0"><code>deb http://www.getdropbox.com/static/ubuntu gutsy main
</code></pre><p>Then in /etc/apt/preferences I set some basic package pinning to make sure the packages didn’t collide with the existing Debian repository; not likely, but you never know.</p>
<pre tabindex="0"><code>Package: *
Pin: release a=gutsy
Pin-Priority: 400
</code></pre><p>Do an “apt-get update” and you should have the “nautilus-dropbox” package available to install. Simple!
Remember, you’ll need to restart nautilus by either killing it (killall -9 nautilus) or restarting your Gnome session.
[edit: Fixed the first URL]</p>
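<p>Condensed, the whole procedure looks something like this (a sketch, run as root, assuming the gutsy repository line above is still current):</p>
<pre tabindex="0"><code>echo deb http://www.getdropbox.com/static/ubuntu gutsy main | tee -a /etc/apt/sources.list
apt-get update
apt-get install nautilus-dropbox
killall -9 nautilus
</code></pre>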
]]></content></item><item><title>Liverpool LUG Lists</title><link>https://nikdoof.com/posts/2008/liverpool-lug-lists/</link><pubDate>Mon, 01 Sep 2008 00:00:00 +0000</pubDate><guid>https://nikdoof.com/posts/2008/liverpool-lug-lists/</guid><description>I posted a few days ago about the creation of an administration list for all those who are interested in lending a helping hand to the LUG. The idea seems to have gone down well, so I&amp;rsquo;ve taken the next step and set up the list on our new hosting server, Ringo. If you&amp;rsquo;re interested in helping out, sign up for the mailing list. It&amp;rsquo;s currently on moderated signup as I want to keep an eye on who signs up; that way I&amp;rsquo;ll know who they are and be able to prod them if they go idle.</description><content type="html"><![CDATA[<p>I posted a few days ago about the creation of an administration list for all those who are interested in lending a helping hand to the LUG. The idea seems to have gone down well, so I&rsquo;ve taken the next step and set up the list on our new hosting server, Ringo. If you&rsquo;re interested in helping out, sign up for the mailing list. It&rsquo;s currently on moderated signup as I want to keep an eye on who signs up; that way I&rsquo;ll know who they are and be able to prod them if they go idle. Please remember, the admin list is for LUG administration only; the banter will still remain on the main LUG list. On a side note, I just want to bring people up to speed with what we&rsquo;re able to offer as a LUG: some personal hosting, a @livlug.org.uk email address, and shell access. If you&rsquo;re interested just drop a mail to <a href="mailto:sysadmin@livlug.org.uk">sysadmin@livlug.org.uk</a>.</p>
]]></content></item><item><title>Input based EeePC ACPI module</title><link>https://nikdoof.com/posts/2008/input-based-eeepc-acpi-module/</link><pubDate>Mon, 31 Mar 2008 00:00:00 +0000</pubDate><guid>https://nikdoof.com/posts/2008/input-based-eeepc-acpi-module/</guid><description>The eee-acpi module, hacked by Asus from the asus-laptop module, currently manages the kill switches for the various extra hardware (wifi, cardreader, webcam) and also handles the extra Fn keys via ACPI events.
While hotkeys via ACPI are well supported by acpid and its ilk, this is no longer the best way to handle these types of keys. Generally, the drivers for the mainstream laptops (ibm/lenovo, hp) have moved over to the input framework to communicate these key presses, usually displaying as an extra input device under /dev/input.</description><content type="html"><![CDATA[<p>The eee-acpi module, hacked by Asus from the <a href="https://sourceforge.net/projects/acpi4asus/">asus-laptop</a> module, currently manages the kill switches for the various extra hardware (wifi, cardreader, webcam) and also handles the extra Fn keys via ACPI events.</p>
<p>While hotkeys via ACPI are well supported by acpid and its ilk, this is no longer the best way to handle these types of keys. Generally, the drivers for the mainstream laptops (IBM/Lenovo, HP) have moved over to the input framework to communicate these key presses, usually showing up as an extra input device under /dev/input. These input devices can be handled by HAL and notifications of key presses sent over the dbus, allowing desktop environments such as GNOME to handle these events without any strange hackery and fakekeys calls.</p>
<p>Thanks to the previous work of the asus-laptop <a href="http://blog.eikke.com/index.php/ikke/2007/08/15/asus_laptops_multimedia_keys_and_input">developers</a> there’s a <a href="https://web.archive.org/web/20090615141952/http://key.nicolast.be/files/asus_acpi_to_input.patch">patch</a> that disables the existing ACPI events and provides an input device for the extra keys. I’ve managed to hack together a version of the eeepc-acpi module using the Debian 1.01 source to export the “Asus Extra Buttons” input device.</p>
<p>After you have the inputs available, it’s a simple matter of producing an <a href="https://web.archive.org/web/20080725003556/http://people.freedesktop.org/~hughsient/quirk/quirk-keymap-index.html">FDI for HAL</a> to identify the device and map the scan codes to the actual keys. After the initial FDI was created I could use the volume keys without any extra software, and could also map the two application buttons (marked as VGA switch, and AP button) in GNOME to call scripts. The wifi key (Fn+F2) presented more of a problem: while it was mapped to “wifi”, HAL didn’t know how to actually switch off the Atheros card. The killswitch for the card would need to be implemented as a program that listens to dbus, something a little outside my skill set.</p>
<p>The other buttons on the keyboard (sleep, brightness) are pure ACPI calls. This presents a problem: the keys produce events via the input layer and the ACPI layer at the same time. So, for example, you hit the brightness down button and HAL will pick up the notification and display the brightness OSD, but it quickly goes out of sync as what HAL sees and what ACPI is doing are completely separate. Again, this is outside my skill set, but I’d probably approach it by filtering out the keys in the kernel and letting the ACPI events do their work.</p>
<p>The guys over at Fedora have a <a href="https://fedoraproject.org/wiki/Eee_PC?rd=EeePc">similar idea</a> of moving over to an input-based module, but for the moment no source has been produced. Due to the numerous little issues I’ve had, I’ve decided to put this little project on the back-burner until I see what the Fedora people have produced; after all, they’ll have people that are more experienced in this type of thing, whereas I am not.</p>
<p>I’ll get round to posting the source deb for the modified eee-acpi tonight or tomorrow.</p>
]]></content></item><item><title>EeePC, Suspending, and Debian Lenny</title><link>https://nikdoof.com/posts/2008/eeepc-suspending-and-debian-lenny/</link><pubDate>Thu, 27 Mar 2008 00:00:00 +0000</pubDate><guid>https://nikdoof.com/posts/2008/eeepc-suspending-and-debian-lenny/</guid><description>After initially setting up my EeePC to run Debian Lenny I quickly encountered an issue where the madwifi drivers wouldn’t resume correctly. The card would be unable to operate as it had lost sync with the kernel drivers; removing and reloading the related modules solved the issue.
Some people on the EeeUser forums ripped out the existing script from the default Xandros install, a simple acpi script that jumped through some hoops to disable the modules and clear everything down.</description><content type="html"><![CDATA[<p>After initially setting up my <a href="https://web.archive.org/web/20080517044553/http://eeepc.asus.com/">EeePC</a> to run <a href="http://www.debian.org/">Debian</a> Lenny I quickly encountered an issue where the madwifi drivers wouldn’t resume correctly. The card would be unable to operate as it had lost sync with the kernel drivers; removing and reloading the related modules solved the issue.</p>
<p>Some people on the <a href="https://web.archive.org/web/20080517044553/http://forums.eeeuser.com/">EeeUser forums</a> ripped out the existing script from the default Xandros install, a simple acpi script that jumped through some hoops to disable the modules and clear everything down. The script worked as part of the existing acpi-support package and worked when using the acpi suspend options; now that I’ve got GNOME and HAL installed it turns out these are no longer used, so the issue remained.</p>
<p>After a little research it seems that the suspend support within Debian is currently in a state of flux, and a few <a href="http://bugs.debian.org/cgi-bin/bugreport.cgi?bug=451380">bug tickets</a> have been raised about the various issues. This provided my first hint of how to resolve it: a quick script in pm-utils, much like the acpi one, will fix this for good.</p>
<p>Simply place this script into your <code>/usr/lib/pm-utils/sleep.d/</code> folder. I’ve got it as <code>45eee-wifi</code>, so that if the script fails for some reason at least your video will be resumed.</p>
<pre tabindex="0"><code>#!/bin/bash

PWR=$(cat /proc/acpi/asus/wlan)

load_modules() {
   modprobe ath_pci
   modprobe wlan_wep
   modprobe wlan_tkip
   modprobe wlan_ccmp
}

unload_modules() {
   rmmod ath_pci
   rmmod wlan_scan_sta
   rmmod wlan_tkip
   rmmod wlan_wep
   rmmod wlan_ccmp
   rmmod ath_rate_sample
   rmmod wlan_acl
   rmmod wlan
   rmmod ath_hal
}

wifi_on() {

   if [ &#34;$PWR&#34; = &#34;0&#34; ]; then
      modprobe pciehp pciehp_force=1
      sleep 3
      echo 1 &gt; /proc/acpi/asus/wlan
      sleep 2
      load_modules
      sleep 1
   fi
}

wifi_off() {
   if [ &#34;$PWR&#34; = &#34;1&#34; ]; then
      unload_modules

      echo 0 &gt; /proc/acpi/asus/wlan
      sleep 1
      rmmod pciehp
      rmmod pci_hotplug
   fi
}

case &#34;$1&#34; in
        hibernate|suspend)
                wifi_off
                ;;
        thaw|resume)
                wifi_on
                ;;
        *)
                ;;
esac
</code></pre><p>The scripts in the “Arch acpi-eee” package provided the basis for this script, and it also works a lot better than the existing scripts provided on eeeuser.com.</p>
]]></content></item><item><title>Howto: Download MP4 from BBC iPlayer</title><link>https://nikdoof.com/posts/2008/howto-download-mp4-from-bbc-iplayer/</link><pubDate>Sun, 09 Mar 2008 00:00:00 +0000</pubDate><guid>https://nikdoof.com/posts/2008/howto-download-mp4-from-bbc-iplayer/</guid><description>With the launch of BBC iPlayer for iPhones it seems they’ve let slip a little extra “feature”. You can now download programs from BBC iPlayer without DRM in a well-encoded MP4 format. How? Easy.
First of all, install the User Agent Switcher extension for Firefox and set up the iPhone user-agent:
Description: iPhone User Agent: Mozilla/5.0 (iPhone; U; CPU like Mac OS X; en) App Name: AppleWebKit/420+ (KHTML, like Gecko) App Version: Version/3.</description><content type="html"><![CDATA[<p>With the launch of BBC iPlayer for iPhones it seems they’ve let slip a little extra “feature”. You can now download programs from BBC iPlayer without DRM in a well-encoded MP4 format. How? Easy.</p>
<p>First of all, install the User Agent Switcher extension for Firefox and set up the iPhone user-agent:</p>
<ul>
<li>Description: iPhone</li>
<li>User Agent: Mozilla/5.0 (iPhone; U; CPU like Mac OS X; en)</li>
<li>App Name: AppleWebKit/420+ (KHTML, like Gecko)</li>
<li>App Version: Version/3.0</li>
<li>Platform: Mobile/1A542a Safari/419.3</li>
</ul>
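<p>If you’d rather test outside Firefox, those fields combine into a single user-agent string; a sketch with curl (the programme URL is just a placeholder):</p>
<pre tabindex="0"><code>curl -A &#34;Mozilla/5.0 (iPhone; U; CPU like Mac OS X; en) AppleWebKit/420+ (KHTML, like Gecko) Version/3.0 Mobile/1A542a Safari/419.3&#34; http://www.bbc.co.uk/iplayer/...
</code></pre>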
<p>Now browse to any BBC iPlayer program page and you’ll notice that it tries to serve up a QuickTime video: the MP4. As the URL isn’t displayed raw in the code, you can use a little JavaScript wizardry to redirect you to the raw stream:</p>
<pre tabindex="0"><code>javascript:(function(){url = document.getElementById(&#39;mip-flash-player&#39;).getElementsByTagName(&#34;object&#34;)[0].childNodes[0].value; window.location = url;})()
</code></pre><p>Or if you want a simple drag and drop bookmarklet: <!-- raw HTML omitted -->iPlayer Download<!-- raw HTML omitted --></p>
<p>The BBC will either pull the iPhone beta or re-engineer it with the iPhone SDK to develop a full client; either way, this will not last long. Initially, when I heard the iPhone was supported by iPlayer I was outraged: why does a device with only around 100,000 users in the UK get priority over an operating system? It almost seems like karma is against them, but no doubt this will get into the news as &ldquo;hackers exploiting the system&rdquo; rubbish. Only time will tell; enjoy it while you can.</p>
]]></content></item><item><title>Off-site assets with S3</title><link>https://nikdoof.com/posts/2008/off-site-assets-with-s3/</link><pubDate>Sat, 08 Mar 2008 00:00:00 +0000</pubDate><guid>https://nikdoof.com/posts/2008/off-site-assets-with-s3/</guid><description>Finally, I’ve got round to moving the /misc folder off onto Amazon S3. I’ve decided to do this due to my upcoming hosting move; not having to worry about several hundred megabytes of static data will ease the strain of moving. So, how have I done it?
First of all I created a bucket on S3 with the name assets.nikdoof.net; from there I set up a CNAME in BIND for assets.nikdoof.net pointing to s3.</description><content type="html"><![CDATA[<p>Finally, I’ve got round to moving the <code>/misc</code> folder off onto Amazon S3. I’ve decided to do this due to my upcoming hosting move; not having to worry about several hundred megabytes of static data will ease the strain of moving. So, how have I done it?</p>
<p>First of all I created a bucket on S3 with the name assets.nikdoof.net; from there I set up a CNAME in BIND for assets.nikdoof.net pointing to s3.amazonaws.com to allow direct referencing of the files within the bucket.</p>
<pre tabindex="0"><code>assets.nikdoof.net.     IN      CNAME   s3.amazonaws.com.
</code></pre><p>Then, for the relocation of the misc folder, I set up a simple Apache mod_rewrite rule to send all requests for the misc folder to S3.</p>
<pre tabindex="0"><code>RewriteEngine on
RewriteRule ^/misc/(.*)$ http://assets.nikdoof.net/$1 [R,L]
</code></pre><p>So now it’s all up and working, and to give it a try yourself, <a href="img/Rachel-Stevens2.jpg">here</a> is a fetching wallpaper of Rachel Stevens.</p>
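<p>You can sanity-check the rewrite mapping from the shell; a quick sketch that mirrors the rule with sed (the path is just an example):</p>
<pre tabindex="0"><code>echo /misc/img/Rachel-Stevens2.jpg | sed -e s@^/misc/@http://assets.nikdoof.net/@
# http://assets.nikdoof.net/img/Rachel-Stevens2.jpg
</code></pre>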
]]></content></item><item><title>Local LUGs</title><link>https://nikdoof.com/posts/2007/local-lugs/</link><pubDate>Mon, 31 Dec 2007 00:00:00 +0000</pubDate><guid>https://nikdoof.com/posts/2007/local-lugs/</guid><description>As part of a new year resolution (of a sort), I’ve decided to become more involved in the Linux community and one of the big stepping stones is the local LUGs. While ManLUG is active and quite easy to attend and keep up to speed with, some of the smaller local ones have fallen into decline over the last few years. I guess a major issue is that we have such a well-respected LUG within a few miles; Manchester LUG has been going since 1994 and there are people in that group who have helped with major milestones in Linux (such as the MCC Interim releases).</description><content type="html"><![CDATA[<p>As part of a new year resolution (of a sort), I’ve decided to become more involved in the Linux community and one of the big stepping stones is the local LUGs. While <a href="http://www.manlug.org/">ManLUG</a> is active and quite easy to attend and keep up to speed with, some of the smaller local ones have fallen into decline over the last few years. I guess a major issue is that we have such a well-respected LUG within a few miles; Manchester LUG has been going since 1994 and there are people in that group who have helped with major milestones in Linux (such as the <a href="http://en.wikipedia.org/wiki/MCC_Interim_Linux">MCC Interim releases</a>). So today, I posted on the Liverpool LUG mailing list in an attempt to stir up some action. Hopefully in the next few days we can get something organised; even if it’s just a pub meet, it’ll be better than nothing.</p>
]]></content></item><item><title>OSX 10.4.10 and iScroll woes</title><link>https://nikdoof.com/posts/2007/osx-10-4-10-and-iscroll-woes/</link><pubDate>Thu, 08 Nov 2007 00:00:00 +0000</pubDate><guid>https://nikdoof.com/posts/2007/osx-10-4-10-and-iscroll-woes/</guid><description>So I finally rebooted my laptop after the 10.4.10 update; fine, a few months late, but what the hell. Anyway, my Powerbook started kernel panicking on boot. After a few minutes of investigation I found that the iScroll kext was causing all the issues; a quick removal of the file in single-user mode and I was back up and running&amp;hellip;
So much for seamless updates, eh?</description><content type="html"><![CDATA[<p>So I finally rebooted my laptop after the 10.4.10 update; fine, a few months late, but what the hell. Anyway, my Powerbook started kernel panicking on boot. After a few minutes of investigation I found that the iScroll kext was causing all the issues; a quick removal of the file in single-user mode and I was back up and running&hellip;</p>
<p>So much for seamless updates, eh?</p>
]]></content></item><item><title>Nokia N95</title><link>https://nikdoof.com/posts/2007/nokia-n95/</link><pubDate>Tue, 23 Oct 2007 00:00:00 +0000</pubDate><guid>https://nikdoof.com/posts/2007/nokia-n95/</guid><description>I’d been mulling over getting an N95 for a few weeks, and I was quite hyped up about moving to an intelligent handset (compared to my Nokia 3 Series mobile). To cut a long story short, I didn’t get the handset but Jo got it for me on Orange. I’m 6 days into my usage of the N95, so I feel now is a good time to air some initial views.</description><content type="html"><![CDATA[<p>I’d been mulling over getting an N95 for a few weeks, and I was quite hyped up about moving to an intelligent handset (compared to my Nokia 3 Series mobile). To cut a long story short, I didn’t get the handset but Jo got it for me on Orange. I’m 6 days into my usage of the N95, so I feel now is a good time to air some initial views.</p>
<p>First of all, let’s start with the obvious. The N95 is Nokia’s convergence product, combining 3G, WiFi, and GPS with the Series60 3rd Edition (SymbianOS 9.1) software. The N95 has been a hit with only a few, due to the ever-crowded marketplace (HTC smartphones, iPhone) and a few minor issues. Nokia have recently addressed the major concerns with the launch of the N95-3, which has 8GB of on-board storage, a better battery, and a larger screen, but it could be a little late with the impending arrival of the iPhone in Europe. I’ve got the N95-1 version of the phone, which was the main release version for the UK.</p>
<p>First boot, the phone showed the traditional “holding hands” Nokia branding, date/time setup, then dropped into the front screen. For the last year or so I’ve been using an HTC Wizard running Windows Mobile and have had to deal with a fair share of awful interface design from the hands of 3rd party developers, but the only way I could describe the front screen that came with the current firmware is an abomination. Orange had decided, in its infinite wisdom, to replace the plain and simple Series60 front screen with a menu-driven nightmare that chews RAM and makes the whole phone sluggish. I had to get rid of it. I jumped into the familiar menus and tried in vain to remove the horrid front screen; it seems Orange doesn’t want you to remove it.</p>
<p>I might as well admit, I’ve voided my warranty already; in fact, I voided it on the same day. There are a few details available on-line on how to change your product ID to a generic Nokia one, then use the official Nokia Software Update to re-flash the phone with a nice generic firmware. This worked a treat. It might just be my opinion, but the Orange menu was just too much of a hindrance on the phone, enough to warrant trying to get rid of it as fast as I could. This is a carrier issue, not really what everyone would experience with the phone.</p>
<p>Bar the misfortune of the Orange menu, the interface is slick and easy to use. The menus for the phone are a tad over-complicated, but you soon get used to the location of the commonly used functions. So, I’m keeping it; flashy menus and features win me over. The battery life could do with some improvement, and hopefully Nokia will release the improved battery for N95-1 edition handsets, but I guess only time will tell.</p>
]]></content></item><item><title>Open Street Map</title><link>https://nikdoof.com/posts/2007/open-street-map/</link><pubDate>Mon, 22 Oct 2007 00:00:00 +0000</pubDate><guid>https://nikdoof.com/posts/2007/open-street-map/</guid><description>Ever since I got my N95 I’ve been doing some small mapping for the Open Street Map project. I’ll have to say, it has been fun. As strange as it sounds, it’s fun walking the streets of the local area.
In Widnes there’s a lot of unmapped area; what has been done was done by Chris, and only around Appleton. In 3-4 days I’ve managed to do a few of my major routes.</description><content type="html"><![CDATA[<p>Ever since I got my <a href="https://web.archive.org/web/20080517044553/http://www.nseries.com/%5B">N95</a> I’ve been doing some small mapping for the <a href="http://www.openstreetmap.com/">Open Street Map</a> project. I’ll have to say, it has been fun. As strange as it sounds, it’s fun walking the streets of the local area.</p>
<p>In Widnes there’s a lot of unmapped area; what has been done was done by <a href="https://web.archive.org/web/20080517044553/http://www.chrishowells.co.uk/">Chris</a>, and only around Appleton. In 3-4 days I’ve managed to do a few of my major routes. I’d highly recommend anyone with a GPS to get out there and help the project.</p>
]]></content></item></channel></rss>