Tuesday, July 21, 2015

automatic backup with proxmox

This was such a pain in the ass to find... but this is obviously a noob problem.  The steps:

  1. Click the "Datacenter" root of the tree on the left of the UI.  Doesn't matter what view you have selected.  Right where this giant-ass red arrow is pointed:

  2. Next, click the "Backup" tab.  Not sure what else you'd do at this point.  Look for the huge red arrow again:

  3. Now click "Add".  Another enormous arrow points the way:

  4. Now you get a fairly self-explanatory popup for configuring your automatic backups.  No arrows this time :(

  5. Some potential sticking points on the backup form:
    1. If you have no options under "Storage", you need to set up some storage other than the Proxmox boot disk.  Google around for it; I don't have any big-ass arrow-covered guides for that.
    2. The mode you choose matters: 
      1. "Snapshot" is not a real backup - if your VM server's disk craps out and you want to restore from backups, you'll be SOL if you chose snapshots.
      2. "Stop" is the mode I use - while the backup is happening, the VM will not be running, but you get the entire state of the VM into the backup, which makes it a real backup.  On my Xeon E3-1250 (v1 or v2) with VM disks of a few dozen GB each, this takes very little time, and I'm using the best compression as well.  If you have giant VMs or otherwise really care about performance, you may want to investigate further.
      3. "Suspend" is identical to "Stop" for folks using KVM (VMs).  It differs only for the poor bastards who chose the HD-DVD of virtualization technologies, OpenVZ.  (If you're one of those poor bastards, get out as soon as possible - they're chopping OpenVZ support out of newer versions of Proxmox.  Your pain is my gain, because this allows Proxmox to move away from that ancient 2.6 kernel!)
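For the CLI-inclined: as far as I can tell, the job this GUI flow creates lands as a cron-style line in /etc/pve/vzdump.cron on the host, so you can eyeball it (or tweak it) there.  A sketch - the VM IDs, storage name, and schedule below are all made up:

```
# /etc/pve/vzdump.cron - hypothetical job: 2am every Saturday, full-stop backups
0 2 * * 6           root vzdump 100 101 --mode stop --compress lzo --storage backups --quiet 1
```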

Wednesday, July 15, 2015

windows nut client configuration

Part 2... (continued from part 1, here)

1. Allow remote connections on the server

# sudo vi /etc/nut/upsd.conf

Add a listen directive to the IP:

LISTEN (your IP) 3493
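For example, to keep local access working while also listening on the LAN, you'd end up with two directives.  This is a sketch - 192.168.1.10 is a stand-in for your server's actual IP, and leaving the port off makes upsd use its default:

```
LISTEN 127.0.0.1
LISTEN 192.168.1.10
```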

And then restart upsd:

# sudo pkill upsd
# sudo upsd

Make sure it came back, and that the new directive is good:

# sudo upsc mybu650@localhost
# sudo upsc mybu650@(your IP)

2. Add a slave user on the server

# sudo vi /etc/nut/upsd.users

    [slave4u]
    password = somethingelse
    actions = SET
    instcmds = ALL
    upsmon slave

3. Install WinNut

Here is the most recent project page, here is the downloads page (not updated since 2011, ack), and here is a direct link to the most recent msi installer.  Verified to work on Windows 7.

Now install that sucker.

4. Configure WinNut

Open WinNut.  The second row should be labeled "Configuration file path", and have an edit button on the right side.

Click that.  This should open an editor on an upsmon.conf file, just like the one we had on the server.  Add a MONITOR directive pointing at the server, using the new slave credentials we created in step 2:

MONITOR mybu650@(your IP) 1 slave4u somethingelse slave

Save and close that, click "Apply and start WinNut", and then click the "View" button to see the log file.  If you did it all right, it should say that you're connected!

5. Conclusion

WinNut seems ultra weak.  But it also appears to work, so I'll let it go for now.  But not for long...


Tuesday, July 14, 2015

nut server configuration

Nut sucks to configure.  There is hardly any recent, reliable info out there (aside from the inline documentation in the conf files, the wealth of information on the official site, and the helpful error output - but who wants to wade through all that ;).  Plus, a few years ago it was a LOT worse, so count your blessings and use a recent distro, ya whippersnappers.

So let's change that.  Here's a guide that's easy to digest and may even work for you!

1. Hardware and driver selection

I'll be setting up a nut server in this one, with my ancient APC Back-UPS 650.  No, not the one you get when you google that - mine is so old, they are now reusing the name for a newer product.  But it's heavy as hell (that's a positive for me), I just replaced the battery, nut supports it, and I got it for free.

The Back-UPS uses a serial interface, and since the desktop I'm setting up as the server has no serial ports, I'm using a USB-serial converter, available for a few dollars everywhere.

Using a USB-serial converter means you'll be using /dev/ttyUSB[0-9], instead of who knows what for a USB UPS.  It's probably at /dev/ttyUSB0, but run this command to find it:

# ls /dev/ttyUSB*

If that comes back with nothing, then you probably hooked up your UPS wrong.  (If your UPS is on a real serial port, it will probably be at ttyS[0-9], or maybe ttyO[0-9], or something else weird.)  If that command comes back with several ttyUSB entries, you'll have to try them all in turn in the section below, or try unplugging and replugging the UPS, checking which dev entry goes away and then comes back.
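That unplug/replug comparison is easy to script.  Here's a sketch of the logic - the two device lists are hard-coded stand-ins for what ls /dev/ttyUSB* printed before and after pulling the UPS's plug:

```shell
# tty device lists captured before and after unplugging the UPS (stand-in values)
before="/dev/ttyUSB0 /dev/ttyUSB1"
after="/dev/ttyUSB0"

# whichever device disappeared is the UPS
ups_dev=""
for d in $before; do
  case " $after " in
    *" $d "*) ;;            # still present - not the UPS
    *) ups_dev="$d" ;;      # gone after unplugging - that's our UPS
  esac
done
echo "UPS is on $ups_dev"
```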

First, you're going to need to find your UPS on the nut HCL.  The point is to find which driver and arguments are needed to drive your UPS.  I'll wait.

Now, if you don't find what's obviously your UPS, don't panic.  All may not be lost.  My search, for example, returned three UPS variants, none of which were the 650.  The differences relate to different cabling configurations, and all use the same "genericups" driver, but with different "upstype" arguments.  Since I don't know how my UPS is cabled, I'll just try each of them.

Of course, before we start we also have to install the server package:

# sudo aptitude install nut-server

2. Configure the UPS driver

Open the config file for your UPS hardware:

# sudo vi /etc/nut/ups.conf

Add a section for your UPS - make sure you put it AFTER all the configuration directives:

    [mybu650]
    driver = genericups
    port = /dev/ttyUSB0
    upstype = 1
    desc = "The beige tractor battery"

The heading is whatever you want to call this UPS.  The driver (and, for me, the "upstype" argument) comes from the nut HCL.  We found the port in the section above.  And the description is also just another place to put some useless text you'll probably never see.

Save the file.

You'll also probably need to add the nut user to the dialout group, so it can access the serial port:

# sudo usermod -aG dialout nut

Now, to test the config, run:

# sudo upsdrvctl start

This will probably fail, because configuring nut is hard.  Read the error output, fix stuff, and try again.

Otherwise, if you get no error messages, your driver config is correct!

3. Configure users

Open the users file:

# sudo vi /etc/nut/upsd.users

Add a user and pass:

    [admin]
    password = something
    actions = SET
    instcmds = ALL
    upsmon master

4. Configure the monitor and start the data server

Open the monitoring daemon's config file:

# sudo vi /etc/nut/upsmon.conf

Add a MONITOR directive:

MONITOR mybu650@localhost 1 admin something master

Start upsd:

# sudo upsd

If you've managed to not mess anything up, you should get no angry messages.  If you got some, read them and fix the problem.

Now, check that you're connected:

# sudo upsc mybu650@localhost

This should give you a bunch of output, the most important of which is the ups.status line.  Of course, since this is nut we're talking about, the value is cryptic, but basically if it says "OL", that means "online", and everything is good!
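If you'd rather script that check than eyeball it, extracting ups.status is a one-liner.  The sample output below is abridged and made up - on a real box you'd pipe in actual upsc output instead:

```shell
# abridged, made-up upsc output for demonstration
upsc_output="battery.charge: 100
ups.model: Back-UPS 650
ups.status: OL"

# extract the value of the ups.status line
status=$(printf '%s\n' "$upsc_output" | awk -F': ' '$1 == "ups.status" { print $2 }')

if [ "$status" = "OL" ]; then
  echo "UPS is online"
else
  echo "UPS status: $status"
fi
```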

5. BONUS: Configure a tray icon

If your nut server is also a desktop, you might want a status icon.  This is simple.  Install nut-monitor, aka NUT-monitor:

# sudo aptitude install nut-monitor

You can start it like this:

# NUT-monitor

I have no idea why you have to yell it.  If you want to make it autostart in the tray, start it like this:

# NUT-monitor --start-hidden

Careful, it's a bitch to kill like that - Ctrl-C wouldn't do it for some reason.

Add that command to your startup scripts to get nut status all the time.  For example, on an openbox system, just:

# vi ~/.config/openbox/autostart

And add:

NUT-monitor --start-hidden &

Boo yah!

(Go to part 2 to see how to install nut for a Windows 7 client.)

Monday, June 15, 2015

import local git repos to GOGS

A few months back I decided I needed a local git server of some kind, and after doing a little research settled on GOGS (who knows why), with a few runners up in case things got dicey with GOGS - Phabricator was one, among a few others.  Today I decided it was time to make the git server happen, and things got very dicey but I stuck with GOGS anyway.

Here are the clean and reduced instructions for cloning your local git repos into GOGS.  This is probably pretty basic for some people, but a git wizard I am not.

  1. Convert your local repos into a bunch of bare repos:

    mkdir bare-repos
    cd bare-repos
    git clone --bare --local /home/you/code/myfrivolousproject
    # repeat until there are "frivolousproject.git" dirs for all your frivolous projects inside bare-repos
  2. Convert them so we can use a dumb webserver:

    cd frivproj1
    git update-server-info -f # I'm not sure, but it seemed like I needed the -f
    cd ../frivproj2
    git update-server-info -f # also, thanks to this guy for this key step!
    # etc...
  3. Start a dumb webserver:

    cd bare-repos
    python -m SimpleHTTPServer
  4. In GOGS, hover over the + at the top right, choose "New Migration", then enter "http://LOCALIPADDR:8000/myfrivproj1.git".  You can figure out what to enter for the rest of the fields.  Authentication is not needed.  MAKE SURE you enter the actual IP of the machine running the webserver, and not localhost!  I think I screw this up every time.
  5. If you screw up, you'll get an extremely unhelpful 500 page, and something like "repository not found" in the logs.  Probably you forgot the update-server-info step, or started the webserver in the wrong directory.  But if you somehow managed to make it all work right, you'll get dropped into your new GOGS project page!
  6. After you do this, you may want to add the GOGS repo as the origin of your local checkout.  cd to the checkout and run git remote add origin $URL, where $URL is the HTTPS clone URL found on the GOGS project page.
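Steps 1 and 2 above are mechanical enough to loop.  Here's a sketch that bare-clones everything under a source directory and runs update-server-info on each result (the paths in the usage comment are hypothetical):

```shell
# clone every repo under $1 as a bare repo into $2, then run
# update-server-info on each so a dumb webserver can serve them
bare_mirror() {
  src="$1"; dst="$2"
  mkdir -p "$dst"
  for repo in "$src"/*/; do
    git clone --quiet --bare --local "$repo" "$dst/$(basename "$repo").git"
  done
  for d in "$dst"/*.git; do
    git --git-dir="$d" update-server-info -f
  done
}

# usage:
# bare_mirror /home/you/code /home/you/bare-repos
```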

That took way too much work, I must be tired.

Thursday, June 4, 2015

convert existing live btrfs root filesystem to raid1

Fuck it, we're doing it live!

All commands below make these assumptions:

  • sda = EXISTING boot drive
  • sda1 = EXISTING root fs
  • sdz = NEW boot drive
Modify the dev entries in the commands below according to your setup or bad things will happen!
  1. Copy the partition table from the old disk to the new one.  You probably don't want to get this backwards.

    yo@mama:# sfdisk -d /dev/sda | sfdisk /dev/sdz

    After doing this you may want to pull and then reinsert the drive, to refresh the entries in
    /dev.  Linux can be weird about this, partprobe doesn't always work.
  2. The disks I'm using are not the same size, the new one is a bit larger than the current one.  If the new one were smaller that would be a bit trickier, but since it isn't I won't really get into that.  (Basically you'd either need to recreate the partitions by hand on the new drive, or send the sfdisk output to file and modify that by hand before throwing on the new drive.)

    Instead, we'll just let that extra space be.
  3. Add the new disk.  Assuming your existing disk is mounted as the root filesystem, the command would go like this:

    yo@mama:# btrfs device add /dev/sdz1 /
    yo@mama:# btrfs balance start -dconvert=raid1 -mconvert=raid1 /

    If you've read the docs, you know there is also an -s flag for system chunks that can be passed to btrfs balance start.  You don't need to use -s here, according to various people on the internet, and I even found this comment that verifies it in the btrfs-progs sources:

     * allow -s only under --force, otherwise do with system chunks
     * the same thing we were ordered to do with meta chunks

    Also, for some reason balances happen in the foreground by default and output nothing.  Watch the progress by opening another terminal and running:

    yo@mama:# while btrfs balance status / ; do sleep 10; clear; done
  4. And that should be it.  You can make sure it went down as it was supposed to by running:

    yo@mama:# btrfs fi show /

    Which should give a listing with two devices.
Then, you can give it the balls-to-the-wall test by disconnecting your old boot drive. Make sure that the balance has finished before you do this!  If you're a sissy, run sync before pulling the drive.

LACP port bonding with DHCP on debian stretch

Finding information on LACP port bonding can be tricky.  It seems there's a decent guide published every couple years, but then the code changes and the information no longer really applies.

And creating a bonded interface with DHCP is even trickier.  You could solve it this way, if you wanted to cop out.  But copping out is for out coppers, and everyone knows that that ain't me.

So we're going to configure LACP port bonding (aka 802.3ad) on our brand new install of debian stretch (at the time of this writing, stretch is nearly identical to jessie, so this guide ought to work for jessie for a few years).

  1. aptitude install ifenslave
    echo 'bonding' >> /etc/modules
    echo 'mii' >> /etc/modules
    modprobe bonding

    This gives us the ability to create bond interfaces.

  2. vi /etc/network/interfaces

    And make sure the contents look something like this:

    source /etc/network/interfaces.d/*

    # Loopbackz
    auto lo
    iface lo inet loopback

    # Enslave these interfaces
    allow-hotplug eth0
    allow-hotplug eth1

    # The bonded interface
    # (slave order matters!  the first listed interface is the one whose mac is used)
    auto bond0
    iface bond0 inet dhcp
        bond-slaves eth1 eth0
        bond-mode 802.3ad
        bond-miimon 100
        bond-downdelay 200
        bond-updelay 200
        bond-lacp-rate 1
        bond-xmit-hash-policy layer2+3

  3. Reboot.  Or muck around with /etc/init.d/networking restart, ifconfig INTERFACE down|up, etc.
  4. If the ifconfig output looks good, and the output from cat /proc/net/bonding/bond0 looks good, and you have network connectivity, then you've done it!
Go have a cold beer, you've earned it.
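If you'd rather script that last check than eyeball it, counting the "up" interfaces in the bonding status file works.  The sample contents below are abridged and made up - on a real box you'd read /proc/net/bonding/bond0 instead:

```shell
# abridged, made-up contents of /proc/net/bonding/bond0
bond_status="Bonding Mode: IEEE 802.3ad Dynamic link aggregation
MII Status: up
Slave Interface: eth0
MII Status: up
Slave Interface: eth1
MII Status: up"

# the first 'MII Status' line is the bond itself; the rest are slaves
up_count=$(printf '%s\n' "$bond_status" | grep -c '^MII Status: up')
echo "$((up_count - 1)) slaves up"
```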

Saturday, May 9, 2015

on rtfm, from a manual reader

Sometimes you're reading and researching online and you get to a post where someone is asking something very basic, or is very misinformed, and a peanut gallery lurker will inevitably step forward and suggest that the poor sap "read the fucking manual", or rtfm.  When this happens, I cheer - manpages are the bedrock of unix; once you've consumed one, you should be 100% competent in the use of that tool.

But sometimes it doesn't work out like it should.  The ip and sudoers manpages are notorious disasters.  (At one point I remember reading an in-depth writeup about what went wrong with the creation of iproute2, but I can't find it now.  If anyone else has the link, please send it to me!)

Despite some gripes I have with the command interface - the primary operations are too dangerous, so there should be no short options - mdadm has a decent manpage.  But today I came across an insane, inscrutable gem:

These same layouts are available for RAID6.  There are also 4 layouts that will provide an intermediate stage for converting between RAID5 and RAID6.  These provide a layout which is identical to the corresponding RAID5 layout on the first N-1 devices, and has the 'Q' syndrome (the second 'parity' block used by RAID6) on the last device.  These layouts are: left-symmetric-6, right-symmetric-6, left-asymmetric-6, right-asymmetric-6, and parity-first-6.

What is the "Q" syndrome??  What does it all mean??  Google came up empty.  The world may never know.

Wednesday, May 6, 2015

a lovely chat with the great satan

There are some conversational topics that will make anyone squirm.  Try steering the watercooler talk towards "the big C," or drop "the C word" and watch your friends, family and coworkers' brains begin to shut down.  Yes, I'm talking about Comcast.

About the only thing worse than talking about them is talking to them, but as modern adults we are often burdened with both tasks.  The initial problem that led to my need to contact the big evil Internet machine was one that I don't really heap much blame on them for - they shipped me the wrong thing.

Last week I received a cold call.  The excellent Truecaller told me it was my wonderful ISP - since I give them a load of money each month, it seemed like it might be in my best interest to take the call.

On the phone, I was told that for $3 more a month I could get an Internet connection that was ~4x faster than my current service, plus TV with HBO, plus two cable boxes.  When I asked about shipping and setup fees, I was told that these would be waived, and two cable boxes would be shipped to my house free of charge.  About this I was ambivalent - I'd sort of rather not have TV service in my house, but since it was a requirement to get the faster Internet, I consented, thinking eventually I'd either hook it up or not.

(In my mind, this hard sales push is a direct response to Netflix's recent insane numbers and jubilant CEO.  But maybe they just like fucking with people, I don't really know.)

So the package arrived, large enough for two cable boxes but feeling empty.  Upon opening it up, I found two identical packets of cable TV information, but no cable boxes.  Chalking it up to standard Comcast fuckery, I waited, but the cable boxes never came.  For me, this shipping mixup is somewhat understandable - there's something about putting the right things in the right boxes that can be difficult for people to wrap their heads around.  Not saying that I don't want to put Comcast down as much as possible, but about this I'm not too peeved.

On the suggestion of a coworker, I decided to chat up Comcast online rather than deal with the known nightmare of a phone conversation with them.

But that chat was worse.  In it, Comcast:
  • Transferred me three times to three different departments.
  • Took one to ten minutes to respond after each message.
  • Informed me that I was only slated to receive one cable box.
  • Intimated that I would be charged for the box's delivery no matter what.
  • Informed me that I could not cancel my TV service over the chat system - citing the insecurity of their chat system as the primary reason!
The most fucked up part is, I knew it was a trap when I agreed to the upgrade.  I guess I just wanted to see how much of a trap it was.

After most ISP encounters, people often find themselves in need of a cathartic retelling.  Comcast's depth of unscrupulousness and incompetence is known by all, but it still fairly boggles the mind to see it.  It's sort of like watching a video of a guy doing a backflip, or a girl getting hit in the head with a shovel - you aren't surprised that it happened, but it does get your sympathetic nervous system pumping.

For the brave, the bored, and the masochistic, the full text of our chat follows.  It took probably an hour from start to finish.  Writing this article took less time than my ineffectual chat.  Enjoy.

Sunday, April 26, 2015

one box to stream them all: installing Kodi on the Fire TV

So I went evil and got a Fire TV.  True, Amazon tries hard to sell you crap at every turn, but the interface is nice, the remote is a dream, and the price is right.  Access to Amazon Prime streaming and Netflix?  Check.  The only thing it's missing is the ability to access local network content.

It turns out it's pretty easy to get Kodi (née XBMC) up and running.

  1. Download adbfire for your platform.  I used WattOS (a Debian-derived Openbox-based desktop Linux distro) and it worked just fine - all dependencies were already installed.
  2. Download the Kodi .apk for Android (ARM version).
  3. Enable "ADB debugging" and "Apps from unknown sources" on the Fire TV.  Get to them in Settings -> System -> Developer Options.
  4. Get the Fire TV's IP from Settings -> System -> About -> Network.
  5. Run adbFire:
    yo@mama $ ./adbFire &

  6. Click the "Device Setup" button at the top, and enter any description and your Fire TV's IP address, and click Save.  You don't need to change any of the other fields.
  7. Click the Connect button.  There will be no progress indicator, but after a few seconds you should see "Device connected" appear in the lower right corner of the screen.
  8. Click the "Install APK" button, and navigate to the Kodi APK file you downloaded in step 2.  The installation process will take a minute or two.
  9. Normally, due to Amazon's evil, you'd have to go deep into the settings to the "Manage all installed applications" menu in order to launch Kodi.  Fortunately, there is a workaround - we hijack an app called "Ikono TV" by installing it and stealing its entry in the main menu.  So, search for and install "Ikono TV".
  10. In adbfire, click the "Llama options" button.
  11. Make the box look like the screenshot above.  You'll need to change several settings:
    • Check "Install Llama"
    • Tick "Link media center to program"
    • Tick "Replace program icon"
    If you want the Fire TV to launch on startup, then make those choices accordingly.  Click ok when done, and wait the few seconds for the process to complete.  The message about importing via USB is nothing to worry about, you won't need to connect anything to your Fire TV.
  12. On the Fire, go to Settings -> System -> Manage Installed Applications and launch Llama.  Click ok to get through all the first time startup messages.
  13. Navigate to the confused llama icon in the lower left and click it.  Scroll to "Import/Export Data", and choose "Import from USB storage".  I know there are no USB attachments, but this works anyway.  (I'm guessing that adbFire created some sort of virtual USB drive, loaded with the needed llama settings, but I'm not really sure.)  For me, after a few seconds, Llama exited with no message.
  14. Go back to the main menu and launch Ikono TV.  This should stutter for a half second and then actually launch Kodi!  Once you exit, the Ikono TV name and icon should be replaced with those for Kodi!
And you're done!  Don't you sort of feel bad for the Ikono TV developers?  Their app has been hijacked and parasitized by us rogue local content lovers.  Oh well, thanks Ikono TV guys!  And a big thanks to the devs who keep Kodi awesome as well!

Monday, January 19, 2015

make the google domains dyndns work with pfsense

So, it's actually pretty easy.  You already have pfsense set up, and a domain running on Google domains.  Open up the Google domains page and the pfsense page so all the information is readily available.

In the domains list in Google domains, click the DNS icon.  Scroll down to synthetic records, and from the drop down menu choose "Dynamic DNS."

Assuming you're running a single site, you'll want to add entries for the bare URL (mysite.com) and the www subdomain (www.mysite.com).  The bare URL entry should get an "@" sign in the "subdomain" field, and the other should get a "www".

Click the little sideways caret next to the domain name on each entry, and then click the "view credentials" link.  This should reveal the credentials for this subdomain - the username and password.

In pfsense, go to the Services tab and choose Dynamic DNS.  Click the little plus icon to get a new DynDNS entry.

Under "Service type", choose "Custom".  Set up the interfaces according to your network configuration.  Then copy and paste the username and password from the Google DynDNS page into the corresponding pfsense fields.

In the Update URL field, paste this: "https://domains.google.com/nic/update?hostname=".  After that, type the name of the domain you are configuring DynDNS for - leave off the "@" for the bare URL version, but be sure to add the "www" or whatever other subdomain.

In the "Result match" field, copy and paste this: "good %IP%|nochg %IP%".  This says that the IP update succeeded if Google's servers responded indicating that there was no change to your client's IP, or if they indicated the change was successful.
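That match pattern is easy to sanity-check.  The snippet below applies the same good/nochg logic to a canned response string (the IP is made up):

```shell
# a canned response from the update endpoint (made-up IP)
resp="nochg 203.0.113.7"

# same logic as the pfsense "Result match" field: success on "good" or "nochg"
case "$resp" in
  good\ *|nochg\ *) result="update ok" ;;
  *)                result="update failed: $resp" ;;
esac
echo "$result"
```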

For a simple webserver setup, add two entries: one for the base domain ("@" in Google DynDNS) and another for the www prefix.  They both use the exact same setup in pfsense aside from the small change to the hostname field, but each will have its own set of credentials assigned by Google.

A previous version of this post erroneously instructed users to leave off the subdomain in the hostname field in pfsense.  Updated 1/31/2015 to indicate that the subdomain should be left off for the bare URL, but preserved for "www" and any other subdomain.