Mail Server Update

Well, I have been using the Mail-In-A-Box mail server solution for a few months now, and I am actually quite enjoying it. I am nearly 100% migrated over. A few old messages haven't been pulled over yet, and Google Drive is still my primary for those things. But by year's end, I should be 100% self-sufficient, and essentially Google-free.

Although I do not hate or have any ill will towards Google, I am finding that for pure cross-platform compatibility at 100% parity, I need to host using standards. Doing so will allow for greater flexibility for very little more effort than what I am expending currently (which sometimes feels like swimming up a waterfall…).

So here is where I am sitting:

  • For mail, MIAB is doing the job. MIAB is Postfix + Dovecot + DNS + Let's Encrypt automation + Roundcube. There is a webmail solution, plus I can use any IMAP client to tap in and access or manipulate the messages. Refer to the next part for what is actually doing my webmail. The MIAB web GUI also includes a Munin monitoring instance, which seems quite nice and useful, although I think I may integrate things into my Cacti instance that is hosted elsewhere.
  • For contacts and calendars, MIAB ships with ownCloud. However, I have a separate installation of Nextcloud elsewhere, which has all of that plus plugin management enabled (MIAB's does not), and I have it doing word processing, spreadsheets, notes, calendars, contacts, files and webmail. Quite nice. And yes, I am actually syncing to both ownCloud and Nextcloud quite nicely.
  • On the phone side of things, I had some fun getting things taken care of, but it's done and will sync now. The issue I ran into was Google not playing well with others, and not being aware that I needed a separate contacts list for syncing. Copy Contacts fixed that, and now I just default to using that second addressbook for all contacts. One fringe benefit is better web manipulation via *Cloud's interfaces. DavDroid + Tasks + Copy Contacts is the toolchain for syncing, along with either the GMail or Aqua apps for doing the actual emails. Going forward, I will be hosting my own contacts and calendars, which can be shared out or duplicated to Google if I want.
  • On the PC, GNOME, Evolution and Chromium are taking care of everything. I have a Google account added to tap into Google's stuff, and Evolution can tap into both Google and Nextcloud that way. For the web side, any browser will suffice.

So what can I say about this — it's not free, as the Android side was ~$12 in apps, but that's not too bad. And I will of course require at least one VPS to handle all of this, and I am actually doing it with much more for redundancy's and sanity's sake. But running my own is a benefit of being away from companies that, although they may care about my interests today, may not tomorrow. With a business-class line, I could actually host at home for slightly more if push comes to shove (clone the VPS VMs and host them on a local KVM instance).

I am still working on getting things ported to EL7. This is going to be a very nasty undertaking, and I am drafting up a proposal for the MIAB team to add a few things that, even without official support, will make doing things a lot saner and more future-proof. But that is at least 6 months away at this time.

OnePlus One woes and resolution

About a year ago I ran into some corruption issues with my OnePlus One, and that issue caused ADB to have a fit, as did OTA updates, USB mode, and fastboot.  Tech support wanted me to use Windows to fix it via a remote desktop session.  Needless to say, I was not about to let them goad me into going Windows.  Well, this corruption was absolutely annoying, and I was short on time and willpower to devote much of either to a literal spare device.

Alas, I did.  Sorta.  Since I just got this ironed out, I figured I would share the ever so pleasant experience of getting this nightmare sorted out.  I installed Windows 8.1 onto an old disk and got it up and running long enough to get the thing fixed.  If you find yourself in the same boat, below is what you want to do.

FIXING THE BOOTLOADER

  1. Download and install the Samsung drivers for Windows.  Reboot afterwards.  And yes, do reboot, it's necessary.
  2. Download and install the 15 Seconds ADB Installer. You do not need to install its drivers, just ADB and fastboot. Reboot just in case it is necessary. I did. Besides, we all know Windows needs to be rebooted every 5 minutes or so 😀
  3. Plug in your phone.  Windows should ask about drivers; if not, no biggie.  If you do get the dialog, you can cancel it.  Go into Device Manager and find the ADB heading -> the ADB device -> right-click to Update Driver Software -> choose Browse my computer -> choose Let Me Pick -> Show All Devices -> Have Disk -> point to where the Samsung drivers were installed.

That should get things working again.  Once you can copy stuff to the phone, you can reinstall your recovery and OS via fastboot and the recovery respectively.

 

FLASHING OXYGENOS

Download links are in the forums.  You need to grab the build and then the Windows package and instructions.  The instructions will add a few details to what I have laid out, but it is the same process.

https://forums.oneplus.net/threads/oxygenos-2-1-4-for-the-oneplus-one.425544/

In my case I went from a broken CyanogenMod 12.1 to OxygenOS 2.1.4.  Once adb/fastboot was fixed, the process was (see the command sketch after this list):

  1. From an Administrator-mode command prompt, run “fastboot oem unlock”.
  2. Run the patch companion download's batch file to flash the recovery: AllInOne.bat, and choose option 2.
  3. Reboot the phone and then copy over the Bacon OxygenOS 2.1.4 zip to the phone if it is not already there.
  4. Reboot the phone to recovery and do a cache and data wipe.
  5. Install the Bacon zip.
  6. After you reboot into Oxygen and go through the new device setup song and dance, enable developer mode (Settings -> About Phone -> tap Build Number a bunch of times until dev mode unlocks).
  7. Turn on ADB support (Settings -> Developer Options -> USB Debugging).
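
For reference, here is roughly what those steps boil down to as commands. This is a hedged sketch rather than the exact process; the recovery image and zip file names below are illustrative, and on Windows AllInOne.bat wraps the fastboot flashing for you.

fastboot oem unlock                        # step 1: unlock the bootloader
fastboot flash recovery recovery.img       # step 2: roughly what AllInOne.bat option 2 does
fastboot reboot                            # step 3: boot back into Android
adb push oxygenos-2.1.4-bacon.zip /sdcard/ # copy the ROM zip over if it is not already there
adb reboot recovery                        # step 4: wipe cache/data and install the zip from recovery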

 

FIXING ADB ON THE LINUX SYSTEM

Now replug the phone into your Linux rig and you should see two partitions pop up, including the drivers partition.  Run the ADB fix for Linux shell script:

Pre script, as you can see the only device is my other phone:

[root@big-red-wireless ~]# adb devices -l

List of devices attached 
5VT7N15A25000587 device usb:1-2 product:angler model:Nexus_6P device:angler

 

Post script, and it sees but can’t do anything with the phone yet:

[root@big-red-wireless ~]# '/run/media/andrew/OnePlus Drivers/adb_config_Linux_OSX.sh' 
android home is exist!
config adb ...
OK! You can use adb now!
[root@big-red-wireless ~]# adb devices -l
* daemon not running. starting it now on port 5037 *
* daemon started successfully *
List of devices attached 
d6dea608 offline usb:1-1
5VT7N15A25000587 device usb:1-2 product:angler model:Nexus_6P device:angler

Accepted the connection on phone:

[root@big-red-wireless ~]# adb devices -l
List of devices attached 
d6dea608 device usb:1-1 product:bacon model:A0001 device:A0001
5VT7N15A25000587 device usb:1-2 product:angler model:Nexus_6P device:angler

[root@big-red-wireless ~]#

Done. Working great now. It would have been nice to know what I needed to do to get this functional on Linux from jump street, as it would have saved me a good 4 hours of screwing around with this. I do firmly believe that there was an issue somewhere with an update to the recovery image. The image I have now is much nicer than the one from the factory. And there was a setting ticked in CM that allowed for recovery updates. Just that nothing worked. My first stab was to download the latest CM13 for bacon and use the recovery to install it. Nope, didn't work. That's how I eventually stumbled across ADB and fastboot being borked here. For safe keeping I did back up my recovery files partition, so if things go wonky again I am not forced to deal with this level of fun again. And by fun, I mean 6 hours of pure pain of having to touch Windows and screw around with their miserable driver model. And 8.1 sucks golf balls through a garden hose.

 

EL7 mail guide situation and some general site news.

Long story short, I am short on time, and shorter on usable information.  What I kept running into were database-related snags.  Currently I have a few different out-of-the-box solutions running, and the only one I care for is Mail In A Box.  Unfortunately it sits on Ubuntu 14.04.  Not my ideal distro platform, but the team has done a magnificent job across the board.  This is what I would like to have for EL or SLE.  My current implementations are bare metal, but I think I may try out these solutions as Docker containers.  For now, if you want a very nice webmail + multiple domains + sanity checks, I would recommend going with MIAB.

As far as the site goes, in recent months I have added a Plex Media Server repository, am almost done migrating my private GitLab instance to a public server, and am pretty much done with a massive site cleanup.

With regards to the cleanup, you may have noticed that my “notes” pages are gone. I moved them into draft status, and when I get done cleaning them up, I will make each one a guide where appropriate. I also noticed that on some of my pages, copy-paste wouldn't reproduce the text properly. That should be fixed now.

Guides currently on deck:

  • oVirt/KVM
  • Raspberry Pi related topics
  • Mail In A Box bare metal / Docker installation guide
  • Using Linux to mix audio for your band (EL is a little lacking here, so this will be more of a generic guide, favoring either openSUSE or Fedora)

A few updates on things

I have been working rather feverishly over the last few weeks on an EL7 script that will take one from a fresh EL7 install to a working mail server backed by a database, so one can do virtual domains, virtual users, and webmail.  Once I get this up and running, I will also be able to publish a few other things that require a local mail server and that should be of interest (surprises inbound).

Once that is up and rocking, I will be doing the same for a WordPress install (actually not hard, but lower on my priority list).

Then the same for getting an LDAP solution implemented with SSO capabilities.  This one is actually not that easy, but it should keep me busy and productive in the shell scripting department.

Eventually each of these will have a GUI component (a good excuse to get back into PyGTK) so one will only need to shovel it a few answers to some questions (shell account, domain info, etc.). If there are any particular requests for things related to these topics, feel free to contact me one way or another. Otherwise I will just keep chugging along with the battle plans that present themselves based off my needs and several clients' needs. These are primarily solutions to my own needs, but a few people have asked for automation tools, and I think these tasks lend themselves to that quite nicely, as there are a rather finite number of variables for each that can sanely be foreseen and dealt with.

As for the former's roadmap, I think by June I should have something rather well tested using local VMs and Vagrant scripts. The biggest hurdle for me is doing the regex calls to replace lines of text, and for insertion. It has indeed been that long since I last did this sort of thing, but thankfully this is not rocket science and can be handled quite well once I get more time to sit, tinker, refine, and finally test.
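
To make that concrete, here is a minimal sketch of the kind of sed calls I mean. The file path and the Postfix-style keys are purely illustrative; they are not taken from the actual script.

#!/bin/bash
# Illustrative only: replacing and inserting config lines with sed.
CONF=/etc/postfix/main.cf
DOMAIN=example.com

# Replace an existing "myhostname = ..." line, whatever its current value.
sed -i "s/^myhostname = .*/myhostname = mail.${DOMAIN}/" "$CONF"

# Insert a new line directly after a matching line.
sed -i "/^mydestination/a virtual_alias_domains = ${DOMAIN}" "$CONF"

# Append a line only if it is not already present.
grep -q '^virtual_transport' "$CONF" || echo 'virtual_transport = lmtp:unix:private/dovecot-lmtp' >> "$CONF"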

Plex Media Server playback issue fix

One of the issues that cropped up rather recently was that a bunch of recent rips just stopped working completely. I couldn't stream them at all, whether through the HTML5 web GUI, my Roku, the Android app, via the LAN, or the WAN. Nothing. I would just get an error that the file could not be played back. They used to work, so why not now?

So naturally, after swearing at my server for a while and fidgeting fruitlessly, I let it sit for a bit, with hopes it was a bug and an update would fix it.  Nope.  Google yielded a rather fast result that I normally would have dismissed, but since I had tried almost everything else I could think of, I tried it.

The fix was simple — create a new transcoding temp dir. Yeah, it was that stupid simple. Settings -> Server -> Transcoder -> Transcoder temporary directory.

[Screenshot: the Transcoder temporary directory setting in the Plex server settings]
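
For what it's worth, if you want the new temp directory on a specific disk, something like the following works. The path is just an example, and I am assuming the plexmediaserver package runs as the plex user:

# Example only: create a transcode temp dir that the Plex service user can write to.
mkdir -p /var/tmp/plex-transcode
chown plex:plex /var/tmp/plex-transcode
# Then point Settings -> Server -> Transcoder -> "Transcoder temporary directory" at it.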

New EL7 Repo : Plex Media Server

I am now hosting the public download versions of Plex Media Server in a separate repo. I haven't decided if I will host the Plex Pass files publicly or not.

https://schotty.com/yum/plex_el7/

https://schotty.com/yum/plex_el7/repoview/

TO INSTALL

As root:

yum install https://schotty.com/yum/plex_el7/schotty-plex-el7-release-1-1.noarch.rpm
yum makecache
yum install plexmediaserver

systemctl enable plexmediaserver
systemctl start plexmediaserver

firewall-cmd --permanent --add-port=32400/tcp
firewall-cmd --reload

# The name is at your discretion, but you will need a dedicated group
# that Plex is a member of.
# Ensure that your media folder(s) have group ownership of this media
# group.
groupadd media
usermod -aG media plex
usermod -aG media YOURPRIMARYUSER
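
As a concrete example of that last comment, giving the media group read access to an existing library folder could look like this (the /srv/media path is just an example):

# Example only: let the media group read an existing library at /srv/media.
chgrp -R media /srv/media
chmod -R g+rX /srv/media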


Then in a browser navigate to your server in web browser:

http://IP_OR_DOMAIN_NAME:32400/web/index.html

Then proceed to configure Plex as you see fit.

 

Updated page on installing The Elder Scrolls Online on Linux

I have updated the installation instructions for getting TESO installed on Linux.  Specifically, the instructions for Crossover needed some changes since Crossover 15.  I have also updated the screenshots associated with Crossover 15 since the UI got such an overhaul.  Hopefully this is helpful for anyone doing a fresh install or fixing a broken bottle.

Installing The Elder Scrolls Online on Linux

Upgraded to SSL

I have upgraded the site to use SSL.  I made an attempt to change all internal links here to use https rather than http, so forgive me if one or two got missed.

But while I had this opportunity, I also overhauled most of the pages here, replaced stale information with correct information, and condensed some others.  There are a few sections that are so old I am not sure where to begin, and will put that cleanup off for another day.

Anything I missed, please make me aware.  I will get it corrected ASAP.

New Package: tuxboot

Rolled up tuxboot RPMs for BOTH (yes BOTH) EL7 and Fedora 22.

tuxboot is a tool for making bootable disks (USB primarily) of various disk utilities such as Clonezilla and GParted.

I have no FC22 system to test the RPM out on, so let me know if there are any issues. The only difference in the spec file is that the EL7 package requires the EPEL repo to be installed (which it should be anyway if you are using anything in my repo to begin with). This would of course be pointless on Fedora, as there is no EPEL for Fedora 😀 Either way, mock was very happy with it. There is one bug of note — the Linux version has the wrong version tag within the app. This will be fixed in a future update from upstream.

https://schotty.com/yum/el/7/repoview/tuxboot.html

https://schotty.com/yum/fedora/22/repoview/tuxboot.html

RHEL 7 with OpenVPN in NetworkManager

OK, put simply, there are issues right out of the gate due to SELinux when getting NetworkManager to connect to your VPN properly.  If you set up your connection and certificates as follows, you will have no issues whatsoever connecting as any user.

1) Copy all your certificate files into ~/.cert

2) Check your SELinux context and validate it is appropriate:

unconfined_u:object_r:home_cert_t:s0

You need to have something like this:

[andrew@big-red-wireless .cert]$ pwd
/home/andrew/.cert
[andrew@big-red-wireless .cert]$ ls -Z
-rw-r--r--. andrew andrew unconfined_u:object_r:home_cert_t:s0 andrew.crt
-rw-------. andrew andrew unconfined_u:object_r:home_cert_t:s0 andrew.key
-rw-r--r--. andrew andrew unconfined_u:object_r:home_cert_t:s0 ca.crt
[andrew@big-red-wireless .cert]$

3) If you need to reset the contexts, issue the following command as root:

restorecon -R -v /home/$USERNAME/.cert

4) Create a NetworkManager VPN entry with your cert files from the ~/.cert folder.

5) Connect!
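
As a side note (not part of the original steps), once the entry exists you can also bring the VPN up from the command line; the connection name here is just an example, use whatever you named yours in NetworkManager:

# Bring up the VPN connection named "work-vpn" via NetworkManager's CLI.
nmcli connection up work-vpn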

It took me a few minutes to understand why the connection was barfing out. Once I noticed some SELinux alerts, it dawned on me — I never set the contexts. A quick Google also showed me that there is a convenient location to dump all certificates into. Double win!

Learnt something new: cu

Inside the uucp package is a nifty tool that I just picked up on called cu.  It is for serial connection management, and it works beautifully and simply.  I had a few old machines I needed to tap into via serial, and an admin friend I was with recently mentioned her love of this command.  So, having a selectively great memory, I gave it a stab.

1) First get uucp installed.  I see that it comes from EPEL on RHEL7.

[andrew@big-red-wireless Desktop]$ sudo yum info uucp
Loaded plugins: langpacks, nvidia, product-id, subscription-manager
Installed Packages
Name : uucp
Arch : x86_64
Version : 1.07
Release : 41.el7
Size : 2.7 M
Repo : installed
From repo : epel
Summary : A set of utilities for operations between systems
URL : http://www.airs.com/ian/uucp.html
License : GPLv2+
Description : The uucp command copies files between systems. Uucp is primarily
: used by remote machines downloading and uploading email and news
: files to local machines.

[andrew@big-red-wireless Desktop]$

2) Second, determine your connection parameters. I have a USB-to-serial hodgepodge of cabling and adapters that I tote around and stash in useful places. So in my case it's /dev/ttyUSB0 for the device, and of course each connection has its flow control and bitrate and parity, etc. Know all of that. In my case the devices preferred to speak 19200 baud, 8N1, flow control off (the manuals say so). But being hasty, I just slapped into the terminal:

[andrew@big-red-wireless ~]# sudo cu -l /dev/ttyUSB0 -s 19200

And things just worked.  For a full reference guide on the various settings and such:

http://linux.die.net/man/1/cu

http://www.jann.cc/2013/02/10/the_cu_command.html
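
One extra tip that is not in the notes above: cu uses tilde escape sequences, so you end a session by typing ~. at the start of a line.

sudo cu -l /dev/ttyUSB0 -s 19200
# ... do your work, then on a fresh line type:
~.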

RHEL 7 + Intel 7260 AC Card

There are known issues with older kernels that cause major problems with the 7260 with regards to power management and the 5GHz spectrum.  After dealing with a dying i7 IMC issue, I could finally spend time troubleshooting my issues.

For starters, ensure your router/AP isn't crap. I had that issue regardless of devices and their respective OSes. Once you are sure that there isn't anything dead, dying, or just plain flaky, there are a few things that can be done, and for one of them you have a choice as to how to deal with it. Supposedly kernels beyond 3.16/3.17 have this rectified. As EL7 is pegged to 3.10, until Red Hat backports the changes we have to deal with it ourselves.

There are a few lines we need to put into /etc/modprobe.d/iwlwifi.conf.

Here is what I went with, and I will detail what each line means:

$ cat /etc/modprobe.d/iwlwifi.conf

options iwlmvm power_scheme=1

options iwlwifi bt_coex_active=N swcrypto=1 11n_disable=8

The power_scheme option sets the power to full bore at all times.  The default is 2 on most installations, which is the adaptive mode.  This can cause issues with the device going to sleep at rather inopportune moments.  Although not necessarily your issue, it is something to keep note of.

The bt_coex_active option is for coexistence of Bluetooth and wireless.  They operate on the same frequency range and can cause issues with each other.

The swcrypto option forces the cryptography off of the card and onto your system CPU, which helps if high workloads are giving the card's hardware crypto trouble.

The 11n_disable option is where the choice I referred to comes in.  Setting it to 1 will force the card into G-only speeds.  This does indeed work on AC networks just fine, but will limit your connection to 54Mb/s max.  Setting it to 8 does not impose that restriction; per the module info below, it enables TX aggregation instead.  Here is the information from the module on this parameter:

$ modinfo iwlwifi | grep 11n_disable
parm: 11n_disable:disable 11n functionality, bitmap: 1: full, 2: disable agg TX, 4: disable agg RX, 8 enable agg TX (uint)

Once you have made a choice as to which of these options you are going to implement, you can run the following commands (as root) to unload and reload the modules, which will pick up the new parameters on reload.

$ sudo rmmod iwlmvm

$ sudo rmmod iwlwifi

$ sudo modprobe iwlwifi
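
As a quick sanity check (not part of the original write-up), you can read the live values back out of sysfs after the reload, assuming the parameters are exported there as readable:

cat /sys/module/iwlwifi/parameters/11n_disable
cat /sys/module/iwlwifi/parameters/swcrypto
cat /sys/module/iwlmvm/parameters/power_scheme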

Hopefully this information is as useful to you as it was to me.  This fully rectified my constant disconnections while on AC networks with this card.

Docker & desktop applications

I decided to play around a bit with running applications that aren't packaged up for RHEL7 via Docker. I mean, why not? That's one of the perks of containers, so it should be doable without much fuss, right?

Sorta.

I did manage to do so with little difficulty, but there is a bit of work that goes into it first, and you will need to work with Dockerfiles, SELinux, and sudo.  Here we go.

I found a great tutorial on this and pretty much copied the work verbatim.  I know that not all applications are going to need everything in the template, but it's just a container, so to hell with it.  If it is mission critical, one should be a bit more discriminating anyhow.
http://fabiorehm.com/blog/2014/09/11/running-gui-apps-with-docker/

 

Now, what I use as my stub for an Ubuntu 14.04 LTS base is the following:

FROM ubuntu:14.04

RUN apt-get update && apt-get install -y firefox

# Replace 1000 with your user / group id
RUN export uid=1000 gid=1000 && \
    mkdir -p /home/firefox && \
    echo "firefox:x:${uid}:${gid}:firefox,,,:/home/firefox:/bin/bash" >> /etc/passwd && \
    echo "firefox:x:${uid}:" >> /etc/group && \
    echo "firefox ALL=(ALL) NOPASSWD: ALL" > /etc/sudoers.d/firefox && \
    chmod 0440 /etc/sudoers.d/firefox && \
    chown ${uid}:${gid} -R /home/firefox

USER firefox
ENV HOME /home/firefox
CMD /usr/bin/firefox

 

As you can see, there is not a lot to it. But let's go over a few things that should stand out a bit. First off, the variables should jump out at you: for things to work, we need to set up a mapping between your actual system UID and GID and the fake ones we are creating in the container. Second, I need to work on making this a bit more portable. I will eventually play around with a variable for the app name, and propagate that through the entire Dockerfile as we are doing with the UID and GID variables. Lastly, a general clean-up of everything there so it is a bit more logical as to what is going on, and so parts can be commented out by those that just don't need them.

That said, usage! I am going to presume you can install Docker (yum install -y docker). The way Docker builds images from Dockerfiles is a bit odd for those not used to it, but you will want to make a directory structure that includes a folder for each image's Dockerfile. So make a ~/Dockerfiles/firefox/ and put the above into it in a file called … (wait for it) … Dockerfile. Yes, with a capital D. The result using the above demonstration info would be ~/Dockerfiles/firefox/Dockerfile.

Next, via a terminal, cd to that directory and enter:

sudo docker build -t firefox .

Don't forget the trailing space and period.  The image should now get built.  Once completed, you can enter at the prompt:

sudo docker images

That should show a firefox image.  Now we can invoke this firefox instance as we need via:

sudo docker run -ti --rm -e DISPLAY=$DISPLAY -v /tmp/.X11-unix:/tmp/.X11-unix firefox

If you are like me and love your SELinux, it will have a fit.  Open up SETroubleshoot and fix it as you deem best for your tastes.  Rerunning it should then work fine.

Here is a quick screen recording demonstrating what I went over.  I had already built the image, so that part is of course not shown.  But I do show the Dockerfile I made, the launch script, and running Firefox.
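
For what it's worth, a launch script for this is really just a thin wrapper around the run command above. Here is a minimal sketch (not necessarily my exact script), assuming the image was built with -t firefox:

#!/bin/bash
# Wrapper around the docker run invocation shown above.
exec sudo docker run -ti --rm \
    -e DISPLAY="$DISPLAY" \
    -v /tmp/.X11-unix:/tmp/.X11-unix \
    firefox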

 

Video Card Benchmark Site

A while back I found a nifty site for figuring out the relative performance of all the video cards out on the market. Posting it here in lieu of my poor memory, and on the assumption that I am not the only one interested in this information 😀

http://www.videocardbenchmark.net/