
Rocket.chat on OpenShift 3

I’ve been on a tear recently getting various applications to run on the next-generation OpenShift Online preview. Yesterday I did some work with Node.JS and Gulp, and today I decided to give Rocket.chat a try, since we’re possibly going to use it as part of some cool demos.

Rocket.chat is self-described as “The Ultimate Open Source Web Chat Platform”. It’s 100% open source, and is built using Node.JS and Meteor, among other technologies. While the folks at Rocket.chat make a Docker image available, I generally prefer not to use images like that: they’re not usually built using best practices, they often require a lot of futzing to make work, and they often use operating systems that aren’t Red Hat-friendly. Rocket.chat also provides a downloadable tarball release, which, to the best of my understanding, is the “output” of the Meteor build system. Looking at the installation instructions for Rocket.Chat on CentOS, it appeared that you could just run something like the following to get the requirements installed:

cd Rocket.Chat/programs/server
npm install

Then you simply export your environment variables and execute Node.JS with the application:

export PORT=3000
export ROOT_URL=http://your-host-name.com-as-accessed-from-internet:3000/
export MONGO_URL=mongodb://localhost:27017/rocketchat
node main.js

I’ve seen all of this stuff before. The requirements installation looks a lot like the normal assemble process, with a slight change. I figured I would give our good friend source-to-image a try again. Yesterday’s article on Node.JS and Gulp talked about customized assemble scripts. Please visit that article for a quick refresher, and check out the source-to-image documentation, too.

Here’s how I walked through getting this app to run on OpenShift.

Make it Build

Since I was going to use source-to-image for this application, I needed a Git repository to build against. The tarball from Rocket.chat contains the release, so I simply put that into a GitHub repository: https://github.com/thoraxe/rocket-built

Since the Node.JS package installation required being in a different folder, I knew I had to customize the assemble script. You can find the whole assemble script here, but the relevant changes are just:

cd programs/server
npm install
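
For context, here’s a minimal sketch of what the complete custom assemble script might look like. This is only a sketch, not the actual script linked above; among other things, the real one presumably keeps the stock script’s proxy handling and permission fixes:

#!/bin/bash
set -e

# install the application source, as the stock assemble does
cp -Rf /tmp/src/. ./

# Rocket.chat's server dependencies live in a subfolder,
# so install from there instead of the repository root
cd programs/server
npm install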

I was able to fire up a build using the included OpenShift Node.JS 0.10 builder, and everything worked so far.
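
For reference, a build like this can be kicked off from the CLI with something like the following (a sketch; the exact builder image stream and tag available in your environment may differ):

oc new-app nodejs:0.10~https://github.com/thoraxe/rocket-built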

Make it Run

Not so fast. As I indicated in the introduction, Rocket.chat wants to be instantiated by executing Node.JS against the application file. However, the default run script for Node.JS uses this:

# Runs the nodejs application server. If the container is run in development mode,
# hot deploy and debugging are enabled.
run_node() {
  echo -e "Environment: \n\tDEV_MODE=${DEV_MODE}\n\tNODE_ENV=${NODE_ENV}\n\tDEBUG_PORT=${DEBUG_PORT}"
  if [ "$DEV_MODE" == true ]; then
    echo "Launching via nodemon..."
    exec nodemon --debug="$DEBUG_PORT"
  else
    echo "Launching via npm..."
    exec npm run -d $NPM_RUN
  fi
}

Just like how we overrode the assemble script by placing one in our repo, we can do the same with the run script, too. Here’s the entire script, but this is the relevant change to the function:

run_node() {
  echo -e "Environment: \n\tDEV_MODE=${DEV_MODE}\n\tNODE_ENV=${NODE_ENV}\n\tDEBUG_PORT=${DEBUG_PORT}"
  if [ "$DEV_MODE" == true ]; then
    echo "Launching via nodemon..."
    exec nodemon --debug="$DEBUG_PORT"
  else
    echo "Launching..."
    exec node main.js
  fi
}

This probably won’t work in the debug case, but I wasn’t trying to do that right now. We can fix that later!

Still Not Quite…

With the change to the run script, we could now get the application to run… sort of. If you look back at the original instructions, you’ll see that Rocket.chat expects certain environment variables to be set. If they’re not, Rocket.chat will fail to start. Fortunately, OpenShift makes it easy to manage environment variables that get automatically injected into a container. Most of the variables are actually related to the database. I launched a MongoDB instance using the OpenShift UI, and then looked at the user, password, and other variables that were auto-generated for me.

Then, in the OpenShift UI, I was able to edit the Rocket.chat deployment and add the environment variables I needed. Yeah, I had to use a little YAML-fu to get things right. The other option would have been to delete all of the Rocket.chat stuff and then re-create the build, specifying the desired environment variables from the beginning. The OpenShift UI team is constantly improving the user experience, and I fully expect better control over environment variables from the UI in an upcoming release.
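
The same thing can also be done from the CLI. Here’s a sketch using oc (the deployment name and all of the values are placeholders for whatever your MongoDB instance generated):

oc env dc/rocket-built \
  PORT=3000 \
  ROOT_URL=http://your-app-url.example.com/ \
  MONGO_URL=mongodb://user:password@mongodb:27017/rocketchat

Changing the environment on the deployment configuration triggers a new deployment with those variables injected.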

Ready, Set, Chat!

Remember that you will need to provide the user and password in the environment variable that contains the database connection string. Once you’ve got all that set, your Rocket.chat instance should be up and running and usable!


Node.JS, Gulp and OpenShift 3 – Custom assemble script FTW

I’m heading to India for workshops with some of Red Hat’s big SI partners, and one of them requested some use case information around Node.JS and Gulp on OpenShift 3. Since I had never worked with any of these technologies, I had to do some research.

Gulp is kinda-sorta a build… uh… system… for Node.JS. It supports a number of plugins and other things that can be used during the build phase to produce your Node application. Seems simple enough. However, OpenShift’s source-to-image process for Node doesn’t know about Gulp out of the box. So, a little bit of customization is required. And by “a little bit” I mean two lines. First, a refresher.

OpenShift 3 introduces the concept of source-to-image. Source-to-image is the process that OpenShift uses to combine your code with an existing Docker image that already has a runtime installed. Red Hat calls this runtime image a “builder”. I’m using one of the Node.JS images from Red Hat’s registry:

rhscl/nodejs-4-rhel7
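
With source-to-image, kicking off a build is essentially a one-liner. A sketch (the repository is my fork mentioned below; depending on how your registries are configured, you may need the full registry path for the image):

oc new-app rhscl/nodejs-4-rhel7~https://github.com/thoraxe/nodebooks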

The build process involves a script called assemble. Here’s the Node.JS assemble script that comes with the Node.JS builder image:

#!/bin/bash
 
# Prevent running assemble in builders different than official STI image.
# The official nodejs:4.4-onbuild already run npm install and use different
# application folder.
[ -d "/usr/src/app" ] && exit 0
 
set -e
 
# FIXME: Linking of global modules is disabled for now as it causes npm failures
#        under RHEL7
# Global modules good to have
# npmgl=$(grep "^\s*[^#\s]" ../etc/npm_global_module_list | sort -u)
# Available global modules; only match top-level npm packages
#global_modules=$(npm ls -g 2> /dev/null | perl -ne 'print "$1\n" if /^\S+\s(\S+)\@[\d\.-]+/' | sort -u)
# List all modules in common
#module_list=$(/usr/bin/comm -12 <(echo "${global_modules}") <(echo "${npmgl}") | tr '\n' ' ')

# Link the modules
#npm link $module_list

echo "---> Installing application source"
cp -Rf /tmp/src/. ./
 
if [ ! -z $HTTP_PROXY ]; then
        echo "---> Setting npm http proxy to $HTTP_PROXY"
        npm config set proxy $HTTP_PROXY
fi
 
if [ ! -z $http_proxy ]; then
        echo "---> Setting npm http proxy to $http_proxy"
        npm config set proxy $http_proxy
fi
 
if [ ! -z $HTTPS_PROXY ]; then
        echo "---> Setting npm https proxy to $HTTPS_PROXY"
        npm config set https-proxy $HTTPS_PROXY
fi
 
if [ ! -z $https_proxy ]; then
        echo "---> Setting npm https proxy to $https_proxy"
        npm config set https-proxy $https_proxy
fi
 
echo "---> Building your Node application from source"
npm install -d
 
# Fix source directory permissions
fix-permissions ./

The above script is pretty simple. It basically just sets some config options and then runs:

npm install -d

In your source code repository, you can create a folder, .sti/bin, and insert your own assemble script in it. When the source-to-image process is executed, it will run your assemble script instead of the built-in one. As you can see, the assemble script is simply a Bash script in this case. It could be a script written in any locally executable language. Probably even in Node!
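
Wiring that up in a repository takes just a couple of commands (a sketch; the script body is whatever custom logic you need):

mkdir -p .sti/bin
vi .sti/bin/assemble        # write your custom script here
chmod +x .sti/bin/assemble
git add .sti/bin/assemble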

I am using a forked version of a Node+Gulp application written by Grant Shipley located here. Grant didn’t design the app to run on OpenShift, so I simply took it and added an assemble script. You can find my repository here: https://github.com/thoraxe/nodebooks

Since the assemble script is just a Bash script, we can actually run scripts from scripts. The built-in assemble script is located in the folder:

/usr/libexec/s2i/

Since Gulp itself is written in Node, we can launch our Gulp task with Node. Here’s the entirety of my customized assemble script:

#!/bin/bash
# vim: set ft=sh:
 
# original assemble
/usr/libexec/s2i/assemble
 
# gulp tasks
node node_modules/gulp/bin/gulp.js inject

That’s all there is to it! The script above calls the original assemble that’s built into the image, which causes the Node.JS dependencies to be installed. That ends up giving us Gulp. Then we use Node.JS to execute the locally-installed Gulp and run the task inject. Since the Gulp tasks are very specific to my application, this actually makes sense. Not only does Gulp allow us to treat configuration as code, but as we create additional Gulp tasks, we can simply update which tasks are run by changing the assemble script.

Neat, huh? If you want to try OpenShift, head on over to www.OpenShift.com


Disconnected Ruby demos on OpenShift 3

I’m headed to China soon, and the Great Firewall can present issues. S2I builds on OpenShift 3 generally require internet access (for example, pulling from Github or installing Ruby Gems), so I wanted to see what it would take to go fully disconnected. It’s actually surprisingly easy. For reference, my environment is the same environment as the OpenShift Training repository. I am using KVM and libvirt networking, and all three hosts are running on my laptop. My laptop’s effective IP address, as far as my KVM VMs are concerned, is 192.168.133.1.

Also, I have pre-pulled all of the required Docker images into my environment, as the training documentation suggests. This means that OpenShift won’t have to pull any builder or other images from the internet, so we can truly operate disconnected.

First, an HTTP-accessible git repository is currently required for using S2I with OpenShift 3. Doing a Google search for a simple git HTTP server revealed a post entitled, unsurprisingly, Simple Git HTTP server. In it, the instructions suggest using Ruby’s built-in HTTP server, WEBrick. Here’s what Elia says:

git update-server-info # this will prepare your repo to be served
ruby -run -ehttpd -- . -p 5000

One thing to note – you must run the update-server-info command after every commit in order for WEBrick to actually serve the latest commit. I figured this out the hard way. On Fedora, as a regular user, you usually want to use a high port for stuff, so I chose a really high port: 32768. I also had to open the firewall. Fedora, by default, uses firewalld. Your mileage may vary:

firewall-cmd --direct --add-rule ipv4 filter INPUT 0 -p tcp -m tcp --dport 32768 -m conntrack --ctstate NEW -j ACCEPT
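
Putting it all together, serving the repo on the chosen port looks something like this (run from the top of the working tree; the path is a placeholder):

cd /path/to/your/repo
git update-server-info
ruby -run -ehttpd -- . -p 32768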

With the firewall open, the git repo lives at http://192.168.133.1:32768/.git — not too shabby! Next, we need to make the Ruby Gems accessible via HTTP locally as well. Some Google-fu again brings us to something useful. In this case, Run Your Own Gem Server. While the article indicates that you can just run gem server, I found that this produced strange results and I filed bug #1303. I was using RVM in my environment due to some other project work, so, in the end, my gem server syntax looked like:

gem server --port 8808 --dir /home/thoraxe/.rvm/gems/ruby-2.1.2 --no-daemon --debug

Of course, this is going to serve gems from your computer, which means the gems have to actually be installed there in the first place. In the case of the Sinatra example, you would have to gem install sinatra --version 1.4.6, which would bring in the gem dependencies. Of course, this requires that you have ruby and rubygems, but you already have that, right?

Running the gem server also requires opening a firewall port:

firewall-cmd --direct --add-rule ipv4 filter INPUT 0 -p tcp -m tcp --dport 8808 -m conntrack --ctstate NEW -j ACCEPT

Note again that these firewall changes will not be permanent. You would need the --permanent option to persist these changes. You now have gems accessible at http://192.168.133.1:8808.

At this point you have:

  • A git http server running on port 32768
  • A gem server running on port 8808
  • Open firewall ports

In your OpenShift 3 environment you can now create a new application whose repository is the git HTTP server you set up with WEBrick. Again, that’s http://192.168.133.1:32768/.git. But if you just do that, your build will fail if you don’t have internet access. A standard-looking Gemfile probably defines https://rubygems.org as its source. For example, the Sinatra example that OpenShift provides:

source 'https://rubygems.org'
 
gem 'sinatra', '1.4.6'

Without internet access, we’ll never get to https://rubygems.org. So we can change the Gemfile’s source line to point at our new gem server, which lives at http://192.168.133.1:8808. Feel free to clone the example repository and try it yourself. Remember, once you change the Gemfile you will need to run git update-server-info and then (re)start your WEBrick server. Also, be sure you are doing this on the master branch, or you’ll need to point OpenShift at whatever branch you decided to use. This totally tripped me up a few times.
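
In script form, the whole edit-and-republish loop looks something like this (a sketch; the sed expression is illustrative):

sed -i 's|https://rubygems.org|http://192.168.133.1:8808|' Gemfile
git commit -am "use the local gem server"
git update-server-info    # or WEBrick will happily serve the old commit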

At this point, you should be able to do your build in OpenShift. In your build log you will see something like the following (ellipses indicate truncated lines):

...
I0703 19:44:33.264627       1 sti.go:123] Performing source build from http://192.168.133.1:32768/.git
...
I0703 19:44:34.010878       1 sti.go:388] ---> Running 'bundle install '
I0703 19:44:34.339680       1 sti.go:388] Fetching source index from http://192.168.133.1:8808/
I0703 19:44:35.019941       1 sti.go:388] Resolving dependencies...
I0703 19:44:35.281696       1 sti.go:388] Installing rack (1.6.4) 
I0703 19:44:35.437759       1 sti.go:388] Installing rack-protection (1.5.3) 
I0703 19:44:35.617280       1 sti.go:388] Installing tilt (2.0.1) 
I0703 19:44:35.841344       1 sti.go:388] Installing sinatra (1.4.6) 
I0703 19:44:35.841381       1 sti.go:388] Using bundler (1.3.5) 
I0703 19:44:35.841390       1 sti.go:388] Your bundle is complete!
I0703 19:44:35.841395       1 sti.go:388] It was installed into ./bundle
I0703 19:44:35.862289       1 sti.go:388] ---> Cleaning up unused ruby gems

And your application should work! Well, assuming all the rest of your OpenShift environment is set up correctly…


Transferring Windows 7 to a new computer

I purchased a new motherboard and CPU in an effort to upgrade both my system processing capability and my hard disk space. My original plan was just to clone an existing 1TB drive onto part of a 2x2TB RAID array, but I ran into many issues, even with disk cloning, and went through a lot of trouble trying to find a method that worked. So, after much pain, here’s what I found:

1) The current stable Redobackup is too old to detect the effective RAID device that my new mobo BIOS was creating. It wouldn’t let me select it as a target device.

2) The current stable Clonezilla also has issues. It detects an md device, but then fails to determine its size and refuses to actually write data to it.

3) The GParted LiveCD seemed to work best. I used GParted to copy partitions from the original drive to the new drive. I then used dd to copy the boot sector, just in case (sketched below).
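
For the dd step, something like the following copies just the 446 bytes of MBR boot code without touching the destination’s partition table (device names are hypothetical; verify yours before running anything):

# /dev/sdX = original drive, /dev/sdY = new drive -- verify first!
dd if=/dev/sdX of=/dev/sdY bs=446 count=1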

What I found is that Windows 7 gets *REALLY ANGRY* when you just pop an existing installation into a new mobo/CPU. It’s basically unbootable. I found an article that suggests running Sysprep with “generalize” and “out-of-box” as part of transferring to a new machine:

http://www.sevenforums.com/tutorials/135077-windows-7-installation-transfer-new-computer.html

Following these instructions and running Sysprep, I then found an issue with the Windows Media Player Network Sharing service – it needed to be stopped in order for Sysprep to work right.

https://social.technet.microsoft.com/Forums/windows/en-US/8f5002e1-95b4-47bf-b031-4b72b3eb388a/sysprep-fails?forum=w7itproinstall
(that link may not work without a login).

So, what I ended up doing thus far:

  1. Clone existing 1TB drive onto new, temporary 1TB drive.
  2. Boot old mobo system with cloned 1TB drive, run sysprep per instructions.
  3. Put sysprepped temporary 1TB drive into new mobo system, boot, let Windows do its first startup, finally install (most) drivers.

I found some issues with some of Asus‘ drivers, so I had to do these steps *AGAIN* in order to get to a working system.

My next step is to clone this now-updated 1TB drive onto a 2TB BIOS-based RAID array and hope for the best. I hope someone finds this information useful!


SQLBuddy RPM for RHEL, CentOS, Fedora, etc.

SQL Buddy has been a tool I’ve used a lot lately for simple MySQL administration of servers. It’s a much lighter alternative to phpMyAdmin and can be installed very quickly via a zip. But I wanted an RPM. RPM just makes things a lot easier installation-wise. I don’t have to wget/unzip/etc every single time I want to deploy it. So I built a quickie RPM.

Here’s a link to download the SQL Buddy RPM I’ve created. The source RPM is there, also, if you feel like looking at it and making suggestions. Eventually I’ll get around to submitting it to Fedora for a real package review, and perhaps get it into EPEL. But this was the critical first step for me.
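
Once the RPM exists, deployment collapses to a one-liner (the filename here is illustrative):

rpm -ivh sqlbuddy-*.noarch.rpm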


Sharing a Linux printer to Windows with Samba and Cups

I’ve recently been setting up a new Fedora 14 Linux machine at home, on hardware that used to run Windows as my primary desktop. I figured that I would keep the printer connected physically to this machine, even though it would no longer be the primary desktop. That meant I had to figure out how to get printing working first with Linux, and then printer sharing.

Getting printing working in Linux was fairly easy. In fact, the printer had already appeared in the list of printers without my really doing any work. I recalled from a previous attempt a while back that there are some neat tools specific to HP printing in Linux, and I found them again at the HPLIP project. A quick install of that software on Fedora and I at least had local printing up and running.

Sharing the printer via Samba and CUPS is where it got a little tricky. I ended up fighting quite a bit with the specific configuration of Samba, finding lots of conflicting tutorials whose information didn’t make sense. I tried a few things, and kept getting permissions errors.

I finally realized that, at least for printing, smb runs as the user “nobody”. I also noticed that there happened to be a Samba-specific folder in /var/spool. I put two and two together and figured that SELinux would be happiest with Samba talking to that folder. So here’s the setup I ultimately ended up with in smb.conf:

[global]
  workgroup = YOURWORKGROUP
  server string = Samba Server Version %v
  security = share
[printers]
  printing = cups
  printcap name = cups
  browseable = yes
  printable = yes
  public = yes
  create mode = 0700
  use client driver = yes
  path = /var/spool/samba
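
Note that /var/spool/samba needs to exist and be world-writable with the sticky bit set. A quick sketch (on Fedora the samba package may have already created the directory):

mkdir -p /var/spool/samba
chmod 1777 /var/spool/samba
service smb restart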

Adding the printer from Windows proved to be a snap:

  1. Browse to the computer name (\\yourlinuxmachinename)
  2. Double click the printer to connect to it
  3. Find the driver it needs
  4. Done!

Hopefully this will help some of you if you find yourselves fumbling around trying to make this sort of thing work.


Creating a Windows 7 bootable USB device from Linux

This really should not have been as hard as it was. I tried in vain to take the Windows 7 Ultimate 64-bit ISO that I had downloaded from MSDN and put it on a USB HDD that I had lying around. I had just built a new computer and had not bothered to buy an optical drive. Unfortunately, my existing Windows machine was 32-bit Windows XP, which meant running any files from the Windows 7 CD (like the boot sector program) was not a possibility.

I tried various tools like UNetbootin, WinToFlash, MultiBootISOs and others. I also tried some tricks with xcopy that did not seem to work. Since I work for Red Hat and am a Linux person, I happened to have a Linux machine at my disposal. Here’s what I found that worked:

  • I created a bootable (IMPORTANT!) 4GB primary NTFS partition on my 40GB external USB HDD
  • I formatted this partition with NTFS
  • I mounted the Windows 7 ISO and the NTFS partition, and copied the files from the ISO to the USB HDD
  • I used ms-sys to write a Windows 7 MBR to the USB HDD
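
Roughly, those steps translate to the following (a sketch from memory; device names, mount points, and the ISO filename are placeholders, so verify the device before writing to it):

# assumes the USB HDD is /dev/sdX -- verify first! mklabel destroys
# the existing partition table
parted -s /dev/sdX mklabel msdos
parted -s /dev/sdX mkpart primary ntfs 1MiB 4GiB
parted -s /dev/sdX set 1 boot on
mkfs.ntfs -f /dev/sdX1

# copy the ISO contents onto the new partition
mkdir -p /mnt/iso /mnt/usb
mount -o loop windows7.iso /mnt/iso
mount /dev/sdX1 /mnt/usb
cp -r /mnt/iso/* /mnt/usb/

# write a Windows 7 MBR to the base device
ms-sys -7 /dev/sdX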

There was at least one caveat here. I saw, in a place or two, suggestions to use ms-sys against the partition itself. When I ran ms-sys against the partition, it complained, so I ran it against the base device (in my case, /dev/sdb).

Hopefully this will help someone out there!


How to set the text with formtastic, select and collection

I’ve been on a tear again working on Riding Resource. We’re trying to do something interesting and slightly social, but I can’t give it all away just yet. There are some forms involved, and I decided that I was going to try to save some time by using Justin French’s formtastic plugin. Well, it surely saved some time, but, as with anything new, there’s a learning curve.

Since one of the big things that Riding Resource does is help stables see who is searching for them (by storing lots of demographic information), I wanted to make sure that any data these forms captured would be easily reportable. In the case of select lists, that means having models for them with associated integers and text. But when poking around with formtastic, I couldn’t figure out how to make a specific field of the model display in the dropdown for the select. Here’s an example:

f.input :preferred_discipline, :as => :select, :collection => DemographicPreferredDiscipline.all

melc in #rubyonrails on Freenode suggested that I try using a map. I’d seen these before, so I figured I’d give it a whirl:

f.input :preferred_discipline, :as => :select, :collection => DemographicPreferredDiscipline.all.map { |dp| [dp.text, dp.id] }

text is the name of the field I wanted to display in the select. What do you know? It worked! I figured I would share this here for posterity and Google indexing.


Random thoughts on net neutrality and free markets

This is basically a copy of a comment I made on Fred Wilson’s blog, but I wanted to put it here so that other people (who might possibly pay attention to me) might see it, too. So here are some random thoughts:

– Wireless technologies (WiFi) have evolved extremely quickly because they are largely “unregulated”. No one really owns the spectrum and every company can make a device that can access that spectrum, so they all compete to offer better performance/features/etc. in that space.

– The only organization that can create a monopoly is a government. Even if one company were to buy up everything and become the sole provider of a service, it still is not a monopoly. Either people will substitute something else in place of that service (walking instead of taking the train, even though it takes a long time), or someone will determine that the barrier to entry, no matter how significant, will ultimately provide a competitive alternative to the existing monopoly.

– Cable and telephone companies have a “near” monopoly on internet access, but only because they have already eaten the tremendous costs of infrastructure over time, and happened to be able to retrofit that infrastructure for use as data transport.

– Verizon seems to think that, despite the start-up cost, there is competitive benefit to setting up a new higher-speed data transport infrastructure, as one example. Companies like Clear have decided that, despite the lack of comparable performance to other options today, there is a competitive benefit to investing in the infrastructure for their wireless data service.

– “Net Neutrality” and spectrum auctions will likely serve to neuter the inevitable explosion in over-the-air data service as an alternative to existing wired infrastructures. Instead of net neutrality making the internet and data services better, it will ultimately further reinforce the near monopoly that the cable and phone companies already have, by eliminating the competitive advantage that wireless providers could exert over the cable companies by being net neutral. If Comcast were allowed to really, really manipulate its network traffic, customers who did not like this would move to services like Clear in favor of a neutral experience as a trade-off against performance. Forcing the net neutrality hand means this inevitable movement will be stifled.


Updating Air on Fedora 12 breaks it… hell ensues

After getting messages about updating Adobe Air for a while, I finally decided to bite the bullet and do it.

Big mistake.

Crazy hell ensued, in that nothing from Air would work any more after that, and all I got was cryptic core dumps. I tried to uninstall Air and Tweetdeck, and failed at that for a while, too, until I figured out the following:

  1. Air and Air applications like Tweetdeck actually end up as RPMs. You can (should) remove them using rpm -e as the root user or with sudo. (found via Adobe’s page, sort of)
  2. I found the RPMs by grepping: rpm -qa | grep ado — or — rpm -qa | grep weet
  3. You may have to remove or move your certificates folder in /etc/opt
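
In shell form, the cleanup might look something like this (the package names and the certificates path are illustrative; check the query output for the real names on your system):

# find the actual package names
rpm -qa | grep -i -e adobe -e tweet

# remove them (names here are examples)
sudo rpm -e adobeair tweetdeck

# move the certificates folder out of the way
sudo mv /etc/opt/Adobe /etc/opt/Adobe.bak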

So, if you decide to update Adobe Air on your Fedora 12 box and suddenly everything seems borked, you might just want to uninstall everything and install from scratch. I just did this and it worked well, and I’m up and running with the latest Tweetdeck for Linux.

http://www.adobe.com/products/air/
