Sunday, December 13, 2015

Embracing Reactive

I spent this week working on my webapp project.

For this project I had chosen to use ReactJS for the client side code. Additionally I was using a home-grown flux implementation for “gluing” the application together.

I was reading some general training material on Reactive programming and Flux and I realized my home-grown implementation might not have been as true to the Flux mindset as I originally thought.

So this week I explored whether to continue to use my existing implementation or choose one of the many pre-canned implementations.

For my implementation, I had three types of Actions: ServerActions, ResultActions, and ViewActions. I had one ActionCreator class that would create an Action object using one of these types; that Action would then be sent to my single Dispatcher class, which handled all actions.

Each Action had a type attribute that specified what action to take.

My Dispatcher class would inspect the Action type attribute and then directly call the appropriate method whether it be on my web service provider or a store directly.

This worked well, but after reading more about general Reactive programming I realized that, although my intent was good, I was probably not fully embracing the reactive style of programming, and more particularly a “correct” Flux implementation.

This made me worry that I might hit a “maintenance wall” as my application grows.

In my mind, a “maintenance wall” is a situation in your code where you made a design decision early on that, later down the road, you realize was a poor one. You are then left with a choice: either continue using the incorrect design or refactor like a madman until you fix it.

If you can avoid these poor decisions early, it is a “win” for your future self. When using a new technology you are just learning, I believe the way you “win” is by following the path and advice of more seasoned developers in that particular technology.

As you get more seasoned in that technology, you build up a toolset and come to your own conclusions about how best to use it, but until then you have to rely on the expertise of others. Sometimes those “experts” will lead you in the wrong direction, sometimes not; it just comes with the territory.

That was the situation I was facing.

The main problem I saw with my current implementation was too much coupling between the Dispatcher and the Action-handling code.

Additionally, my web service handler was directly calling methods on the various Store objects when it got results back from the server. Again, more needless coupling.

In short, my implementation had a lot of coupling between classes. This seemed very much like an anti-pattern when using reactive programming. So I decided it was time to find a more “correct” design for my flux implementation.

I knew I didn’t need a wholesale replacement of what I had, just a slight course correction. I eventually ran across the ‘flux-rails-assets’ gem and decided this was what I was looking for, or at least the start of what I needed.

This gem provides a Dispatcher class and an EventEmitter class. You create one Dispatcher instance, and all your Stores are instances of the EventEmitter class. Each Store registers a single “action handler” callback function with the Dispatcher to listen for Actions as they are sent.

This way ALL Stores “see” all Actions but only handle the ones they are interested in.

This is a lot better than my implementation, as the code for handling actions now lives alongside the Stores it affects. A maintenance win.

Finally, components (think UI components) register with the Store instances to be notified when the Store changes.

Unfortunately, since all of my home-grown Flux code was written in CoffeeScript and the gem was not, it presented a bit of a challenge on how to integrate it.

Because of this I was not able to use the “extends” keyword from CoffeeScript to extend the EventEmitter class like I had hoped for. Well, at least I couldn’t figure out how to make it work.

If someone knows how, please let me know, as I don’t do a lot of CoffeeScript in my day job and may have missed a nuance of the language that would allow me to do it.
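
For reference, the shape I was attempting looked something like this (a sketch, assuming the gem’s EventEmitter is visible as a global to the CoffeeScript code):

class SessionStore extends EventEmitter
  # would inherit addListener/removeListener/emit directly
  constructor: ->
    super()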

So what I chose to do was have each Store own an instance of an EventEmitter. For my SessionStore, the top part of the code looked like this:

root = exports ? this

class SessionStore
  # class-body assignments are private to the class and shared by its methods
  loggedInProfile = null
  emitter = new EventEmitter

To allow components to be notified when the Store changes I wrapped calls to the emitter instance like so:

emit: (type) ->
  emitter.emit(type)

Then when a component registered with the Store for an event I would add that to the embedded emitter. Here is the add and remove listeners for a user’s profile in my SessionStore object:

addProfileUpdatedListener: (callback) ->
  emitter.addListener(Events.PROFILE_UPDATED, callback)

removeProfileUpdatedListener: (callback) ->
  emitter.removeListener(Events.PROFILE_UPDATED, callback)

The next big hurdle was how to attach a Store to the single AppDispatcher class supplied by flux-rails-assets. For each Store, an action handler needs to be registered with the AppDispatcher.

The AppDispatcher will then call the action handler for each registered Store and each Store will do something with the actions it is interested in. The key point here is each Store gets every action.

The big issue I had here was how to attach the Store to the dispatcher. Here is the SessionStore implementation I got working after much trial and error.

root.SessionStore = new SessionStore

root.SessionStore.dispatchToken = AppDispatcher.register(SessionStore.handleAction)

The problem always revolved around the issue of what “this” was at the time of the call.

Now it was just a matter of wiring everything together. I will use the feature of the user updating their Profile as an example.

The SessionActionCreator for the update profile action looks like this:

updateProfile: (profile) ->
  action = 
    type: ServerActions.UPDATE_LOGGED_IN_PROFILE
    profile: profile
  AppDispatcher.dispatch(action)

This is called by a component when the profile needs to be updated.
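
As a sketch, the call from a profile-editing component might look like this (the component and handler names are hypothetical):

handleSave: function() {
  SessionActionCreator.updateProfile(this.state.profile);
},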

In the SessionStore’s handleAction method, the UPDATE_LOGGED_IN_PROFILE Action is handled like this:

@handleAction: (action) ->
  type = action.type
  console.log("SessionStore is handling: "+ type)
  switch type
    when ServerActions.UPDATE_LOGGED_IN_PROFILE
      WebAPIUtils.updateProfile(action)

The updateProfile method in the WebAPIUtils class is called next and looks like this:

@updateProfile: (action) ->
  console.log("WebAPIUtils.updateProfile called")
  $.ajax({
    url: "/update_profile"
    dataType: 'json'
    type: 'PUT'
    data: { profile: action.profile }
    success: (data) ->
      SessionActionCreator.profileUpdated(data.profile)
    error: (xhr, status, err) ->
      console.error("/update_profile", status, err.toString())
  })

Notice the success handler funnels the result back through the SessionActionCreator. Here is the profileUpdated method:

profileUpdated: (profile) ->
  action = 
    type: ResultActions.LOGGED_IN_PROFILE_UPDATED
    profile: profile
  AppDispatcher.dispatch(action)

This gets passed back to the SessionStore via the AppDispatcher. Here is the relevant part of the handleAction method:

  when ResultActions.LOGGED_IN_PROFILE_UPDATED
    SessionStore::setLoggedInProfile(action.profile)
    SessionStore::emit(Events.PROFILE_UPDATED)

This code updates the profile stored in the Store and then emits the PROFILE_UPDATED event, which registered components are listening for.

So now, after making this slight course correction, I have the following results:

  • All Actions are created by “action creator” class instances
  • The generated actions all go through the single AppDispatcher
  • All Stores see all Actions
  • There is a Store instance for each of the types of objects the app has
  • Components register with the various Stores for events they are interested in
  • Home-grown code has been removed

One thing that I don’t like is the registering of the component listening functions. In my old implementation a component would register using a key and a callback function. So when it was time to remove itself, it just used the key.

Now with this implementation I only use a function (because that is all the EventEmitter takes), so for a component to deregister, it must pass the original function it registered with. I’m not sure why, but that doesn’t feel right to me.
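
To illustrate, a component ends up doing something like this (a sketch; React.createClass autobinds methods, so this.onProfileUpdated is a stable reference):

componentDidMount: function() {
  SessionStore.addProfileUpdatedListener(this.onProfileUpdated);
},

componentWillUnmount: function() {
  // must be the very same function that was registered above
  SessionStore.removeProfileUpdatedListener(this.onProfileUpdated);
},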

I will need to “noodle” on that one a bit.

Till next time.

Sunday, November 29, 2015

The Results

So last week I threw down the gauntlet: I was going to use this down week from work to rework Pain Logger to use the Realm database engine instead of CoreData and sync it all with CloudKit. Here are the results:

Monday

So I started looking at the existing code and I realized I needed to reorganize it a bit. In order to make it easy to manage and maintain, my idea was to create a single DataService class and have all persistence requests go through that.

The original Pain Logger code had a fairly standard CoreData stack being stood up in the AppDelegate (capturing the MOC on the way) and then it used helper classes for each of the managed objects. These helper classes isolated persistence logic away from the managed object instances and out of the AppDelegate. There was one managed object type per table and therefore one helper class per object type.

So my first order of business was to move all the helper methods and the CoreData stack initialization to the new DataService class.

I actually decided to concentrate on one UIViewController (VC) at a time, and move just the persistence methods it used. That way I could see progress.

After moving the methods for my first VC, I tested and found everything was still working.

The first VC just shows a list of the top level objects in my database so it wasn’t that hard.

Now I needed to install Realm. My idea was that when the DataService stood up, it would not only configure its existing CoreData stack but also stand up the Realm database in order to migrate all the records.

I installed Realm for Swift 2.1 per the documentation Realm provides when you download their code.

I also installed the other tools, such as the Realm plugin for Xcode and the Realm Browser Mac app.

The only catch was that after adding the “Run Script” phase (per Realm’s documentation) and trying a build, the build failed because the “strip-frameworks.sh” script didn’t have execute permission. So I opened up Terminal and added execute permission to that file (chmod +x) and all was good. Note: after you make this permission change, you next need to do a project clean so you won’t be using a cached version of the script.

My next hurdle was that I needed to create Realm representations of the managed objects. So, for example, my Category class (which extends NSManagedObject) got a sister class called PLCategory (which extends Object).
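
As a sketch (the property names here are illustrative, not my actual model), the sister class looks like this:

import RealmSwift

class PLCategory: Object {
    dynamic var name = ""
    dynamic var sortOrder = 0
}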

Here was my first dilemma (and opportunity for improvement). CoreData doesn’t support enums, so all the enums in my data objects had to be converted to and from NSNumber objects. In Realm, the answer is to override the variable’s getter/setter. For example, in the CoreData model assume you have this variable:

@NSManaged var line_color:NSNumber

I have an enum for this variable called LineColorType, so in the Realm object this becomes:

private dynamic var line_color = LineColorType.LINE_COLOR_GREEN.rawValue
var lineColor:LineColorType {
    get {
        return LineColorType(rawValue:line_color)!
    }
    set {
        line_color = newValue.rawValue
    }
}

The big change here is that the application code will use the more standard camel-case variable while Realm stores the raw value in its database.

My guess is I probably could have done this same thing with CoreData, but I’m not going down that path right now.

Dilemma: there is a lot of boilerplate code left around to support CoreData, and now I am adding Realm on top of it; although the new code is tighter, it is still MORE code. When do I get rid of the old boilerplate? I think the best approach is to finish all the changes and submit an update to the App Store; then, after about six months (once I feel confident all existing users have upgraded and opened the app so the database has been migrated), I’ll remove the old CoreData code.

Day 1 Progress:
At the end of the day I have the model migrating to Realm. A good start I think. Tomorrow, I’ll start on the CRUD operations.

I ran into two issues for the day. First, how do I browse my Realm database? This StackOverflow link shows how to do that.
Second, my computed fields showed up in the database as well. It turned out I needed to mark them as non-persistent.
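
That is done by overriding Realm’s ignoredProperties(). For example, for the lineColor computed property shown earlier, it would look like this:

class PLCategory: Object {
    // ... persisted properties ...

    override static func ignoredProperties() -> [String] {
        return ["lineColor"]
    }
}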

Dilemma: Do I need a unique id on my objects? I decided I did, but I couldn’t just use an int because Realm currently doesn’t have an auto-incrementing id scheme (it supposedly is coming). Anyway, I chose to use NSUUID.UUIDString in the interim. We’ll see how that goes.
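
A sketch of what that looks like (again using PLCategory as the example):

class PLCategory: Object {
    dynamic var id = NSUUID().UUIDString

    override static func primaryKey() -> String? {
        return "id"
    }
}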

Tuesday

For today, my goal was to flesh out the persistence layer. My idea was to, along with the default Realm, set up a Realm for caching data when the user is offline, and a Realm to simulate the eventual online CloudKit support.

For testing purposes I plan to have a flag that I can turn on and off to simulate the app being in offline mode. Eventually this will be replaced by real code to check the availability.

As usually happens for plans like these, I ran into a snag.

I needed to add some test data, so I started using the app and realized there were parts of the UI that, after the conversion to Swift, compiled but didn’t actually work.

It turns out I had used an automated conversion program to convert some of the original Objective-C code so that I didn’t have to type as much, and it didn’t convert everything as well as I would have liked.

So I spent most of the day fixing those issues.

By the end of the day I was able to add records as before, with them getting saved both to the CoreData database and the default Realm.

Wednesday

Today was a short day due to the preparations for the Thanksgiving holiday. My plan was to regroup and get more of the things I had planned to accomplish the previous day working.

One of the nagging issues I have is the need to keep the old CoreData code active while also adding the new Realm support. That way my existing ViewControllers can consume the old model objects until I am ready to make the transition to the new ones. But this has turned out to be more problematic than I had hoped.

Another issue is parent-child relationships. In the existing code, when adding a new child, the child object would first be added to the database, then some computed values on the parent object would be updated, and the parent would be saved.

This caused me to have multiple completion handlers that “chained” the updates. With Realm I can do all of that in one write transaction which is very helpful, but untangling the mess I created in the old code will take a bit of time.
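
With Realm, the parent updates and the child insert can live in one write transaction, along the lines of this sketch (the names here are hypothetical):

try! realm.write {
    parent.children.append(child)
    parent.recalculateComputedValues()
}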

Side note: Argghh! I’ve been “working” for about 3 hours and only got about 30 minutes of work in. I need to fill all these people up with tryptophan and me with caffeine. Unfortunately that won’t happen till tomorrow.

One thing that I don’t understand right now is whether I should cache the Realms I create. Are they expensive to stand up? Reading the documentation, it feels like they are not. So, to be thread safe for now, I will make the Realm accessors in my DataService computed attributes like this:

private var defaultRealm:Realm {
    get {
        // a fresh Realm() per access; Realm instances are confined to the thread that creates them
        return try! Realm()
    }
}

Is this a bad idea? I’m not sure.

Thursday - Thanksgiving

Not sure I will get much done today. Too much food, family and football!!

Friday - Saturday - Sunday

Well as expected, too much family time and not enough development time. I’ll have to continue this effort next week.

I didn’t quite make the goal I had originally set out to achieve, but I am now much more comfortable using Realm, and I think this will be a very fruitful effort. I’ll make another post in a couple of weeks to update my progress.

Till next time,

Sunday, November 22, 2015

The Plan

Well, I made it through last week’s annual run of Competition Manager pretty much unscathed. The software worked flawlessly; however, as usual, there were last-minute requests for changes.

Competition Manager is a little different from other products I have worked on, as (after the registration period closes out) it has to be rock solid for a frantic 24-hour period and then it is done until next year.

I always get requests for changes during that 24 hour period. It has always been my worst fear that a change request comes in that HAS to be implemented in the current run.

So far that has never happened and although I got another change request this year, we were able to work around the concern and put it off till next year’s competition. Whew!! another crisis averted.

So with that behind me for another year, I need to turn back to my mobile app, Pain Logger.

I completed the conversion to Swift about 2 weeks ago. Now it’s time to upgrade it.

Fortunately for me, the holidays provide time away from my day job and allow me to (while of course spending time with family and resting up) look closer at some of my side projects.

I read somewhere that to really be productive, you should state what you intend to do and your goal date so others can keep you accountable. So that is the REAL goal of this post.

My Goal
My goal this week is to rewrite Pain Logger’s persistence layer to use the Realm database engine instead of Core Data. I intend to write it in such a way that an existing install will automatically migrate the existing CoreData database to Realm when the application launches and then on subsequent runs it will use the Realm database and not CoreData.

Once that is done I intend to stand up CloudKit support for the app. I intend to use Realm as the offline cache for the CloudKit database supporting the app.

So that is my plan, I intend to blog about my progress (which I hope to be complete) next week.

There, I now have placed the proverbial stake in the ground.

Now, why did I make the decision to go with Realm instead of using CoreData?

First, I wanted to learn something new.

Second, while CoreData works, I’ve always been put off by all the boilerplate code needed to stand a stack up, along with all the other moving parts you have to keep in mind as you work with it. It has always felt so “2000s”-ish to me. I want something more modern.

Realm seems to have that modern feel that I am looking for.

Having said all of that, I do, however, reserve the right to change my mind if this just turns out to be a really bad idea after getting into this.

So there you have it, until next time, here’s hoping for progress.

Sunday, November 8, 2015

Legacy Prawns

Ok, so I am coming to the close of my annual deployment of my Competition Manager application.

Right now registration is closed and the actual competition will happen this Friday.

In a way this is a bitter sweet time. In one way I am excited to see the culmination of my effort, but in another way it is a distraction to the other projects I am working on.

The project is a legacy app using Ruby on Rails version 3.2. I know I should update it to the latest version of Rails, but since it is not a paying project it’s hard to justify the effort.

At any rate, when the actual competition occurs this Friday, everything must work seamlessly, as the event runs over about 20 hours and all the scores and results must be collected, entered, calculated, and reported on during that time.

This is the critical time for Competition Manager as there really is no time to fix any bugs if they were to arise.

So I was doing my due diligence by testing the scoring and reporting modules of the application yesterday, and I realized there was an annoyance for the scorekeepers I should try to address.

In the past, after the scores for an event were entered, the user would save the scores and print the report. This caused a PDF file to be generated and shown in the browser.

Unfortunately this takes the scorer out of the application and forces them to save the report manually for later printing or print it right then.

I figured a better approach would be to download the file to the scorer’s computer as a separate pdf file without taking them out of the screen they were on. That way they could deal with all the reports at one time.

To do this I needed to do two things:
1. Give each event report a separate file name
2. Download the report instead of opening it in a separate browser window.

So this takes me to the crux of this post. My overall intent with these posts is to document things I learned or had to research so that I, for one, won’t have to re-learn the issue again, and maybe in the process it will help others.

Competition Manager uses an older gem called “prawn” for its PDF generation and “prawnto” to support templates.

Yes, I know there are better solutions, and even “prawn” has a newer version, but one week out from the actual competition I am not about to change out a major component of the product.

So I had to figure out how to fix this with the current legacy code.

The way this works is I have a route set up to serve the reports; once called, it retrieves the correct data for the report, then uses prawnto to load the template and generate the PDF. The original controller method looked like this:

def event_results
  @event = Event.find(params[:event_id])
end

So what would happen is the client would call this method with the event id and then the template named “event_results.pdf.prawn” would be used to generate the pdf file that was then returned to the client.

I knew I needed to set the filename and stream the file back to the client, setting the correct headers, but how to do it was hard to find. Here is what I eventually found that would work:

def event_results
  @event = Event.find(params[:event_id])
  prawnto :filename => @event.name + ".pdf", :inline => false, :template => "event_results.pdf.prawn"
end

So now what happens is the filename is set to the name of the event (with a .pdf extension), it is marked as inline false so the document will be downloaded, and finally the template to generate is specified.

So in the end a one line change solved the problem. I tested it, deployed it and the product is ready for action this Friday.

Till next time.

Sunday, November 1, 2015

Solving From A Different Direction

As of late I have been a bit remiss in getting these blog posts out the door.

Part of the issue has been I didn’t have a good blog creation solution. I have tried standalone apps, the provided editor from my blog provider and I even tried using different plugins to get the results I wanted.

This week I was documenting the REST API for my new web project and I realized that what I was doing there might solve the problem I was having here.

The problem has been how to show code snippets. So far all the standalone blogging apps I have tried have failed in one way or the other when I tried to attach code. In fact it was so bad that in my last post I had to post screen shots of the code.

That’s not right, so I have been hampered by this problem for a while.

As I said, I was documenting the REST API for my new web project and I have been doing it in Markdown so that I could view it, nicely formatted, from the git repository. In it, I had to show an example of the REST call in CoffeeScript as well as show the resulting JSON that was returned.

Markdown has a very simple way of showing code snippets, but for me it wasn’t working exactly right. It was delineating the code, like I wanted, but it was showing it all on one line.

What I learned, after some investigation, is Markdown has different flavors. Oh the joy of the open source world we live in ;-/

Anyway, once I figured out the syntax for the particular flavor of Markdown my git repository supported, I was able to get the code snippet formatted properly. So my code snippet looked like this:

$.ajax({
  url: "/goals",
  dataType: 'json',
  type: 'POST',
  data: {
    goal: ...
  },
  success: function(data) {
    ...
  },
  error: function(xhr, status, err) {
    ...
  }
});
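
For anyone fighting the same thing, the fence syntax (this is the GitHub-flavored Markdown style, which appears to be what my repository supports) is three backticks, an optional language name for highlighting, the code, then three closing backticks:

```coffeescript
$.ajax
  url: "/goals"
  dataType: 'json'
```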

With this working it got me to thinking. What if I just wrote Markdown documents, and then exported them to HTML and pasted them into my blog? Would it work?

So today’s post is mostly a proof of concept of that. I can already see one downside and that is I’ll need to keep the Markdown versions locally, if I want to make any edits. It pretty much makes the editor on the blogging site useless.

I found several online editors that can take Markdown and export the HTML. Another requirement was that this HTML file had to be a single file, otherwise it would be hard to cut and paste it into the blog application.

After trying JavaScript, Swift and Ruby code I was pretty confident this could work. However, I also needed to show ReactJS code as well.

This has been the code that has presented the most challenge to the various solutions I have tried. The reason, I think, is that (since I use JSX syntax) the code starts out as JavaScript but then, in the “render:” method, turns into XML/HTML.

All this works because of the “JSX” compiler.

However, I have not found a standalone app that has handled this well. Admittedly, I do need to go back and see what support the standalone apps have for Markdown, since I now think that is the right format to use. At any rate, here is a simple JSX file:

var Page = React.createClass({
  getInitialState: function() {
    return {goals: []};
  },

  componentDidMount: function() {
  },

  render: function() {
    return (
        <div className="col-md-10 main defaultheight">Page
        </div>
    );
  }
});

I was pleasantly surprised how well this worked.

There is another advantage in using this scheme and that is any documentation I write for my iOS projects can also be done in Markdown (well a flavor of it).

So in the end, the fix to a problem I was having with a different task (documenting the REST API) may also solve the problem I have with including code snippets in blog posts. Anytime I can have one solution that solves two issues, I call that a win!

Till next time.

Sunday, October 18, 2015

Reacting with Rails

This week I returned to my new web application.  I decided a few weeks ago, the best solution for this app would be to use ReactJS for the client.

The reasons for this decision were:

1. I had an interest to learn ReactJS
2. I felt I only needed a client side solution and not a full MVC stack. 

Unfortunately, I don't know ReactJS, but how hard could it be, right?  

To be honest, I found it fairly natural.  I had a few basic questions that I thought if I could answer I would be on my way.

  • How to create a ReactJS component?
  • How to connect ReactJS components into an application?
  • How to serve a ReactJS application from RoR?
  • How to load data from the server and update a ReactJS component's state with that data?

Before I could start answering these questions, I had to decide whether to use the Rails asset pipeline to build my ReactJS application or to build the app using a separate build "eco-system" inside or outside my Rails application.

I had read suggestions that I should keep my client build system separate from my server build system since npm packages would be more up to date than their counterpart  Ruby gems, and some JS libraries I might want to use wouldn't even be implemented as Ruby gems.   

I contemplated this for a while, but in the end I decided that even though it seems cleaner to keep the build systems separate, I really don't know NodeJS well enough to develop a production level application nor do I have the time right now to get to that level of expertise.  I do need to delve into that arena, in the future, but not right now.

So I decided I would see how far I could go with just leaning on the standard RoR tool chain.

This turns out to be a MAJOR decision.  I did find a few links on making this decision.  One of the best discussions of the different alternatives, I found, was a blog post by Blaine Hatab (link here).

Using Blaine's classification, I chose what he calls method 1. In his post he turned this method down, as he wanted to do server-side rendering. My goal was to avoid server-side rendering by serving up the client in one call and then having the client make AJAX calls to the RoR application for its data.

With that decision made, I was ready to tackle my list of questions to get going.

How to create a component?

There are plenty of tutorials on how to do this so I won't go into details about this.  The big decision I had to make for this step was should I use JSX or JavaScript.  

If I chose JavaScript then I had the choice of straight JavaScript or CoffeeScript (since it is baked into RoR). The tutorials I found were a little bit of both.

In the end I chose to use JSX for my ReactJS components and CoffeeScript for any other code.  

To make this work all I had to include was the 'react-rails' gem.

After the asset pipeline is run by RoR you end up with straight JavaScript files anyway, so it felt natural to use the JSX syntax for ReactJS components, since it looks a lot like HTML with JavaScript mixed in.

So on to the next question.

How to connect React components into an application?

This one turned out to be real easy.  Since I was going for a SPA type application I needed to create the root component of the application and serve it via a view.

My top-level component looks like this. (Note: I have had a terrible time showing code in my blogging application, so what follows are sanitized sketches of the code rather than exact listings; if anyone knows of a good client that actually works with Blogger, I'd appreciate it.)

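In sanitized form (component names illustrative), it has this shape:

var Application = React.createClass({
  render: function() {
    // every other component in the app hangs off of this one
    return (
      <div>
        <Header />
        <MainContent />
      </div>
    );
  }
});
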
In the end I created a router via the 'react-router' gem but for this post I won't go into that.  I'll talk about that in a later post.

From here all the other components of the application just hang off of this one.  On to the next hurdle:

How to serve a ReactJS application from RoR?

This turned out to be easy as well.  By using 'react-rails' all I had to do was connect up the main application from above.  'react-rails' comes with an easy way to do this so my index.html.erb looks like this:

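Sanitized, the view is just the react-rails view helper (the component name is illustrative):

<%= react_component 'Application' %>
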
Yep, it's a one-liner. I also removed all the extra stuff from my layout, as the ReactJS component was going to supply it all, so my layout ended up looking like this:

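Sanitized (the title is a placeholder), the layout is just the bare Rails scaffolding around the yield:

<!DOCTYPE html>
<html>
<head>
  <title>MyApp</title>
  <%= stylesheet_link_tag 'application', media: 'all' %>
  <%= javascript_include_tag 'application' %>
  <%= csrf_meta_tags %>
</head>
<body>
  <%= yield %>
</body>
</html>
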
Again, extremely simple.  On to the final challenge:

How to load data from the server and update a component's state with that data?

This is where things got a little complicated. ReactJS components have 'state', and the idea is that the view binds to this state so it always stays up to date. Additionally, a component can pass this state down to child components, where it arrives in their 'props' object.

I found several tutorials that used the ability to pass in properties in what I considered a bad way.  For example they would pass in callback methods so that when an action was taken in the child component the parent's callback function (which was passed as a property to the child) would get called.

I didn't like this idea as it seemed to couple the components together very tightly.  What if I create a panel component to show an object and want to use it somewhere else? Will I remember to wire everything up appropriately?  Knowing me, probably not.  

This led to a design pattern that has been espoused for ReactJS applications called Flux.  To be honest when I first read about it, my first reaction was "why did anyone need to come up with a new pattern just to replace MVC?"  

I definitely had an aversion of learning this new way of thinking.

I tried to go down the route of passing in properties and coupling with callbacks, as outlined above, but as I started to segment my components into what I felt were logical pieces, it got unwieldy.

For example, in my application's main view I have a header component and two list components.  In the first list component it shows a series of panels, one for each object in my application.  In the second list component it shows children of the currently selected panel in the first list. 

The main application was the only component that knew about both lists. In order to wire up the first list so that the second list would get updated when the selected panel in the first list changed, I would need to pass into the first list a callback method owned by the parent component, and then the parent component would communicate to the second list what to do.

It got even more complicated if actions in the second list affected the state of the selected panel in the first list.  What I needed was an event system.

This is where the Flux design pattern came in.

So, reluctantly I started to learn about Flux.  

The idea with Flux is the components are not dependent on each other.  Instead they communicate by firing and reacting to events.  Since Flux is a design pattern, there really isn't any code to install, so the implementation is left up to the developer.  

I'm going to talk about how I implemented it; I'm sure others will have different opinions on how to do it. Probably my way isn't even correct, but it is working for my needs, so I am going with it.

Essentially the way I have it implemented is as follows:

- All state is stored in 'Store' objects.  These are essentially singleton objects. Right now I have a SessionStore for session type objects (user's profile) and an AppStore for everything else.  So if I load something from the server, say the list of objects to show in my first list component above, they are stored in the AppStore object, NOT in the list component's state object.  

- When the list component mounts, it registers as a listener to the AppStore component. Specifically, it registers its interest in the objects being loaded. An example would be:

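In sanitized form (event and method names illustrative), the method on the AppStore looks like this:

addObjectsLoadedListener: (key, callback) ->
  # the event name stays hidden inside the store
  listeners[Events.OBJECTS_LOADED] ?= {}
  listeners[Events.OBJECTS_LOADED][key] = callback
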
Notice this is CoffeeScript.  Like I said above, any JS class that isn't a ReactJS component I wrote in CoffeeScript for the succinctness and the safety that CoffeeScript affords.  

A couple of things about this method: when a component registers as a listener, it passes a key that represents itself (I use the component's display name) so that it can deregister itself when it unloads. It also passes in a callback method. That callback is the magic: when the event occurs, the callback is called, and the component takes the appropriate action based on the event that occurred.

- If an action occurs in the component, it calls a method on an ActionCreator class specific to that event. The ActionCreator is responsible for collecting any parameters about the action, packaging it up as an action object, and sending that action to a Dispatcher.

- The Dispatcher class acts as, well, a dispatcher. If the action is a server action, it is passed to a WebAPIUtils class which communicates with the server tier; if it is a view action, it is dispatched to the appropriate 'Store' object, which in turn fires the appropriate event for the action that occurred.

At this point the circle is complete.

I realize this was a little vague so I'll finish this post with code from the example above. One important note about this before I start into it.  This is not exactly how the code is implemented.  I have removed calls to helper methods, usages of constants, and I sanitized the code to be more generic than the actual implementation.

First here is my list component. Its job is to show a list of objects retrieved from the server:

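A sanitized sketch of ObjectPanelList (helper calls removed, names simplified):

var ObjectPanelList = React.createClass({
  getInitialState: function() {
    // the store is empty until the objects arrive from the server
    return {objects: AppStore.getObjects()};
  },

  componentDidMount: function() {
    // register with a key (the display name) and a callback
    AppStore.addObjectsLoadedListener('ObjectPanelList', this.onObjectsLoaded);
  },

  componentWillUnmount: function() {
    AppStore.removeObjectsLoadedListener('ObjectPanelList');
  },

  onObjectsLoaded: function() {
    // new state triggers a re-render with the latest objects
    this.setState({objects: AppStore.getObjects()});
  },

  render: function() {
    var panels = this.state.objects.map(function(object) {
      return <ObjectPanel key={object.id} object={object} />;
    });
    return <div>{panels}</div>;
  }
});
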
Note in the 'getInitialState' method the loading of the objects from the store class; initially it is empty. Also note in the 'componentDidMount' method how the component attaches itself as a listener to the AppStore class. It's also important to look at the callback method that is registered when the component is added to the AppStore's listeners. This callback sets the state of the ObjectPanelList, which causes the render method to be rerun, which in turn refreshes the view with the latest state.

Next let's look at the pertinent methods in the AppStore class.

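A sanitized sketch of the pertinent AppStore methods:

class AppStore
  # private class-body state
  objects = []
  listeners = {}

  getObjects: -> objects

  addObjectsLoadedListener: (key, callback) ->
    listeners[Events.OBJECTS_LOADED] ?= {}
    listeners[Events.OBJECTS_LOADED][key] = callback

  removeObjectsLoadedListener: (key) ->
    delete listeners[Events.OBJECTS_LOADED][key] if listeners[Events.OBJECTS_LOADED]?

  # called when the objects come back from the server; notifies every listener
  objectsLoaded: (loaded) ->
    objects = loaded
    handlers = listeners[Events.OBJECTS_LOADED] ? {}
    callback() for key, callback of handlers
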
As I mentioned earlier, this is a CoffeeScript file.  Notice how the add/remove methods hide the event names from the caller and register/deregister the listener in an internal hash of listeners.

The important method is the 'objectsLoaded' method which gets called when the objects are loaded from the server.

Now when the application is loaded an event is fired to load up the objects.  This is done on the parent component of ObjectPanelList.  Here are the pertinent methods of that component:

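Sanitized (method names illustrative):

componentDidMount: function() {
  this.loadObjects();
},

loadObjects: function() {
  ObjectActionCreator.loadObjects();
},
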
Essentially, when the component is mounted it calls an internal method to tell the ActionCreator to load the objects. One cool side effect is that it doesn't really matter whether this action completes before or after the ObjectPanelList renders. If it happens before, the ObjectPanelList gets the correct list from the AppStore when it renders; if it happens after, the callback method the ObjectPanelList registered with the AppStore is called and it picks up the right objects then.

The pertinent method of the ActionCreator class is as follows:

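Sanitized (names illustrative):

class ObjectActionCreator
  @loadObjects: ->
    action =
      type: ActionTypes.LOAD_OBJECTS
    Dispatcher.handleServerAction(action)
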
This is a class-level method that packages up the action and calls the appropriate Dispatcher method. Here is one point I am not sure about: the ActionCreator (at least the way I have it implemented now) knows that this must be a server action. Another approach might be to have a handleAction method on the Dispatcher that encapsulates this knowledge.

The pertinent Dispatcher methods are as follows:

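Sanitized (again, names illustrative):

class Dispatcher
  @handleServerAction: (action) ->
    # just repackages the original action, tagging where it came from
    @dispatch(source: 'SERVER_ACTION', action: action)

  @handleViewAction: (action) ->
    @dispatch(source: 'VIEW_ACTION', action: action)

  @dispatch: (payload) ->
    switch payload.action.type
      when ActionTypes.LOAD_OBJECTS
        WebAPIUtils.loadObjects()
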
Again we find class-level methods. The '@handleServerAction' method just repackages the original action. I'm not sure that is the best approach, but in the tutorial I was (loosely) following, that was how I understood it to be implemented. It seems like an unneeded level of indirection.

Finally, here is the method on the WebAPIUtils class:

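Sanitized (the url and names are illustrative):

class WebAPIUtils
  @loadObjects: ->
    $.ajax
      url: "/objects"
      dataType: 'json'
      type: 'GET'
      success: (data) ->
        # hand the result straight to the store, which notifies its listeners
        AppStore.objectsLoaded(data.objects)
      error: (xhr, status, err) ->
        console.error("/objects", status, err.toString())
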
This is the final piece of the puzzle. An AJAX call is made to the server, and when the response is received (hopefully successfully) the AppStore's 'objectsLoaded' method is called, which in turn calls the attached listeners. One concern I have here is whether I should have this call an ActionCreator method to put the action into the system. It would seem that might be the case, as it would match the architecture of the outbound call, but at this point it seemed unneeded. I'll need to monitor the code to see if that is a refactoring step I should make.

Well there you have it.  A complete round trip from the JSX components to the server and back.  This avoids almost all coupling of components together and sets up an event system that can be used for both remote calls as well as inter app calls.  

For example in another case I need to update a dependent list when an object in the first list is selected.  This uses the same pattern, but with a different event type and there is no remote server call.  

As I continue to scale out the features in this new app, I find this pattern to be holding up quite well, and more importantly the code feels natural and well separated.

That's it for this week.  

Sunday, October 4, 2015

Rinse and Repeat

This week starts what I call my "silly" season.  Except this year it is more crazy than normal.  

Each year I deploy my Competition Manager product to support the state wide North Carolina Nazarene Youth International Teen Talent Festival. 

Besides doing this, I also continue working on my existing apps, and this year I also have a second web app I am working on (more on that in a later post).

Competition Manager uses Ruby on Rails for the server side, JQuery UI for the client side and MySQL for the database layer. It is used to manage the registration, scoring, accounting, and reporting for the event.  Pretty much any aspect that could be automated it does.

Fortunately, each year (we are going on 6 years now) I learn steps to streamline the process.  This year those gains have helped out as the organizers want the system up and running two weeks earlier than normal.  I just found this out last week, yikes!!

I have gotten the steps down to these five:

  1. Deploy the production server with last year's image
  2. Configure my development machine with last year's code and data
  3. Make and test any requested changes on my development machine
  4. Deploy changes to the production server and do a sanity test
  5. Hand control to event organizers and provide any necessary training

I got through the first step with ease. Probably the easiest I have ever done. So I figured the next steps would go as smoothly.

I was wrong.

The night before, I had upgraded my dev machine to the latest version of OSX, El Capitan.  I had heard about it adding stricter security settings but I figured that was for the normal user, Apple wouldn't do anything to hinder a developer, right?

Over my years I have used Windows, Linux and OSX systems for development, and I have come to the (very opinionated) conclusion that OSX is the best operating system for my development needs.

It gives me the robustness and configurability of Unix under the hood should I need it, while also giving me the 'GUI-ness' of Windows without all the hacks and quirky setups the Linux windowing environments have.

I absolutely hate administering a computer when I should be developing. 

Don't get me wrong, over time each OS has advanced in features and usability.  But for me, as long as I can afford it, I will stay with OSX for my development needs.

At any rate, I figured "what could go wrong?", I'll just install El Cap, and start off fresh and new with standing up this year's version of Competition Manager.

Well, it turns out the security settings that El Cap comes with don't allow you to change some files, like those in /usr/lib.  

The problem I had was that the mysql2 Rails gem was looking for a MySQL dynamic library in /usr/lib, but it was in /usr/local.

The traditional fix for this is to set up a symlink in /usr/lib pointing to /usr/local. No problem, I thought, I'll just do that.

ln -s /usr/local/mysql/lib/libmysqlclient.18.dylib /usr/lib/libmysqlclient.18.dylib
Operation Not Permitted

What! I can't do something this basic! But I have sudo privileges!

The first Google entries I found dealt with this problem back when El Cap was in beta. Their solution was to turn off the security settings, but there was also a caveat that this ability would not be allowed in the gold release.

Arggh. I found instructions on how to reboot the machine and do this in recovery mode, but it didn't look like something I really wanted to do. Not to mention I didn't want to waste the time. I wanted to program!!

Right before I was about to reboot into recovery mode, I decided to search one more time, and I found a more recent solution saying that I should just reinstall MySQL using Homebrew.

So rather than take the time to change my security settings and go through all that rigamarole, I decided to try the second suggestion first. I fully expected it not to work. But wouldn't you know, after about 15 minutes it was loaded and I was back in business.

Whew! I dodged a bullet there. Anyway, from there it was all downhill. By the end of the day on Saturday I had plowed through steps 1 through 3, and I am just about ready to finish up steps 4 and 5 and release it for production.

A good weekend's work!! Till next time.

Wednesday, September 30, 2015

UI Testing

This week I spent time looking at UI Testing using iOS 9 and Xcode 7.

I'm not going to go through the details but I did find these two links to be good enough to get going.


The first one is a little light on the details but I found it useful in giving simple steps for adding UI testing to an existing app.

The second one, I thought, did a better job of showing how to actually assert test results once you got everything up and going.

So what was my intent with this and why should I take the time to add this to my current project(s)?

Well, to be honest, it didn't take more than about 30 minutes to read the articles and get going. I did do a test project prior to adding it to my existing production, or soon-to-be-production, apps.

For the conversion of Pain Logger to Swift, I see this as an invaluable tool to verify that once I switch a view controller over, it still works as it did before.

For my "in work" app I see adding this functionality now as a way of validating and protecting from regressions in the future.

In short, setting it all up, and running the first tests was fairly simple, and I think will be a valuable tool in the tool chest.

I spent most of my time diving into ReactJS this week.  There is a lot to learn here and I'm not convinced it is the right tool for the web app I am building.  Time will tell.  One thing I find hard is finding good training on the subject.  If anyone knows of good sources for this, particularly when using with Ruby on Rails, please pass them along.

There are a few articles, and I have probably read them all, but like most things, especially in the UI world, lots of folks have their opinion on how it should be done, with each one having different tradeoffs.  More investigation is needed.

Till next week.

Sunday, September 20, 2015

Security . . . Really? Arggh!!

Prelude: This is a post I started writing a couple of weeks ago about my experience in adding authentication to a project I am working on.  It basically chronicles my thought process and how I explored and came to the decision of how I was going to implement this feature.  Whether it has much value to anyone other than my other team members I don’t know.  In the end I decided to post it mostly just to “complete the circle”.

Anyway, here goes:

Well, the Rails project I started a week ago has become a bit of a monster.  Basically it is a proof of concept right now.  The minimum goals are to allow a user to login to a web site or from a mobile device with either a registered account or through a service they are already a member of such as LinkedIn or Facebook.  

After last week’s effort, I had the test server up, so I decided the next step was to look at authentication, since I had a feeling it might have impacts on the data model. Also, since I needed to authenticate against other services (Facebook, LinkedIn, etc.), I knew I would not be able to easily roll my own authentication scheme. So I struck out looking for a gem that could meet my needs.

It didn’t take long to run across the Devise gem as it seems to be used everywhere.

Vanilla Devise

I installed Devise into my application per the instructions on their site

It was fairly painless and worked great for authenticating the rails web site but this application also has the requirement to provide secure REST calls for mobile apps.  

That’s where things got complicated. One way to support this, and the one I zeroed in on, was to use token authentication.  The Devise gem used to have support for that baked in but in the latest versions it was removed.  

In the Devise documentation they explain why token authentication support was removed and provided links to two gems that would add it back in, devise_token_auth and simple_token_authentication.

Devise_Token_Auth

I first started with devise_token_auth as it looked to be much more robust and, quite frankly, when something has the word “simple” in its title, like simple_token_authentication does, I read that as indicating that while it may be simple, it isn’t necessarily good for production.

I spent several hours adding the devise_token_auth gem in.  The first hurdle was that it sits on top of Devise and as such when I originally installed Devise, the database migration wasn’t quite the same (at least out of the box) as the database migration for devise_token_auth. 

Since I am just learning the intricacies of Devise and token authentication I blindly followed the installation and configuration instructions for devise_token_auth. That was probably my first mistake, not surmountable, but a mistake none the less.

I was expecting it to create a migration to add to my existing user schema, but instead it created a new migration that created the same table as my original Devise authentication table, which obviously would fail if I ran “db:migrate".  Uh oh!

So to get back to a stable state I decided to remove the original Devise migration and use the one devise_token_auth generated instead.  This caused me to have to dump the database and recreate it.  I also had to manually change the order of the generated migrations so the user account would be created first.
 
This gave me one of those “what would happen in production if I had to change the authentication scheme” moments.  I decided that would not be a good day for me.  At this point in my exploration I was getting the feeling I was doing something wrong or didn’t understand something very fundamental.

It turns out devise_token_auth really is Devise with token authentication added back in. So, in hindsight I should have just started with the devise_token_auth gem, and not included and configured vanilla Devise beforehand. 

Once I corrected my mistake and got everything back up, reran the migrations and seeded the database I ran into an error in the devise_token_auth code itself.  It seems that the version of the devise_token_auth gem I had wasn’t compatible with the version of Devise I had originally installed.  Welcome to gem hell.

And, because the devise_token_auth gem is packaged as an engine, many of its inner workings were hidden from me. To be honest, I like the idea of packaging gems as engines, if they work, but the concept was foreign to me and made troubleshooting a bit harder when troubles arose. That didn’t feel good to me.

At this point I could have removed the Devise gem from my Gemspec and let the devise_token_auth gem install what it needed, but at the time I guess I was too dense to know that was the best course of action, so...

Simple_Token_Authentication 

To make progress I decided to switch to the other gem, simple_token_authentication. As the name states, it was simpler. Another thing about it that felt more comfortable to me was that it isn’t packaged as an engine, so it was more what I was used to. Finally, it doesn’t replace Devise; instead it just enhances it a bit. Again, what I was expecting.

I followed the install instructions, added the before action, and created the migration as outlined on their site.  I was up and running again, the rails web site was now secured again and I had token authentication added in.  Next hurdle was a mobile client.

Mobile Integration

To do this I decided I would create a test iOS app that would first call the REST api on the rails app to sign-in, get the authentication token and then call a different REST method to get some data.  I figured if I could do this then I would have a basic setup that could be fleshed out further.

Unfortunately, I quickly ran into Apple’s ATS (App Transport Security) changes made in iOS 9. These changes impose the following requirements on server and client communication (from Apple’s site):


  • The server must support at least Transport Layer Security (TLS) protocol version 1.2.
  • Connection ciphers are limited to those that provide forward secrecy.
  • Certificates must be signed using a SHA256 or better signature hash algorithm, with either a 2048 bit or greater RSA key or a 256 bit or greater Elliptic-Curve (ECC) key.
  • Invalid certificates result in a hard failure and no connection.


The implication of this was that I would need to switch my test server to use HTTPS, which I did by reconfiguring the web server (sitting in front of the Rails app) and installing a self-signed certificate.

This created a different problem that I didn’t expect.  I could sign-in and get a valid authentication_token, but on the second REST call to get data I would get a string in my JSON saying "the certificate was not secure would I like to proceed anyway?"  

Apparently, a self-signed certificate isn’t good enough to pass the check list in ATS.  I googled around how to add exceptions to my app’s configuration.  There appear to be several ways to get around the problem.  I feel like I explored all of them but to no avail.

First, you can just allow all connections and disregard the invalid certificate problem by adding NSAllowsArbitraryLoads to your NSAppTransportSecurity section of your info.plist.  I did this first and my proof of concept app was up and running, albeit without any security checking.  

But, I know this isn’t the way to ship, and I figured if our project did go into beta production we would probably be using a self-signed certificate, so I dug deeper into the ATS configuration options.

According to the documentation, I should be able to set NSExceptionAllowsInsecureHTTPLoads to bypass the invalid-certificate failure I was getting on the second REST call. I tried many different variations and a lot of other suggestions I found on Stack Overflow, but I just couldn’t get it working. If anyone knows how (and has successfully done it), I would be very interested in what you did.
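
For reference, the shape of the per-domain exception I was attempting looks like this in the Info.plist source (the domain name is hypothetical):

<key>NSAppTransportSecurity</key>
<dict>
  <key>NSExceptionDomains</key>
  <dict>
    <key>myserver.example.com</key>
    <dict>
      <key>NSExceptionAllowsInsecureHTTPLoads</key>
      <true/>
    </dict>
  </dict>
</dict>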

Conclusion

In short, this is a synopsis of how painful adding token authentication has been. Looking back at this post, it doesn’t appear as bad as it was in reality. I think one of the things that really tripped me up, and continues to this day, is all the terminology the security gurus use. Just reading through the documentation on the various gems, they throw out a lot of terms I was not familiar with, which probably added to my frustration.

So in the end, I think once we get closer to production and we have a legitimately signed certificate all of this will go away.  But for now, until I can figure out how to do it the right way, we’ll have to have our iOS app continue to “punch” through the security settings with the NSAllowsArbitraryLoads option.  Not ideal but expedient.

My final thoughts are that if I was doing this over again, and had more time to research and try things out, I would start with devise_token_auth as it feels much more robust and thought out but, at least at this point, I’ll go with the simple_token_authentication gem so I can make progress on the rest of the app.