Sunday, November 29, 2015

The Results

So last week I threw down the gauntlet: I was going to use this down week from work to rework Pain Logger to use the Realm database engine instead of CoreData, and sync it all with CloudKit. Here are the results:

Monday

So I started looking at the existing code and realized I needed to reorganize it a bit. To make it easy to manage and maintain, my idea was to create a single DataService class and have all persistence requests go through it.

The original Pain Logger code had a fairly standard CoreData stack being stood up in the AppDelegate (capturing the MOC on the way) and then it used helper classes for each of the managed objects. These helper classes isolated persistence logic away from the managed object instances and out of the AppDelegate. There was one managed object type per table and therefore one helper class per object type.

So my first order of business was to move all the helper methods and the CoreData stack initialization to the new DataService class.

I actually decided to concentrate on one UIViewController (VC) at a time, and move just the persistence methods it used. That way I could see progress.

After moving the methods for my first VC, I tested and found everything was still working.

The first VC just shows a list of the top level objects in my database so it wasn’t that hard.

Now I needed to install Realm. My idea was that when the DataService stood up, it would not only configure its existing CoreData stack, but also stand up the Realm database in order to migrate all the records.

I installed Realm for Swift 2.1 per the documentation Realm provides when you download their code.

I also installed the other tools, such as the Realm plugin for Xcode and the Realm Browser Mac app.

The only catch was that after adding the “Run Script” phase (per Realm’s documentation) and trying a build, the build failed because the “strip-frameworks.sh” script didn’t have execute permission. So I opened up Terminal and added execute permission to that file, and all was good. Note: after you make this permission change, you need to do a project clean so you won’t be using a cached version of the script.
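For reference, the fix is a one-liner, sketched here on a stand-in file (substitute the actual path to `strip-frameworks.sh` inside your Realm checkout):

```shell
# Stand-in file for demonstration; in practice point at Realm's real script.
touch strip-frameworks.sh

# Grant execute permission so Xcode's Run Script phase can invoke it.
chmod +x strip-frameworks.sh

# Verify the permission took effect.
test -x strip-frameworks.sh && echo "executable"
```

Remember the project clean afterward, or the build may still use the cached copy.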

My next hurdle was creating Realm representations of the managed objects. For example, my Category class (which extends NSManagedObject) got a sister class called PLCategory (which extends Object).

Here was my first dilemma (and opportunity for improvement). CoreData doesn’t support enums, so all the enums in my data objects had to be converted to and from NSNumber objects. In Realm the answer to this is to override the variable’s getter/setter. For example, in the CoreData model assume you have this variable:

@NSManaged var line_color:NSNumber

I have an enum for this variable called LineColorType, so in the Realm object this becomes:

// Realm persists the raw value; app code uses the computed property below.
private dynamic var line_color = LineColorType.LINE_COLOR_GREEN.rawValue
var lineColor:LineColorType {
    get {
        return LineColorType(rawValue:line_color)!
    }
    set {
        line_color = newValue.rawValue
    }
}

The big change here is that application code will use the more standard camel-case variable, while Realm stores the raw value variable in its database.

My guess is I probably could have done this same thing with CoreData, but I’m not going to chase that right now.

Dilemma: There is a lot of boilerplate code left around to support CoreData, and now I am adding Realm on top of it; although the new code is tighter, it is still MORE code overall. When do I get rid of the boilerplate? I think the best approach is to finish all the changes and submit an update to the App Store; then, after about 6 months (once I feel confident all existing users have upgraded and opened the app, so the database has been migrated), I’ll remove the old CoreData code.

Day 1 Progress:
At the end of the day I have the model migrating to Realm. A good start I think. Tomorrow, I’ll start on the CRUD operations.

I ran into two issues for the day. First, how do I browse my Realm database? This StackOverflow link shows how to do that.
Second, my computed fields showed up in the database as well. It turned out I needed to mark them as non-persistent.
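Marking a field as non-persistent is done by overriding Realm’s `ignoredProperties()`. A minimal sketch, with a hypothetical runtime-only field `displayLabel` standing in for one of my computed values:

```swift
import RealmSwift

class PLCategory: Object {
    dynamic var name = ""

    // Hypothetical runtime-only field; listed below so Realm won't persist it.
    dynamic var displayLabel = ""

    // Realm skips any property named here when building the schema.
    override static func ignoredProperties() -> [String] {
        return ["displayLabel"]
    }
}
```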

Dilemma: Do I need a unique id on my objects? I decided I did, but I couldn’t just use an int because Realm currently doesn’t have an auto-incrementing id scheme (it supposedly is coming). Anyway, I chose to use NSUUID’s UUIDString in the interim. We’ll see how that goes.
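The interim id scheme looks something like this sketch, using Realm’s `primaryKey()` override (class and field names here are just illustrative):

```swift
import Foundation
import RealmSwift

class PLCategory: Object {
    // NSUUID stands in for the auto-incrementing id Realm doesn't have yet.
    dynamic var id = NSUUID().UUIDString
    dynamic var name = ""

    // Tell Realm which property uniquely identifies an object.
    override static func primaryKey() -> String? {
        return "id"
    }
}
```

A primary key also enables updating existing records in place later, which should help with the migration and sync work.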

Tuesday

For today, my goal was to flesh out the persistence layer. My idea was, along with the default Realm, to set up a Realm for caching data when the user is offline and a Realm to simulate the eventual online CloudKit support.

For testing purposes I plan to have a flag that I can turn on and off to simulate the app being in offline mode. Eventually this will be replaced by real code to check the availability.

As usually happens for plans like these, I ran into a snag.

I needed to add some test data, so I started using the app and realized there were parts of the UI that, after the conversion to Swift, compiled but didn’t actually work.

It turns out I had used an automated conversion program to convert some of the original Objective-C code so that I didn’t have to type as much, and it didn’t convert it as well as I would have liked.

So I spent most of the day fixing those issues.

By the end of the day I was able to add records as before, with them getting saved both to the CoreData database and the default Realm.

Wednesday

Today was a short day due to the preparations for the Thanksgiving holiday. My plan was to regroup and get more of the things I had planned to accomplish the previous day working.

One of the nagging issues I have is the need to keep the old CoreData code active while also adding the new Realm support. That way my existing ViewControllers can consume the old model objects until I am ready to make the transition to the new ones. But this has turned out to be more problematic than I had hoped.

Another issue is parent-child relationships. In the existing code, when adding a new child, the child object would first be added to the database, then some computed values on the parent object were updated, and finally the parent was saved.

This caused me to have multiple completion handlers that “chained” the updates. With Realm I can do all of that in one write transaction which is very helpful, but untangling the mess I created in the old code will take a bit of time.
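The chain of completion handlers collapses into a single transaction under Realm. A sketch of what I mean, with hypothetical `PLCategory`/`PLMeasurement` types and a made-up `total` field:

```swift
import RealmSwift

class PLMeasurement: Object {
    dynamic var value = 0.0
}

class PLCategory: Object {
    dynamic var total = 0.0                  // hypothetical computed-on-write value
    let measurements = List<PLMeasurement>() // hypothetical child list
}

func addMeasurement(child: PLMeasurement, toParent parent: PLCategory) throws {
    let realm = try Realm()
    // One write transaction covers the child insert and the parent's updates,
    // replacing the old chain of completion handlers.
    try realm.write {
        realm.add(child)
        parent.measurements.append(child)
        parent.total += child.value
    }
}
```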

Side note: Argghh! I’ve been “working” for about 3 hours and only got about 30 minutes of work in. I need to fill all these people up with tryptophan and me with caffeine. Unfortunately that won’t happen till tomorrow.

One thing that I don’t understand right now is whether I should cache the Realms I create. Are they expensive to stand up? Reading the documentation, it feels like they are not. So, to be thread safe for now, I will make the Realm accessors in my DataService computed properties like this:

private var defaultRealm:Realm {
    get {
       return try! Realm()
    }
}

Is this a bad idea? I’m not sure.

Thursday - Thanksgiving

Not sure I will get much done today. Too much food, family and football!!

Friday - Saturday - Sunday

Well as expected, too much family time and not enough development time. I’ll have to continue this effort next week.

I didn’t quite make the goal I had originally set out to achieve, although now I am much more comfortable with Realm, and I think this will be a very fruitful effort. I’ll make another post in a couple of weeks to update my progress.

Till next time,

Sunday, November 22, 2015

The Plan

Well, I made it through last week’s annual run of Competition Manager pretty much unscathed. The software worked flawlessly; however, as usual, there were last-minute requests for changes.

Competition Manager is a little different from the other products I have worked on: after the registration period closes, it has to be rock solid for a frantic 24-hour period, and then it is done until next year.

I always get requests for changes during that 24 hour period. It has always been my worst fear that a change request comes in that HAS to be implemented in the current run.

So far that has never happened, and although I got another change request this year, we were able to work around the concern and put it off till next year’s competition. Whew! Another crisis averted.

So with that behind me for another year, I need to turn back to my mobile app, Pain Logger.

I completed the conversion to Swift about 2 weeks ago. Now it’s time to upgrade it.

Fortunately for me, the holidays provide time away from my day job and allow me to (while of course spending time with family and resting up) look closer at some of my side projects.

I read somewhere that to really be productive, you should state what you intend to do and your goal date so others can keep you accountable. So that is the REAL goal of this post.

My Goal
My goal this week is to rewrite Pain Logger’s persistence layer to use the Realm database engine instead of Core Data. I intend to write it in such a way that an existing install will automatically migrate the existing CoreData database to Realm when the application launches and then on subsequent runs it will use the Realm database and not CoreData.

Once that is done I intend to stand up CloudKit support for the app. I intend to use Realm as the offline cache for the CloudKit database supporting the app.

So that is my plan; I intend to blog about my progress (which I hope to be complete) next week.

There, I now have placed the proverbial stake in the ground.

Now, why did I make the decision to go with Realm instead of using CoreData?

First, I wanted to learn something new.

Second, while CoreData works, I’ve always been put off by all the boilerplate code needed to stand a stack up, along with all the other moving parts you have to keep in mind as you work with it. It has always felt so “2000s”-ish to me. I want something more modern.

Realm seems to have that modern feel that I am looking for.

Having said all of that, I do, however, reserve the right to change my mind if this just turns out to be a really bad idea after getting into this.

So there you have it, until next time, here’s hoping for progress.

Sunday, November 8, 2015

Legacy Prawns

Ok, so I am coming to the close of my annual deployment of my Competition Manager application.

Right now registration is closed and the actual competition will happen this Friday.

In a way this is a bittersweet time. On one hand I am excited to see the culmination of my effort, but on the other it is a distraction from the other projects I am working on.

The project is a legacy app using Ruby on Rails version 3.2. I know I should update it to the latest version of Rails, but since it is not a paying project it’s hard to justify the effort.

At any rate, when the actual competition occurs this Friday, everything must work seamlessly, as the competition occurs over about 20 hours and all the scores and results must be collected, entered, calculated, and reported on during that time.

This is the critical time for Competition Manager as there really is no time to fix any bugs if they were to arise.

So I was doing my due diligence by testing the scoring and reporting modules of the application yesterday, and I realized there was an annoyance for the scorekeepers I should try to address.

In the past, after the scores for an event were entered, the user would save the scores and print the report. This caused a PDF file to be downloaded and shown in the browser.

Unfortunately this takes the scorer out of the application and forces them to save the report manually for later printing or print it right then.

I figured a better approach would be to download the file to the scorer’s computer as a separate PDF file without taking them out of the screen they were on. That way they could deal with all the reports at one time.

To do this I needed to do two things:
1. Give each event report a separate file name
2. Download the report instead of opening it in a separate browser window.

So this takes me to the crux of this post. My overall intent with these posts is to document things I learned or had to research to solve, so that I, for one, won’t have to re-learn them, and maybe in the process help others too.

Competition Manager uses an older gem called “prawn” for its PDF generation and “prawnto” to support templates.

Yes, I know there are better solutions, and even “prawn” has a newer version, but one week out from the actual competition I am not about to swap out a major component of the product.

So I had to figure out how to fix this with the current legacy code.

The way this works is that I have a route set up to serve the reports; once called, it retrieves the correct data for the report, then uses prawnto to load the template and generate the PDF. The original controller method looked like this:

   def event_results
       @event = Event.find(params[:event_id])
   end

So what would happen is the client would call this method with the event id, and then the template named “event_results.pdf.prawn” would be used to generate the PDF file that was returned to the client.

I knew I needed to set the filename and stream the file back to the client with the correct headers, but how to do it was hard to find. Here is what I eventually found would work:

  def event_results
      @event = Event.find(params[:event_id])
      prawnto :filename => @event.name + ".pdf", :inline => false, :template => "event_results.pdf.prawn"
  end

So now the filename is set to the name of the event (with a .pdf extension), :inline is set to false so the document will be downloaded, and finally the template to generate is specified.
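Under the hood, `:inline => false` boils down to a `Content-Disposition: attachment` response header, versus `inline` for display in the browser. A sketch of that distinction (a hypothetical helper for illustration, not prawnto’s actual code):

```ruby
# Hypothetical illustration of the header behind the :inline option.
def content_disposition(filename, inline)
  disposition = inline ? "inline" : "attachment"
  %(#{disposition}; filename="#{filename}")
end

# :inline => false downloads the file under the given name:
puts content_disposition("100m Dash.pdf", false)
# :inline => true renders it in the browser window instead:
puts content_disposition("100m Dash.pdf", true)
```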

So in the end a one line change solved the problem. I tested it, deployed it and the product is ready for action this Friday.

Till next time.

Sunday, November 1, 2015

Solving From A Different Direction

As of late I have been a bit remiss in getting these blog posts out the door.

Part of the issue has been I didn’t have a good blog creation solution. I have tried standalone apps, the provided editor from my blog provider and I even tried using different plugins to get the results I wanted.

This week I was documenting the REST API for my new web project and I realized that what I was doing there might solve the problem I was having here.

The problem has been how to show code snippets. So far, all the standalone blogging apps I have tried have failed in one way or another when I tried to attach code. In fact, it was so bad that in my last post I had to post screenshots of the code.

That’s not right, so I have been hampered by this problem for a while.

As I said, I was documenting the REST API for my new web project and I have been doing it in Markdown so that I could view it, nicely formatted, from the git repository. In it, I had to show an example of the REST call in CoffeeScript as well as show the resulting JSON that was returned.

Markdown has a very simple way of showing code snippets, but for me it wasn’t working exactly right. It was delineating the code, like I wanted, but it was showing it all on one line.

What I learned, after some investigation, is Markdown has different flavors. Oh the joy of the open source world we live in ;-/

Anyway, once I figured out the syntax for the particular flavor of Markdown my git repository supported, I was able to get the code snippet formatted properly. So my snippet of the REST call looked like this:

     $.ajax({
        url: "/goals",
        dataType: 'json',
        type: 'POST',
        data: {
          goal: ...
        },
        success: function(data) {
          ...
        },
        error: function(xhr, status, err) {
          ...
        }
     });
With this working it got me to thinking. What if I just wrote Markdown documents, and then exported them to HTML and pasted them into my blog? Would it work?

So today’s post is mostly a proof of concept of that. I can already see one downside and that is I’ll need to keep the Markdown versions locally, if I want to make any edits. It pretty much makes the editor on the blogging site useless.

I found several online editors that can take Markdown and export the HTML. Another requirement was that this HTML file had to be a single file, otherwise it would be hard to cut and paste it into the blog application.

After trying JavaScript, Swift and Ruby code I was pretty confident this could work. However, I also needed to show ReactJS code as well.

This has been the code that has presented the most challenge to the various solutions I have tried. The reason, I think, is that (since I use JSX syntax) the code starts out as JavaScript but then, in the “render:” method, turns into XML/HTML.

All this works because of the “JSX” compiler.

However, I have not found a standalone app that handles this well. Admittedly, I do need to go back and see what support the standalone apps have for Markdown, since I now think that is the right format to use. At any rate, here is a simple JSX file:

var Page = React.createClass({
  getInitialState: function() {
    return {goals: []};
  },

  componentDidMount: function() {
  },

  render: function() {
    return (
        <div className="col-md-10 main defaultheight">Page
        </div>
    );
  }
});

I was pleasantly surprised how well this worked.

There is another advantage in using this scheme and that is any documentation I write for my iOS projects can also be done in Markdown (well a flavor of it).

So in the end the fix to a problem I was having for a different issue (that of documenting the REST API) may also solve the problem I have with including code snippets in blog posts. Anytime I can have one solution that solves two issues, I call that a win!

Till next time.