Inspiration while in a physical rut.


I’ve been working professionally as an iOS developer since October 2016. Before deciding to take that journey, I was a teacher. It was actually something completely unrelated to either career that helped give me the courage to make a career change at 37: joining a CrossFit gym.

For most of my 30s I was pretty complacent in my job as a teacher. I wasn’t bad, but I wasn’t motivated to be particularly good at it. I knew my content, had a decent rapport with students, and earned a respectable living. However, I was bored and frustrated.

After nine years of teaching, I didn’t feel like there was much more for me to learn. Actually, a better way of putting it is that I didn’t have any desire to learn more. Hence the boredom. So, I quit, went to a bootcamp in Utah, and wound up starting my dev journey at Emory University in October 2016.

For those 21 months at Emory I was a newbie, surrounded by fellow junior developers, and we just hacked away at solutions as we came upon different problems. Eventually we started to figure out how to structure our code better, learn some advanced concepts, embrace cleaner code, and generally move away from “junior” status.

During that time, CrossFit remained a fixture in my life. It balanced my heady day job with some much-needed physical demands, so that mind and body were both well worked by day’s end. For me, programming and CrossFit just go together.

Last June I took a new position with Rheem Manufacturing, and while I had to change CrossFit gyms to accommodate the new work location, life went on much the same way. My workouts were now in the morning rather than the evening, but I made sure I was using both mind and body in my day-to-day life.

Whether it was the advent of turning 40, years of competitive swimming, or maybe something I tweaked in CrossFit, in December I had to undergo surgery to repair my right shoulder. At first, it was thought to be some arthritis and a bone spur, but once in surgery they found more damage, namely a torn labrum. Recovery time was initially estimated at 6-8 weeks, but given the extent of the tear, it grew to 3-6 months.

While my relationship with code and learning hasn’t subsided, my physical daily routine took a massive hit. Where I once was able to lift a fair amount of weight for several reps, I’m now at a point where I can’t do a simple pushup. I go to CrossFit when I can, but lately, knowing how little of it I can do, my effort has been half-hearted. That mind-body balance has suffered.

Strangely, it was a slow process. I hadn’t even noticed it until I was stuck on a problem at work and realized that I wasn’t putting in the required effort to solve it. Where early in my career I once spent days figuring out how to do something with dogged determination, I was now trying to rationalize why I couldn’t get something done. I went home from work on Friday pretty dejected. The drive and motivation I’d gained through workouts, and through finding a career I truly didn’t tire of learning from, was dying. I went to bed on Friday night wondering if I was in a rut, or if this is just as far as my talents will take me.

Inspiration, though, comes from some surprising places. I spent this past weekend watching some documentaries on Netflix and Hulu. I’d heard about “Free Solo” winning the Academy Award for Best Documentary, and Hulu had it available, so I watched as Alex Honnold became the first person to free solo El Capitan in Yosemite National Park. For the uninitiated, free soloing is climbing alone, without protective aids. He accomplished the feat in just under four hours.

The documentary, though, shows the years of preparation he went through to achieve this climb. He actually has a great TED Talk on the difference between his successful free solos of Half Dome and El Capitan, in which he explains how the level of prep for El Capitan left him feeling much more satisfied with the achievement.


I also watched “The Dawn Wall”, another documentary about climbing El Capitan, though a different section of it. Tommy Caldwell and Kevin Jorgeson spent over three weeks scaling the Dawn Wall section of El Capitan. The documentary chronicles their six-year journey to achieving that feat.

Both movies describe the ups and downs, the years of prep, and the mental stamina required of the climbers. While I don’t necessarily have the ability or desire to climb El Capitan, I am still in awe of their achievements and look to apply those same traits to my own life, namely in how adversity affects our ability to move forward from difficult circumstances.

So it appears I’ve been having a sort of pity party lately. The physical part of my life that the mental part needs in order to work at its peak hasn’t been getting the attention it requires, and I had given up attempting to give it what it needed unless I could do it the same way I always had, namely CrossFit. My shoulder is about 85% recovered, but extremely weak. I’ve been cherry-picking workouts because some of them I couldn’t do, even modified. Modifications have gotten boring. I could continue with excuses, but suffice it to say, I’ve been taking the easy way out with getting my body the exercise it needs. Watching these three gentlemen approach their difficulties and continue to try new things to get past them may be exactly the inspiration I’ve needed to remind myself to do the same. After all, history shows my mind is better at solving problems when my body is worked out properly.

It also occurs to me that I need to get back out to Yosemite sometime soon to reconnect with the magic that exists in that valley.

P.S. “Valley Uprising” was also a great film to watch and I highly recommend it, but it didn’t connect with me quite like the other two did.


Still coding along…


My last blog post was in March. I currently have 3 posts sitting in my drafts folder waiting to be revised and published. The nature of work and family, though, usually winds up pushing these posts back to a future, undetermined date. Needless to say, I’m still coding along.

Much has happened since my last post on here. I took a new position in late June with Rheem Manufacturing working on their app, so my time at Emory (and my first real dev job) ended in mid-June. I still look back in amazement at the amount I learned in the time since I finished The Iron Yard; 21 months in a dev job just sort of demands it. Still, I was nearing the end of a project and ready for a new challenge. After several online interviews, a few phone interviews, and a couple of face-to-face interviews, I accepted a position with Rheem. If you’re not already aware, Rheem manufactures air conditioners, furnaces, boilers, and, what it’s probably best known for, water heaters.

Manufacturing is beginning to embrace the mobile space, and our app, while consumer facing, is actually geared toward a different consumer: the contractor or plumber installing your a/c unit or water heater. It’s an interesting viewpoint, and as Rheem has spent the time to immerse me in its different industries, it has given me time to think about how best to serve the users of the app in an installation environment.

Part of my training was pair programming with the development firm Stable|Kernel. They initially built the app, and I was hired to assume future development. They’ve spent the better part of three years on it, so it was enjoyable to look at their development process, ask questions about how they attacked different requirements, and ultimately learn their approach to problem solving. Code reviews, both formal and informal, discussions of design patterns, explanations of some of the more intricate parts of Xcode I had never really dived into before: it all felt like my 21 months at Emory compressed into 12 weeks. One of the biggest takeaways was that they switch back and forth between architectures as the requirements see fit. While most of the app uses MVC, there were some elements where they went with MVVM, as the nature of the task made it more readable to me and to anyone else who might look at the project in the future. I liked the flexibility. I was so stuck on trying to make my previous work fit into one architecture that I’m sure I actually made things worse. It’s nice to see a different viewpoint.

Now that my time with them has ended and the training wheels have come off, so to speak, I feel much more grounded in my abilities. Emory was a great jumping-off point; Rheem is the next step in progressing. I look forward to the challenge.

On a side note, I may discontinue my use of WordPress for my blog. I’ve discovered an alternative whose markdown capabilities I like more, especially as I write more about code. When that first post comes, I’ll link to it from here. My queue of drafts on Codable, interview musings, and Coordinators is still forthcoming…

Wandering Down The Merge Sort Recursion Trail…

Last year I wrote about recursion. It was a concept that baffled me. How was it keeping track of all of those return values? I learned what a stack was, and how the calls hung out in the stack until they were ready to be used/popped off/returned…pick whatever verb you like. However, that post wasn’t my ultimate goal.

My ultimate goal was to figure out what was going on behind the scenes in a merge sort algorithm. I was introduced to this algorithm in a Coursera course I took last October, and though the course didn’t require us to implement it, they discussed it in detail and I was lost.

I don’t mind getting lost, but I do mind staying lost, and unfortunately these past few months I just haven’t made the time to sit down and learn this properly. I tried half-heartedly at the first of the year, but I found something else that piqued my interest and this topic was pushed aside.

Well no more! A coworker has been working feverishly on algorithms and data structures the last couple of months and it inspired me to give this a try, so here goes….

The Merge

Merging two items seems straightforward. I have a set of numbers here, I have another set there and I want to merge them into one set (I’ll use the word array from here on out, but in non-programmer/math parlance, it means the same; it is a collection of something).

When I was first introduced to the merge sort algorithm, this was the second part taught, which, when I thought about it later, seemed backwards. This function does a great deal of work, and it’s explicit: reading the code, you can easily identify what it’s doing. While it’s a function called at the very end of the mergeSort() function, without it, nothing would happen other than a stack overflow. So, to help me explain it to myself, I prefer to look at this function first.

Here is what goes on in the merge function:

  1. It takes two arrays of values (we’ll use numbers for our examples)
  2. It will merge those two arrays into one array, ordered from smallest number to largest
  3. It will run in O(n) time as it has to look at every value in the two arrays
  4. It will return an ordered array at the end

That’s pretty much it, though step 2 is quite involved. We need to evaluate a number in each array and compare them to each other. Whichever number is smaller, we add it to a new sorted array. Step 2 needs to handle a case where the number in array1 is smaller than the number in array2, a case where the number in array2 is smaller than the number in array1, and a case where both numbers are equal.

So how do we keep track of two numbers at the same time? How do we know when the comparison is done? Well, if we’re not exactly sure when something will end, a while loop makes sense; we just need a terminating condition so the loop actually stops.

Credit: Ray Wenderlich Algorithm Club for the code for merge and mergeSort.

To keep track of where we are in both arrays, we set index values. These also serve as our terminating conditions for the while loops. The first while loop is where most of the work happens. The two remaining while loops handle the case where the arrays are of differing lengths and there are no more comparisons to be made between the two arrays; they simply append all the remaining values to the sorted array.
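Since the code screenshot isn’t reproduced here, here is a sketch of the merge function as described above. The original code (credit: Ray Wenderlich Algorithm Club) is Swift; this is a Python rendering of the same structure, with names of my own choosing:

```python
def merge(left_pile, right_pile):
    # Index values track where we are in each pile; they also
    # serve as the terminating conditions for the while loops.
    left_index, right_index = 0, 0
    ordered = []

    # First while loop: this is where most of the work happens.
    # Compare the current value in each pile and append the smaller
    # one (or both, when they are equal) to the sorted array.
    while left_index < len(left_pile) and right_index < len(right_pile):
        left, right = left_pile[left_index], right_pile[right_index]
        if left < right:
            ordered.append(left)
            left_index += 1
        elif right < left:
            ordered.append(right)
            right_index += 1
        else:
            ordered.append(left)
            left_index += 1
            ordered.append(right)
            right_index += 1

    # Remaining while loops: when the piles differ in length, one of
    # them still has values left; simply append the rest. At most one
    # of these loops actually runs.
    while left_index < len(left_pile):
        ordered.append(left_pile[left_index])
        left_index += 1
    while right_index < len(right_pile):
        ordered.append(right_pile[right_index])
        right_index += 1

    return ordered
```

For example, merge([2, 4, 9], [3, 5, 7]) walks both piles once and produces [2, 3, 4, 5, 7, 9], in O(n) time.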

So all of this happens when two arrays are passed in. However, the arrays that are passed in need to already be in sorted order. By that I mean we can have arrays such as [2, 4, 9] and [3, 5, 7] and the merge function will work just fine, but we can’t have something like [2, 1, 0] and [7, 5, 8], because really, all the merge function does is interleave values from the two arrays. The number 2 in the first array is never compared with the second number, 1, in the same array. It’s only evaluated against the numbers in the other array. So how do we fix this?

The Merge Sort

What if the leftPile array and rightPile array required by the merge() function only had one value each? The arrays wouldn’t have to worry about values within their own array, and the comparison made against each other would work fine. After all, merge() could handle [5] and [2] without an issue and add those numbers into the orderedArray pile without a problem. So if we divide up an original array of, say, [2, 1, 5, 4, 9] so that merge only has to compare arrays with one number each, the merge() function will work without worry. So how do we divide our single array into a bunch of small arrays with only one value each? Recursion.

And this is where I struggled mightily to understand what was happening. I knew what the program was designed to do: break one large array into a bunch of single-value arrays. But I didn’t understand how it was returning values from orderedArray in the merge function all the way back to the original call. It was magic and my brain was stumped, which led to studying how recursion works on a small scale, such as the Fibonacci sequence. From there, and from learning about the call stack, I finally discovered what was happening. With merge sort the same thing is going on, but because it’s still all so new to me, it took me a while to understand.

In order to explain, let me show the code that is actually called to run merge sort.

leftArray is calling the function it resides in again….and again…and again until a value eventually is returned to it.  Credit: Ray Wenderlich Algorithm Club for the code for merge and mergeSort.


  1. Line 38 is our terminating condition. Since this function calls itself, it needs some way to know when to stop. If the array only has one element in it, return that array and end the recursive call.
  2. Line 39 splits the array in half. This forms our left array and right array, as seen on lines 41 and 42.
  3. Line 44 returns the merged, sorted array.
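Again, the screenshot isn’t reproduced here, so here is a Python sketch of the mergeSort function with the structure described in the steps above (credit for the original Swift version: Ray Wenderlich Algorithm Club; the line numbers in the list refer to that screenshot and will differ here). A compact merge is included so the sketch runs on its own:

```python
def merge(left_pile, right_pile):
    # Compact version of the merge function discussed earlier.
    ordered = []
    i = j = 0
    while i < len(left_pile) and j < len(right_pile):
        if left_pile[i] <= right_pile[j]:
            ordered.append(left_pile[i])
            i += 1
        else:
            ordered.append(right_pile[j])
            j += 1
    ordered.extend(left_pile[i:])   # append any leftovers from the left pile
    ordered.extend(right_pile[j:])  # ...or from the right pile
    return ordered

def merge_sort(array):
    # Terminating condition: an array with one element (or none)
    # is already sorted, so return it and end the recursive call.
    if len(array) < 2:
        return array

    # Split the array in half to form the left and right arrays.
    middle = len(array) // 2
    left_array = merge_sort(array[:middle])
    right_array = merge_sort(array[middle:])

    # Return the merged, sorted array.
    return merge(left_array, right_array)
```

Calling merge_sort([2, 1, 5, 4, 9]) first splits into [2, 1] and [5, 4, 9], keeps splitting down to single-value arrays, then merges on the way back up.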

Deceptively simple, right? Well, yeah, it was to me. I immediately looked at the code and thought: okay, the leftArray = [2, 1] and the rightArray = [5, 4, 9]. But I thought for merge() to work, the arrays had to already be in sorted order? Yeah, I was confused.

As it turns out, we need leftArray to be [2] and rightArray to be [1] for our merge to work, and it does. The recursive call is doing exactly that. Allow me to explain.

Pardon my outdated PowerPoint skills…I haven’t had to use them in awhile…

leftArray continues to call the mergeSort function until it eventually receives a value back. If there are too many recursive calls, the program will crash (stack overflow), but the picture above basically represents the call stack. Red arrows denote that the leftArray (steps 1 & 2), or in one case the rightArray (step 4), has not finished evaluating yet and makes another call on itself. The green arrows represent a value returned back to leftArray (steps 3 & 6) or rightArray (step 5). NOTE: this picture only represents finding a value for leftArray in the initial call on the left. We haven’t even started evaluating the initial rightArray variable. (I’d encourage you to walk through the process on paper : ) )

I would not recommend using a red pen — looks angry!  : )

Nested call, continued call, whatever you want to call it, this is what was lost on me. I could not visualize it in my head. Only when I sat down and started sketching it out on paper did it start to make sense. The biggest takeaway for me is this: in the initial call, the rightArray never even gets evaluated until the leftArray finally has a value returned to it from the recursive calls, and the value returned is in sorted order. It may take 2 recursive calls, or it may take 15, but lines 42 and 44 in the code snippet above remain waiting until line 41 finally has a value.


If none of this makes sense, I’d invite you to read my initial post on recursion here. I think the only way I’ve been able to keep track is to write it on paper and tack it up on a wall so I can follow the recursion trail. Keep in mind that I’ve been trying to make sense of this off and on for a few months, and I’m sure as soon as someone asks me a question, I will still probably mess up the answer! But a lesson I learned years ago in college, struggling through sequences and series in Calc 2, is this: there is an answer, I can find it, but it might take a while. Don’t measure others’ pace against your own.

For those more advanced in this than I, I did stick with a top-down approach for merge sort. I know a bottom-up exists, but I want to make sure I have a thorough understanding of this implementation first before moving on.

Recursion…a slow-learner approach…



I think everyone has topics they pick up quickly. In high school, trigonometry just made sense. I attribute most of that to an amazing teacher, but sometimes you can just “see” the answer as you’re looking at a problem. It’s intuitive and very hard to explain why you know that your approach is going to work. It’s frustrating to your friends who ask “How did you do that?” The token answer is “I’m not really sure. I just followed the steps I was taught.”

I feel that way sometimes when I read others explain certain approaches to algorithms. I was asked once to write an algorithm that finds a sequence of Fibonacci numbers. I had memorized a recursive formula and reproduced it quickly…feeling proud of my mad memorization skills (note: I’m actually horrible at memorizing things. I had a biology teacher in high school who can attest to that, as well as a lit teacher who saw me struggle mightily to recite a monologue from Romeo and Juliet).

Then he asked me to explain what the algorithm was doing. I balked. “uh….well, the program keeps calling itself until it finds the answer.” “How does it know what the answer is?” “Well…..”

To his credit, he could have really lit into me for using a procedure I didn’t fully understand, but he didn’t. He also recommended me for the next round of interviews, so it wasn’t a deal-breaker, but it taught me an important lesson. Memorizing to “fake it till you make it” only works for the short term. If asked to create another program using recursion, I’d have had to end the interview. I had no clue.

So here I am, a year and some months later, and recursion is still one of those magical concepts that looks like voodoo but, with some extra understanding of memory usage, is not a particularly difficult concept to comprehend, even if implementation still remains a mystery.

Borrowing some explanations from Aditya Bhargava’s “Grokking Algorithms”, I’m going to attempt to explain the mad science of recursion.

My daughter is constantly losing her clothes. She’ll always have items scattered all over the house, but has no idea where everything is. As a seven year old, her sleuthing skills are lacking. She gives a cursory glance of her bedroom, and immediately gives up and declares her clothes lost. So I ask her to make a list of all the rooms she’s been in since taking her clothes off yesterday and putting on her pajamas.

  • kitchen
  • bedroom
  • closet
  • bathroom
  • my office

So I ask her, well, you know the clothes have to be in one or more of the rooms, so how are you going to find everything you need?

  1. Go into room
  2. Look in all nooks and crannies of that room
  3. If all items found, I can go play
  4. If not, go to the next room on my list and repeat

Note: this still only works 1/2 the time because life is super stressful for a seven year old who doesn’t want to look and only wants to play, but still, the process remains true

So she goes into the kitchen, checks the pantry, checks under the table, checks the chairs, and checks the recycling bins (don’t ask, but seven year olds are weird creatures) and finds her socks.

“I found socks.” “Okay, go to the bedroom”

She starts off in the closet, searches under the bed, searches her bathroom she shares with her sister and then her chest of drawers. Lo and behold, she’s found her shoes. The logic behind this eludes me.

Now that she’s properly shoed, she proceeds through additional rooms until she eventually has been able to clothe herself. Finding one item is useful, but she’s not leaving the house with some of her items and not others. Satisfied that she can face her friends without ridicule, she goes off to enjoy her afternoon fun time where she solves big problems and plays in her make believe world.

What she’s really done is a recursive search for her clothes. She has a list of places to search, goes through the list, and stops when she’s found what she needs. Having found everything she needs is the base case: it tells her when the search can stop.

So recursive functions have base cases. In that regard, they aren’t much different from a while loop: while some condition is false, keep searching; when it’s true, stop the search. The recursive application comes from each item being needed in conjunction with every other item. A shirt is helpful, but a shirt plus pants, shoes, socks, and a jacket are all necessary before she’s ready to venture out the door.
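Her room-by-room search can be sketched as a toy recursive function. Everything here (the function name, the room/contents shape) is my own invention purely to illustrate the base case and the recursive call:

```python
def find_clothes(rooms, still_missing):
    # Base case: nothing left to find, so the search stops.
    if not still_missing:
        return []
    # Base case: no rooms left to check, so we have to stop
    # (with or without everything we wanted).
    if not rooms:
        return []

    # Step 1 & 2: go into the first room and look everywhere in it.
    room_name, contents = rooms[0]
    found = [item for item in contents if item in still_missing]
    remaining = [item for item in still_missing if item not in found]

    # Step 4: otherwise, go to the next room on the list and repeat.
    # This is the recursive call.
    return found + find_clothes(rooms[1:], remaining)
```

For example, searching [("kitchen", ["socks", "cereal"]), ("bedroom", ["shoes"])] for ["socks", "shoes"] finds the socks in the kitchen, then recurses into the bedroom for the shoes.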

In programming, something similar happens. But what happens with all those continued recursive calls? Are the numbers magically lost? Do they somehow keep track of themselves in the background? Is it part of the voodoo? Not exactly.

What I didn’t know is that the function keeps track of its calls in something called a stack. A stack is short-term memory usage: when a function is done, the memory that was allocated for it is released. However, while the function has not yet completed, it still occupies space on the stack. With recursive functions, each recursive call puts a new function call on the stack. The stack behaves in “last in, first out” order: whatever allocation was made last is the first one back out when its function finally completes and the memory is no longer needed. To put it another way, the top function has to finish first before any of the other calls below it in the stack can finish processing. I like to think of a can of Pringles here: the last chip put into the can is the first one back out when the can is opened.

Every time you call a function, the computer saves the values of its variables in memory on the stack. When you call a function from another function, the first call sits at the bottom of the stack, with its values still being held. The next call goes on top of it, holding its values too. Here is a graphic example for the Fibonacci function.

As an example, the first few Fibonacci numbers are: 0, 1, 1, 2, 3, 5, 8, 13, 21

First: the function…

func fib(n: Int) -> Int {
    if n <= 2 {
        return 1
    } else {
        return fib(n: n - 1) + fib(n: n - 2)
    }
}
Note: my code above will not give an accurate answer for fib(0), which should be 0. I purposely did this to simplify my demonstration below.

So if I call fib(4), the resultant answer will be the fourth number in the fibonacci sequence, which is 3. Here is how it works…

This is the first call. fib(4) is typed in and n is assigned the value of 4. Looking through the function, n is not less than or equal to 2, so we go straight to the else statement. We need fib(4 - 1) + fib(4 - 2)…which we don’t yet have.

Our stack looks like this: fib(4) sits on the stack. In fact, it’s the only thing on the stack, and it’s holding the parameter value we called the function with. Well, we don’t have an answer for fib(4 - 1) handy, and the same holds true for fib(4 - 2), so we start with fib(4 - 1), or fib(3), and go through the same process. We’ve kept fib(4) on the stack to return to later so we can finish out the original problem; now we evaluate fib(3).

fib(3) is now also placed on the stack, on top of fib(4) in fact, and it goes through the same evaluation process. Our stack now looks like:

fib(3) keeps hold of its parameter value on the stack, which may not seem important at the moment, but we’ll get there. The parameter 3 is not less than or equal to 2, so we move to the else statement, where we’ve instructed it to return fib(3 - 1) + fib(3 - 2), or rather fib(2) + fib(1). Since we don’t yet know the value of fib(2) or fib(1), we evaluate fib(2) and push another function call onto the stack.

As you can see, our stack is starting to grow. If these recursive calls continued indefinitely, we’d eventually run out of memory and cause a “stack overflow”, meaning there is no more room to put our function calls on the stack. However, every time a function can finally complete, in our case by hitting the base case, it gets popped off the stack. Here, fib(2) satisfies the base case of being less than or equal to 2, so it returns the value of 1. That value is substituted in place of fib(2) in the return statement of the earlier fib(3) call, and fib(2) gets popped off the stack.

So now our stack contains only fib(3) and fib(4). fib(3) asked us to return fib(2), which we found to be 1. Anytime a function completes, it is removed from the call stack, and the values required by other functions are handed back so they can be used to finish their respective work. In the case of fib(3), we’ve returned half of what it needs by finding fib(2), but it also requires fib(1). This means another function call, fib(1), and another addition to our call stack. So fib(1) goes onto the stack and awaits evaluation.

fib(1)’s evaluation meets the base case, so it returns 1 as well. With evaluations for fib(2) and fib(1), fib(3) can finally complete and return an answer of 2. fib(3) returns 2 and leaves the call stack, so we’re now left with fib(4).

At this point you think, “Great! I have everything I need.” Not quite. fib(4) requires a return of fib(3) + fib(2). Well, we’ve only evaluated the fib(3) part. fib(2), although evaluated earlier as we travelled down the fib(3) trail, is no longer available to us; it returned its value to fib(3) and was removed from the stack. So now we must evaluate fib(2) again, obtain the returned value of 1, and add it as part of the fib(3) + fib(2) answer: 2 + 1, which gives us 3. The fourth Fibonacci number is indeed 3.

Here is a broader picture of what is going on, step-by-step:
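Since the step-by-step graphic isn’t reproduced here, a small trace can stand in for it. This is a Python rendering of the Swift function above, instrumented (my addition) to print each push onto and pop off the call stack, with indentation showing stack depth:

```python
def fib(n, depth=0):
    indent = "  " * depth
    print(f"{indent}push fib({n})")
    if n <= 2:
        # Base case: fib(1) and fib(2) are both 1.
        result = 1
    else:
        # Recursive case: this call waits on the stack until
        # both of these calls have returned.
        result = fib(n - 1, depth + 1) + fib(n - 2, depth + 1)
    print(f"{indent}pop  fib({n}) -> {result}")
    return result

fib(4)  # traces the pushes and pops described above; the result is 3
```

Reading the output top to bottom, you can watch fib(3) get fully evaluated (including its own fib(2) and fib(1) calls) before fib(4) turns to its second term, fib(2), exactly as in the walkthrough.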

With recursion, small recursive calls aren’t that big of a deal. Yes, it takes up more memory and repeats calls already made, but with small numbers the runtime differences aren’t even noticeable. However, if you tried to compute fib(100) this way, you’d be waiting essentially forever. The stack itself never gets more than about 100 calls deep, since each branch works its way down to the base case and pops off; the real problem is the number of repeated calls, which grows exponentially because you keep having to re-find values you found before. Just to calculate fib(100), you’d have to find fib(99) + fib(98). fib(99) would have to work all the way down to fib(1), and only then could you start working on fib(98) and repeat the whole process.

This is a better example of how many calculations must be computed using recursion.

Notice how many times fib(4) has to be computed just to solve fib(7).
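You can verify that count by instrumenting the function to tally its calls. Again this is a Python sketch, with a counter passed in purely for illustration:

```python
from collections import Counter

def fib(n, counts):
    counts[n] += 1  # tally every call made for this n
    if n <= 2:
        return 1
    return fib(n - 1, counts) + fib(n - 2, counts)

counts = Counter()
fib(7, counts)
print(counts[4])             # fib(4) is computed 3 times just to solve fib(7)
print(sum(counts.values()))  # 25 calls in total for this one answer
```

Those repeat counts are exactly what the memoization table described below eliminates.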

There is a workaround called a memoization table, which stores each computed fib value. The function checks the table first to see if a value already exists before trying to compute it. If it does exist, it uses the stored value; if not, it computes the value and stores it in case it’s needed later. This reduces the number of computations needed and is very quick. I won’t be demonstrating it today, but will in a future post.

I blogged about this topic to get to a topic I’ve really been wanting to discuss: sorting algorithms. Merge sort was the first sorting algorithm introduced to me that used recursion, and it completely baffled me. Quick sort actually made more sense, but it was introduced later. I wanted to blog about both, more to make sure I could explain them to myself than anything, but also to help those poor souls who suffered as I did when trying to wrap their heads around what was going on. In order to do that, I needed a better understanding of recursion. It wasn’t until the stack was explained that recursion made any sort of sense. Up until then, I considered recursion to be magic. It still is magic, just not as baffling as before!

How I wish I had learned algorithms…

I’d like to watch Sheldon in an algorithm interview…

It was pretty clear after a couple of phone interviews last year that coding your way through a problem testing your knowledge of algorithms is a common screening practice for most companies. Truthfully, I don’t blame them. The quality of instruction in software development varies, whether it’s online tutorials, bootcamps, university courses, or what have you.

So after a few attempts at these questions, and after performing my way out of a follow-up interview, I started asking others how to go about learning them. I received various answers, from “Buy ‘Cracking the Coding Interview’ and study it” to one interviewer telling me to work through an algorithms textbook for about six weeks and I should be good to go. Some suggestions were more helpful than others. “How to Solve It by Computer” by R.G. Dromey was actually helpful for some introductory chapters. The person who suggested it also suggested I hold off on Steven Skiena’s “Algorithm Design Manual” until I felt a little more comfortable with the basics.

I purchased a yearly subscription to the website Interview Cake and went through 8 or 10 problems over the year, but I always felt lost and didn’t know how to set up the problems, much less solve them. I felt I was memorizing things without really understanding them. If a problem had changed slightly in an interview, I would probably have been lost.

I even made attempts at Coursera’s offerings. Stanford, Princeton, and UC San Diego all had algorithms courses, and at one point or another I signed up for each and promptly forgot about it. It wasn’t until I had to pay for a monthly membership to Coursera that I took it more seriously.

Last week I completed the Algorithm Toolbox course through UC San Diego. Weeks 1-3 were pretty straightforward, and I had little difficulty completing the required tasks, which would have been very difficult for me a year ago; that was encouraging! Weeks 4-5 were much more confusing. Part of that was due to my watching the tutorials late at night when my brain was done for the day. Part was due to not really understanding the pseudo-code presented, nor the mathematics of the algorithm being explained in the video.

I feared at the end of week 4, where I struggled mightily with solving a merge sort problem, that this would be another course I didn’t complete, though not for lack of effort. Try as I might, the recursive tasks being performed baffled me. Neither the videos nor the pseudo-code made sense, and StackOverflow showed answers to similar merge sort problems, but no real explanations. (Note: my next post will go over the 4 problems that I need to understand, but had difficulty figuring out.)

Week 5 was even worse. The concept of dynamic programming made sense, but implementing it was just baffling. While we normally get a week to solve our homework, the course is 6 weeks long with only 5 weeks of assignments to allow for a cushion to complete any of the weeks’ tasks. Week 5 took me two weeks to get through.

The first week was similar to week 4. Watch video, see the demo, take notes, try to implement the exact problem on my own before even looking at the homework. Sadly, I couldn’t even complete the sample problems they had. The pseudo-code confused me.

Fortuitously, I texted my bootcamp instructor from The Iron Yard just to check in and see how things were going. We chatted back and forth for a bit and then our topic landed on algorithm questions. He mentioned he had just bought a copy of “Grokking Algorithms” by Aditya Bhargava and said that if I was struggling with dynamic programming, I might want to give it a try. It takes the math out and uses pictures and demonstrations to give an overview of each topic instead of a deep dive.

I didn’t know this, but ‘grok’ means “to understand intuitively and thoroughly; to communicate sympathetically”. How true that turned out to be; the book felt like it was written with me in mind!

Over the weekend between weeks 5 and 6 I flew through my copy of the book. I started at the beginning even though most of the material was familiar to me. This book presented the best explanation of recursion I have seen. It drew pictures, it made me draw pictures, and it asked good questions that helped me see the concept better. Finally, I got to the dynamic programming section, and sure enough the example he went through was almost identical to one of my homework problems. The difference, though, was that he offered no code, just a walkthrough of how the algorithm works–again, with pictures and a couple of practice examples for me to complete–also by drawing it out on paper.

I was able to complete the homework assigned for the final week and finish the course. The book didn’t give me the answers, or even any sort of pseudo-code, but it did give me an image to hold onto for understanding what happens in the memoization table used to solve the question. As an added bonus, it went through Big-O notation in each chapter. This was something I felt pretty good about before I read it, but it further cemented my understanding.
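To make that table idea concrete, here’s a rough Python sketch of the kind of grid-based dynamic programming the book walks through. This is my own bottom-up knapsack code, not anything from the book or the course, and the item values are made up for illustration:

```python
def knapsack(values, weights, capacity):
    """Bottom-up DP: cell [i][c] holds the best value achievable using the
    first i items with capacity c. Each cell is filled from earlier cells,
    which is exactly what the memoization grid drawn on paper represents."""
    n = len(values)
    table = [[0] * (capacity + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for c in range(1, capacity + 1):
            best = table[i - 1][c]                  # option 1: skip item i
            if weights[i - 1] <= c:                 # option 2: take item i, if it fits
                best = max(best, values[i - 1] + table[i - 1][c - weights[i - 1]])
            table[i][c] = best
    return table[n][capacity]

# made-up items: values 1500, 3000, 2000 with weights 1, 4, 3, capacity 4
print(knapsack([1500, 3000, 2000], [1, 4, 3], 4))  # 3500
```

Drawing the grid by hand first, the way the book has you do it, makes the two options per cell (skip the item, or take it) much easier to see than the pseudo-code did.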

So if you’re like me and struggle to stay focused when pseudo-code full of single-letter math variables baffles you, here is what I wish I had known a year ago when I started trying to shore up this deficiency. Again, you may take a different path, but this is what I wish I had done.

  1. Buy and read “Grokking Algorithms” first. Buy a notebook and take ample notes. Attempt the exercises and take your time; maybe even read the chapters twice. Time requirement: 1 – 2 weeks.
  2. Find a PDF or a copy of “How to Solve It by Computer” or a similar guide. It’s a really old book, and the program code is Pascal, but it’s a great transition from Grokking to math-based pseudo-code. It walks through the logic both in setting up each algorithm and in how it works once implemented, starting with small problems and building from there. Time requirement: 1 – 2 months. Working through a couple of problems per day–AFTER READING GROKKING!–should make this last a couple of months.
  3. Coursera course – The Algorithmic Toolbox course was helpful in thinking about both time and space complexity, as the assignment submissions have limits on both. It’s 5 – 6 weeks long but, depending on your speed and time commitments, can be completed in less time. (The course has starter files in C++, Java, and Python for each assignment, but will allow any other language as long as you write the tests for it. I just stuck with Python.) Time requirement: 6 weeks. This is a subscription-based course, so it’ll cost you $49/mo until you finish.
  4. Advanced/Practical Algorithm Practice – I may give Interview Cake another try, or I might start looking at “Cracking the Coding Interview” or “The Algorithm Design Manual”. I haven’t quite gotten here yet, as I’m debating taking the next course in the UC San Diego sequence, “Data Structures”. Time requirement: ongoing. Starting soon!
  5. Keep those skills sharp by working problems on HackerRank, TopCoder, Project Euler, etc. Time requirement: ongoing…

Some might wonder if all this is worth it. It is entirely possible the companies you interview with will hand out a coding assignment rather than a whiteboard challenge, and a good many are headed that way. My take on it is that it’s better to have it and not need it than need it and not have it. And if you have aspirations to land a job at one of the big tech companies like Google or Amazon, you’ll definitely be going through the whiteboard. For me, it’s helped increase my ability to solve problems at work and, I think, made me a more careful programmer.

One final note: the process I described above works for me. Don’t consider it the “only way”. I have a family whom I want to spend time with; 8 – 12 hour marathon sessions on algorithms and problem solving just don’t happen for me. My time is about an hour at night and a few hours on the weekends. This is what I wish I had done with my nights for the past year instead of fumbling through each half-hearted, overwhelmed attempt. I’ve completed #1 & #3 and am currently finishing #2 (I’ll have a link to a GitHub repo for it soon). I hope to start #4 by the end of next week, as I just received my copy of “Cracking the Coding Interview” from Amazon over the weekend.



Learning is enjoyable sometimes…

A few weeks ago, I signed up to take an Algorithm and Data Structures course on Coursera. Part of my reason for this is that I like having stuff explained to me, but mostly because the course has deadlines and I find it easier to stay motivated if the threat of a “0” is looming.  The coursework can be completed in any language but they have starter files and tests already set up for C, C++, Java, & Python.  We’re welcome to use other languages, but we’d have to write our own tests.  Fortunately for me, my first language was Python so I’m dusting off those skills to complete the coursework.

I’m in week 4 of 5 in course 1 of a six-course series. While I don’t know if I’ll complete all six, I have gained some pretty nice insight into how to tackle problems. Greedy strategies and divide-and-conquer, among others, are explored, and the course requires us to think about the time limits imposed when we submit our assignments. Correctness is not only “can you write an algorithm to complete the task?” but also “can you keep the algorithm from going over the time limit?” No brute-force/naive answers allowed!

Getting back into Python has been entertaining. Initially I had to write out my algorithm in Swift using Xcode’s playground, and then translate it into Python.  After a couple of weeks I’ve been able to transition into just using Python.  I still write it in Swift afterwards just to compare the syntax differences of the two, but it’s nice to be able to pick something back up after not using it for almost two years.

So what problems have I solved? We started with Fibonacci, which I’ve done before, but this was the first time I really explored how it is computed recursively vs. iteratively. Even if not optimal from a memory standpoint (most of the time), recursive functions are really beautiful in their simplicity; a recursive call reminds me of a more elegant while-loop. Initially I began to wonder why even learn recursive functions if they eat up memory so quickly when dealing with large numbers, until we got to the Greatest Common Divisor (GCD) problem.
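To show myself the difference, I sketched both Fibonacci versions in Python (my own toy code, not from the course):

```python
def fib_recursive(n):
    """Elegant but expensive: recomputes the same subproblems over and over."""
    if n <= 1:                 # terminating condition
        return n
    return fib_recursive(n - 1) + fib_recursive(n - 2)

def fib_iterative(n):
    """A while-loop version: linear time, constant memory."""
    a, b = 0, 1
    while n > 0:
        a, b = b, a + b        # slide the window forward one step
        n -= 1
    return a

print(fib_recursive(10), fib_iterative(10))  # 55 55
```

Both give the same answer, but try the recursive one with n = 40 and you’ll feel the cost of all that recomputation.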

Note: Recursion in programming is a function that continues to call itself until some terminating condition is met. The function calls itself within its own body. For example:

someFunction(parameterA, parameterB) -> Value {
    if terminatingCondition { return baseValue }
    // do some stuff
    return someFunction(newParameterA, newParameterB)
}


GCD compares 2 (or more) numbers and finds the largest divisor they share. For example:

gcd(6,8) -> 2

2 is the largest number that six and eight can both be divided by. 3 works for 6 but not 8; 4 works for 8 and not 6; you get the idea.

But what if the numbers are really large? Say 2,877,988 and 956,432?

Can you imagine writing a loop that starts at 1 and divides all the way up to those numbers, checking to see if both have remainders of 0? How long that must take!

Well, the course let us in on a secret about large numbers: the GCD of the second number and the remainder of dividing the first number by the second is the same as the GCD of the original two numbers.

[Screenshot: the lemma, gcd(a, b) = gcd(b, a mod b)]
I didn’t really understand this until I saw it in action. (See below) Also, how did people figure this stuff out?!

So even though the numbers get smaller, the GCD stays the same. Basically, to find the GCD of the original numbers, keep dividing the first number by the second until the remainder is 0. Once that happens, the last number you divided by is your GCD.

Here is a better visual of the “trick”:

[Screenshot: step-by-step worked example of the trick]
This made much more sense to me than the Lemma above.

I have two numbers, A: 3,918,848 and B: 1,653,264. I divide 3,918,848 by 1,653,264 and the remainder is 612,320, which we will call A-prime. I replace A with B, put A-prime in B’s old place, and perform the same operation again. I keep doing so until A-prime is 0.

In this case, recursion actually doesn’t take terribly long, as it reduces the size of the numbers pretty quickly, unlike when building numbers up in a Fibonacci sequence (1, 1, 2, 3, 5, 8, 13, 21…). So an efficient way to find the GCD of two numbers actually is to use recursion, though probably not the only efficient way…

Here is the Python function:

[Screenshot: the recursive GCD function in Python]
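In case the screenshot doesn’t come through, the function was the standard recursive Euclidean algorithm; in Python it’s something very close to this:

```python
def gcd(a, b):
    """Euclid's algorithm: gcd(a, b) == gcd(b, a % b), repeated until the
    remainder hits 0, at which point the last divisor is the answer."""
    if b == 0:               # terminating condition
        return a
    return gcd(b, a % b)     # swap in the divisor and the remainder

print(gcd(3918848, 1653264))  # 61232
```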

It amazes me how short it is.

Using that algorithm allowed me to complete another assignment that finds the Least Common Multiple. I won’t go into it, but they provided another tip on how to calculate LCM using GCD, and again it produces a short, elegant solution.
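I won’t spoil the course’s exact tip, but the classic identity is lcm(a, b) = (a × b) / gcd(a, b), which I’d guess is what they were pointing at; a quick sketch (my code, not the course’s):

```python
def gcd(a, b):
    """Iterative Euclidean algorithm."""
    while b:
        a, b = b, a % b
    return a

def lcm(a, b):
    """The classic identity: a * b == gcd(a, b) * lcm(a, b)."""
    return a * b // gcd(a, b)

print(lcm(6, 8))  # 24
```

Dividing by the GCD first keeps the intermediate numbers small, which matters once the inputs get large.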

I think the decision to take this course, for me, was the correct one. I’m being exposed to some handy tools, I’m obligated to finish assignments since the course costs me a monthly fee, and the assignments provide a nice warmup to my day while I drink my first cup of coffee. (Note: if I haven’t finished the problem after 10 – 15 minutes, I wait until my lunch break to finish.)

This week we began looking at binary and linear search algorithms. In the past, when I looked at these, I just tried to memorize them so I could regurgitate them if they came up in an interview. Now I have a list of questions I ask myself, and I can look at mathematical explanations and make sense of them, which aids my understanding all the more.
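As a refresher for myself, here’s a minimal binary search in Python (my sketch, not the course’s starter code):

```python
def binary_search(sorted_items, target):
    """Repeatedly halve the search range: O(log n), vs O(n) for a linear scan.
    Requires the input to already be sorted."""
    low, high = 0, len(sorted_items) - 1
    while low <= high:
        mid = (low + high) // 2
        if sorted_items[mid] == target:
            return mid                  # found it: return the index
        elif sorted_items[mid] < target:
            low = mid + 1               # target is in the upper half
        else:
            high = mid - 1              # target is in the lower half
    return -1                           # not found

print(binary_search([1, 3, 5, 8, 13, 21], 8))  # 3
```

The questions I now ask myself (is the data sorted? what happens when the range is empty? which half do I keep?) map directly onto the three branches above.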

None of what I’m doing may ever be needed, but I’ve finally gotten to the point where I no longer view learning algorithms as a means to an end in a job search…which is where I was a year ago. Instead, it has become “I really want to learn this stuff. I find it interesting.” Perhaps at some point I’ll even do it without having to spend money to motivate myself!

Tripped up on closures…again!

I’ve posted before about learning closures. Earlier this year, I was really making an effort to understand how they work. My examples, though, were contrived, or rehashes of other people’s examples. As such, my ability to remember them suffered. Context is so crucial to understanding something. I was working on a problem at work today where I needed to sort an array of data. For some reason, I kept blanking on how to do it. I could write the code that does it, but I couldn’t explain it to myself. Here was my problem…

var meetings: [Meeting] = [
    Meeting(startTime: 0, endTime: 1), Meeting(startTime: 8, endTime: 11),
    Meeting(startTime: 2, endTime: 8), Meeting(startTime: 10, endTime: 12),
    Meeting(startTime: 9, endTime: 10)]

I wanted to sort the array by startTime. Had this been just an array of numbers, I could have sorted it pretty quickly. For some reason, though, having instances of a Meeting class was throwing me off. I knew that this code would give me what I needed:

let sortedArray = (meetings.sorted { $0.startTime < $1.startTime }) 

This is really shorthand syntax. I know it works, but I could not explain it to myself. So I tried the longhand way. Unfortunately, when I tried to write it in a way that made sense to me, I kept getting errors from the Swift playground page I was working in. Undaunted, I went looking for answers. The Apple documentation offers this for closures:

{ (parameters) -> (return type) in (statements) }

Hmmm…okay, the parameters I want to pass in, I thought, were the startTime of the meetings. So I wrote something like this.

let sortedArray = meetings.sorted(by:
    { (s1, e1), (s2, e2) -> Bool
      in return s1 < s2 })

What I thought I needed to pass into the closure were the start times and end times, sorting on those. I was way off.

As it turns out, when I create a new array like I did with sortedArray and declare its type, in my case [Meeting], the array’s element is what gets passed into the closure, not the properties within the Meeting instance. I didn’t discover this until I explicitly typed it out and used my old friend, Option-click, which told me what type sortedArray was. When I began typing the sorted function on the meetings array, Xcode offered this little gem:

[Screenshot: Xcode autocomplete for sorted(by: (Meeting, Meeting) -> Bool)]

As it turns out, I’m not passing meeting.startTime or meeting.endTime into the closure. I’m passing the entire Meeting instance. In fact, the closure, as seen in the screenshot, takes two meetings from the array and compares them to return a Boolean value. What it evaluates is everything after in. Once I have the meetings passed in, I can access any of their properties to do my comparison.

[Screenshot: the longhand closure written with parameters m1 and m2]

What I didn’t understand before is that m1 and m2 are just parameter names for the objects I’m passing in. They represent the meeting instance, not the properties contained within the meeting…which is what I really wanted to evaluate.

So in the end, in order to explain something to myself, my code wound up looking like this:

let sorting: [Meeting] = meetings.sorted(by:
    { (m1, m2) -> Bool in
        return m1.startTime < m2.startTime })

Yes, it’s not very “Swifty”, and in the future I can use the shorthand trailing-closure syntax and be able to look at it and know exactly what is going on. However, using code and being able to explain what it is doing is a better indication of understanding than blindly trusting something I find on the internet, seeing that it works, but having no idea why.

I realize this topic is pretty junior for many developers, but it further reinforces what I discovered when taking calculus in college: if you don’t use something often, you forget it. Code is no different. It also helps, when learning something, to have a problem you are trying to solve rather than seeing someone else’s example and saying “Yeah, that makes sense.” I do not expect this to be my last post on closures, especially since I still have more functional programming to learn!

On the upside, at least as I see it in my growth as a developer, I went to the Apple documentation rather than Googling for an answer on StackOverflow. As tempting as it was to ask my fellow iOS Slack channel mates, I decided to see if I could figure it out first. When time isn’t urgent, it’s amazing how much patience I have when solving a problem!