
JavaScript Performance Analysis And The Mutating Interface: Part One

Dev Team, We've Got A Problem

You have a problem. People are starting to whisper that your web app is slow and unresponsive. In particular, there is a page where the data takes forever to load. The frontend developers are pointing fingers at the backend developers, saying the backend and database are too slow. The backend developers are pointing fingers at the frontend developers, saying the heavy JavaScript MVC framework they are using is too slow. Tempers flare, and soon the real issue, that users are unhappy with the application, is lost in the ensuing ego storm. Not one to get caught up in the blame game, you dive into the code and start looking for answers.

What do we have here?

Let's take a look at a code snippet (generalized, of course) from the area where the team suspected the issue was.
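Something along these lines is a fair stand-in (a minimal sketch; the service, factory, and scope variable names here are hypothetical, not the real ones from the app):

    // Sketch of the controller: fetch data, format it, then hand it to the
    // factory that updates the data source behind the UI. Names are illustrative.
    angular.module('app').controller('ReportController', function ($scope, reportData) {
        reportData.getData().then(function (response) {
            // Format the raw server records for display (lodash map).
            var items = _.map(response.data, function (record) {
                return { id: record.id, label: record.name };
            });

            // The factory pushes the formatted items into the scope's data source.
            reportData.updateDataSource($scope.dataSource, items);
        });
    });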

This code is extremely basic, and it happens to live inside an AngularJS controller. It requests some data from the server, and then an AngularJS factory updates a scope variable so the UI can update accordingly.

Simplest thing that could work

Tempers are still flaring and team communication and trust are stretched thin, so we need the simplest thing that will work. I am not concerned with fancy coding tricks or technical feats of wizardry. I need simple, understandable, and trustworthy. I also need controllable granularity in the performance profiling. At this early discovery stage I won't get much value from a performance profiler that shows detailed stats on every event listener and anonymous function. I need large, coarse measurements that can help me identify the areas with the most room for improvement. With that in mind, let's take a look at what those requirements led me to build.

Detailed JavaScript profiling

Since I am talking about macro-optimization at the moment, I am ignoring the existing JavaScript profilers. For more information on those, check out Chrome's profiler and Firefox's profiler. They are helpful once you have found the obvious bottlenecks and need more detail, but right now they provide too much information to be usable.

No-frills Profiler

After a quick search for something someone had already built that would fit my needs came up empty, here is what I came up with.
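In essence it is nothing more than a stopwatch wrapped around console.log. A minimal sketch of the idea (the real version gets dissected in part two):

    // No-frills stopwatch profiler: name a measurement, start it, stop it,
    // and log the elapsed milliseconds.
    var stopwatch = {
        timers: {},
        start: function (name) {
            this.timers[name] = Date.now();
        },
        stop: function (name) {
            var elapsed = Date.now() - this.timers[name];
            console.log(name + ' time in ms ' + elapsed);
            return elapsed;
        }
    };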

Taking this simple code, we can instrument the earlier example as follows.
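Again as a sketch, using the same hypothetical names as before:

    // Coarse timing around each of the three suspected areas.
    stopwatch.start('Get data');
    reportData.getData().then(function (response) {
        stopwatch.stop('Get data');

        stopwatch.start('Manipulate data');
        var items = _.map(response.data, function (record) {
            return { id: record.id, label: record.name };
        });
        stopwatch.stop('Manipulate data');

        stopwatch.start('Update datasource');
        reportData.updateDataSource($scope.dataSource, items);
        stopwatch.stop('Update datasource');
    });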

High level performance

It helps to have a high-level picture of what is going on in the page you are profiling. The page I am focused on breaks down into a few operations: get the data from the server, manipulate the data into a displayable format, and update the data source that drives the UI.

Some team members thought that a slow response from the server was holding up the page render. Others thought that the map function used to format the data was too slow and should be replaced with an optimized for loop instead of the map provided by lodash.

We ran the instrumented code and saw the following results in the console:

Get data time in ms 23
Manipulate data time in ms 15
Update datasource time in ms 90000

Once we measured with the instrumented code, it became clear what the issue was: the data source update! The data load and manipulation were both sub-second, while the data source update was taking over a minute in Chrome. When we looked at the update function we immediately found the culprit:
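It looked roughly like this (a sketch, assuming a Knockout-style observableArray; the real code was more involved):

    // Original update: create a fresh observable array, then add items one at
    // a time. Every push raises change notifications to all subscribers.
    function updateDataSource(dataSource, items) {
        dataSource.data = ko.observableArray();
        for (var i = 0; i < items.length; i++) {
            dataSource.data.push(items[i]);
        }
    }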

Performance Analysis and Resolution

What do we have here? We create a new observableArray. We can mostly ignore what an observableArray is and just summarize it as an array that raises events when its data changes so that client code can immediately refresh. We then iterate over the data and add items one at a time. Since this was an observable array, it was raising events every time we added an item. The events, coupled with the inherent slowness of iterating one item at a time, were wasting CPU cycles, eating memory as events fired, and destroying the responsiveness of the web app.

A little research into the API documentation for the observableArray, and we replaced the update function with the following code:
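Sketched again with the Knockout-style observableArray assumed above, the idea is to hand the whole array to the observable in one call, which replaces its contents and raises a single change notification instead of one per item:

    // Replacement update: bulk load the data in one call.
    function updateDataSource(dataSource, items) {
        dataSource.data(items);
    }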

Here are the results after running the code using the new update:

Get data time in ms 21
Manipulate data time in ms 19
Update datasource time in ms 10

It turns out that the observableArray had a function that would bulk load data and skip most of the event firing. Re-profiling the code showed that this change turned an update that averaged 1.5 minutes into a 0.01 second operation. It was no longer the bottleneck, and everyone was relieved that we had found and solved the performance issue.

As a side benefit, since we had encapsulated this operation in the update function, other parts of the app also reported improved performance, as everyone benefited from the reduced update time.

Tune In Next Time

The team celebrated and congratulated each other. Not only had they solved the issue and made the users happy, but they had done it together, with data rather than emotions.

In part two we will tear apart the stopwatch code that we created and see if we can make its usage clearer and less bug-ridden. Tune in next time, and remember: performance problems like to be measured more than guessed. It shows them that you care :-)




