This post is a part of a series on application performance. To quickly recap, the previous post presented a high-level process for handling performance concerns in a Mendix application. It also outlined how and what metrics to gather to best diagnose performance problems.
The focus of this blog post will be xas. Now, some of you are probably wondering,
What on Earth is xas?
As you probably know, Mendix apps are what are called Single-Page Applications (SPAs). This means that navigating between different screens in the app does not reload the whole page, which gives a feeling of responsiveness. Of course, the layout of the screen and the data still have to be loaded somehow. This is done using xas, a Mendix technology for executing dynamic queries (part of the Mendix Client API).
Mendix uses xas to get pages, layouts, and snippets, as well as data (both over associations and from the database). Furthermore, xas is used to commit and delete persistent objects and to invoke microflows from the client. Given how ubiquitous xas queries are, it is no wonder that they are very important for the performance of your application.
How does xas affect the performance of my Mendix application?
HTTP is used to relay xas requests from the client to the server. This means that each request has, at a minimum, to travel to the server, be processed, and then travel back to the client.
First, let us look into the travel time. Depending on the locations of the server and the client, the round-trip time alone can be significant (50 to 400 milliseconds). Quite often, one xas request leads to another: for example, after loading a page with a data grid, the data for the grid needs to be loaded next. Unfortunately, the data for the grid can only start loading after the page has loaded. The result? If a request to the server takes roughly 100 ms, then loading the page will take at least 200 ms, because the requests have to be made sequentially, i.e. one after another, as the small sketch below illustrates.
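To make this concrete, here is a minimal sketch in plain JavaScript (not Mendix internals) that simulates two dependent requests with a 100 ms round trip each. Because the second request can only start once the first has completed, the total is the sum of the two latencies:

// Minimal illustration: two dependent requests, each with a simulated 100 ms round trip.
function fakeRequest(name, latencyMs) {
    // Simulates one client-server round trip.
    return new Promise(resolve => setTimeout(() => resolve(name), latencyMs));
}

async function loadPageThenGridData() {
    const start = Date.now();
    await fakeRequest("page", 100);      // first the page itself...
    await fakeRequest("grid data", 100); // ...then the grid data, which depends on the page
    console.log(`Total: ${Date.now() - start} ms`); // roughly 200 ms, not 100 ms
}

loadPageThenGridData();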
Next, let us turn our attention to the second big performance factor in xas requests: the time it takes the server to handle a request. One of the most important things to note here is that there is only one database and a limited number of servers (in my experience, usually one). If the server/database is already busy with other xas requests, then it will take longer to handle new ones. That is why it makes sense to minimize the number of xas requests.
Enough theory, let's see an example
As promised, we are going to look at a real-world application where a colleague and I worked as performance consultants. For obvious reasons, I am not allowed to use the analysis and model of the real application, so instead I rebuilt a minimal working example (MWA). This MWA has all the essential features of the real application that I want to focus on in this post.
Exhibit A
The performance issue we were asked to fix in this application was simple: the most used page in the app took an unacceptably long time to load. To give you an idea, the page loaded in 5 seconds when only a single user was logged in. This jumped to 8 seconds on a normal day with a handful of users. Just imagine having to wait that long for a page to load. This was driving the employees crazy, and so the company asked us to help. The page in question hosted a planning timetable: various resources were booked by time slots, as shown below (the real app looked much better)
And so my colleague and I set off. Our end goal was to bring the load time down to 1 second.
We started by measuring the performance using APM. APM has an excellent feature for analyzing xas queries that measures the true load time as experienced by the user. It turned out that most of the time (roughly 3 seconds) was spent loading the time slot data! The data was loaded in many small xas requests, each of which only took a few hundred milliseconds or so. But put together, they amounted to several seconds. That was a strong clue. A quick inspection of the model uncovered the following design for the timetable
Can you see what was causing the performance issue? Can you think of a way to fix it?
Take a minute to think about it, then read along to see what we did.
The problem is that the tasks are loaded per resource. This made a lot of sense from a design perspective and was the simplest possible solution. However, it put a considerable strain on performance, as it resulted in many individual xas requests to load the tasks for each resource separately. You don't have to take my word for it; see for yourself in this nice visualization by APM.
An APM analysis of the page load time, showing the individual xas requests being made and their duration as experienced by the user.
The fix was to reorganize the page so that the time slots are loaded in a single big request. To preserve the layout, we used some simple CSS¹. The resulting page model was
At this point, our confidence was high that the fix would give much better performance. But we did not go by gut feeling. Instead, we deployed the fix and ran it through APM again to check that it was really working. The results speak for themselves:
The load time of the page went from 1.7 to 0.7 seconds. The results for the real application were even more dramatic. The time slots load time went from 3 seconds to 200 milliseconds. This brought the total page load time from 5 seconds down to just over 2 seconds.
Before continuing with the next exhibit, let me point out that this is a classic example of the trade-off between simplicity and performance. To improve the performance of the page, we had to introduce additional complexity. That is why it was important that the change was backed by exact measurements from APM.
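For those who want to see the difference in terms of the Mendix Client API, here is a rough, purely illustrative sketch of the two retrieval patterns. The real page used standard data widgets and no custom JavaScript; the Planning.Task entity, the Planning.Task_Resource association, and the render helpers are made-up names used only for the illustration.

// Before: one xas request per resource, i.e. N round trips for N resources.
function loadTasksPerResource(resources) {
    resources.forEach(function (resource) {
        mx.data.get({
            xpath: "//Planning.Task[Planning.Task_Resource = '" + resource.getGuid() + "']",
            callback: function (tasks) {
                renderTasksForResource(resource, tasks); // hypothetical render helper
            },
            error: function (e) { console.error(e); }
        });
    });
}

// After: all tasks retrieved in a single xas request, i.e. one round trip.
function loadAllTasks() {
    mx.data.get({
        xpath: "//Planning.Task",
        callback: function (tasks) {
            renderTimetable(tasks); // hypothetical render helper; CSS keeps the per-resource layout
        },
        error: function (e) { console.error(e); }
    });
}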
Exhibit B
Encouraged by our success, we went on a mini crusade to eliminate all unnecessary xas requests. What follows is something I am sure every Mendix developer can relate to.
The page had a nice message that showed the full name of the logged-in user. As is well known in the Mendix community, the full name is not a property of the System.User entity, but of the Administration.Account entity. So to display the full name, one needs to obtain a reference to the account object. In the application, this was done with the following microflow
The database retrieve is a very simple solution, but it results in an unnecessary request to the database. Because Administration.Account is a specialization of System.User, the $currentUser variable can be cast directly to an Account. This is an old trick that I picked up at Mansystems, and I think it is worth sharing
This gets rid of the database retrieve, but we are still left with the microflow call. Surely, there is no way to get rid of it. Or is there?
It turns out that since the release of Mendix 8, there is. The trick is to use a nanoflow in place of a microflow as the data source and use a JavaScript action to get and cast the current user. The JavaScript in question is only a few lines long and is available in the app store.
/**
 * Gets the current account from the session.
 *
 * If the current user does not exist or is not of type Administration.Account, returns empty.
 * @returns {MxObject}
 */
function CurrentAccount() {
    // BEGIN USER CODE
    return new Promise((resolve, reject) => {
        try {
            // Only proceed if the logged-in user is an Administration.Account.
            if (!mx.session || mx.session.getUserClass() !== "Administration.Account") {
                resolve(); // return empty
                return;
            }
            // Retrieve the current user object by its guid using the Client API.
            mx.data.get({
                guid: mx.session.getUserId(),
                callback: function(currentUser) {
                    resolve(currentUser);
                },
                error: function(error) {
                    reject(error);
                }
            });
        } catch (e) {
            reject(e);
        }
    });
    // END USER CODE
}
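Purely as an illustration (this is not part of the app store action, and in the app it is the nanoflow that invokes the action), the resolved account object can then be read with the standard MxObject API. Assuming the standard Administration module, where the attribute is called FullName:

// Illustration only: read the full name from the Account object returned by the action.
CurrentAccount().then(function (account) {
    if (account) {
        console.log("Logged in as: " + account.get("FullName"));
    }
});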
This makes it possible to completely eliminate the xas request as confirmed by APM.
Incredible, isn't it? The end result is a nice speedup of the page load time from 168 milliseconds (using the database) to 42 milliseconds (with the nanoflow):
To be clear, the idea is not to demonstrate how to retrieve the current account, but more broadly how performance can be improved by replacing microflows with nanoflows. Honestly, with the introduction of JavaScript actions, there is almost no logic that cannot be ported to a nanoflow. In the real application, we managed to shave off 400 milliseconds by replacing a few microflow calls with nanoflows. At this point, the page was loading in 1.8 seconds.
That concludes the examples. You can download a project with both examples from the app store.
Summary
In this blog post, we saw how minimizing the number of retrieves can greatly speed up page loading. This can be done, for example, by replacing a microflow call with a nanoflow or by grouping several retrieves into a single request.
Stay tuned for the next blog post in the series. To be continued...
¹ In our case, the CSS relied on the fixed ordering of resources. Inspect the app or check the source code for all the details. An alternative approach would be to flag the last object in a row or to add dummy objects that signal the start of a new row (e.g. a task that always starts at 6 AM).