Although we absolutely understand that metrics are a key component for driving growth and the underlying guide for optimizing against the most critical issues, we have always struggled to measure success within our company.
In fact, we have to admit that during the first year and a half of iomando's operations we were anything but data-driven. It's not that we didn't care; it's just that the daily work consumed most of our time and we weren't placing enough value on measurement to take a break and start planning for it.
A Hunch-Driven Company
But a few months ago, some mentors and advisors noticed that we were not watching our key metrics at all, and they were astonished. All of them are well-established, experienced investors or executives at prominent Spanish startups, and they could not believe we weren't watching over the key health indicators of our business.
With their help we started an “emergency plan” to quickly identify the most important metrics in our business and start making decisions based on what the data was saying. Fast forward to today, though, and I must admit I don't know what we were thinking at the beginning, or how we expected to make the right decisions without any data to back them up. Fortunately, things have changed around here.
After learning some invaluable lessons from our mentors and advisors, I immediately fell in love with data analysis and with tracking the right data points in order to make better, more informed decisions. It felt to me like a lighthouse showing the path to optimize for success.
We were growing, yes, but we weren't obsessively measuring growth and trying to optimize for it… If only we had known all this before…
Believe it or not, until that point data was not the driver for decision making. Instead, features were built on an intuition or common-sense basis. Of course we asked our customers, and we were very close to them, always asking for feedback, but we didn't measure, we didn't collect, and therefore we drew no quantitative conclusions from it. Maybe some qualitative points, but nothing beyond the usual hunch.
Our business is a curious one when it comes to measurement. That's because our user is not our direct customer: we get paid by the administrator or manager of the facility, while the people who actually access the space don't pay anything.
Still, we place equal value on both sides. Although the user is not paying, they will be the first to report a bad experience to the manager if something goes wrong. Of course the manager doesn't want problems arising from the access system; it's supposed to just work. So even though the manager is the one who pays for the service, we care equally about both ends of the usability spectrum.
That being said, within this “emergency plan” we sought the data points that best correlated with the health of the variables we wanted to optimize for. In our case we were looking for:
- Growth
- Engagement
- Quality of service
Growth
Of course, the number one indicator we want to optimize for is growth. Growth guides us and shows whether we are really executing on schedule. For us there are two metrics that speak directly for growth and that we consequently watch closely: Number of Active Users and Number of Spaces.
We consider these two metrics the voice of growth for two reasons. First, the more users we can activate, the healthier the service will be, because it means users are adding up. Second, the number of spaces speaks for itself: it is the vivid reflection of sales and installations, and thus correlates directly with top-line revenue.
We consider an Active User any user who has used the app (has accessed some place) during the last week. We know it's a little aggressive because the time frame is narrow, but iomando is a service that is supposed to be used every time we access a space, and accessing is a common thing to do. We don't even count downloads of the app or leads from the sales department, because those actions are still way up the funnel. We only count users who have been granted permission by an administrator and have accessed during the last week.
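The definition above boils down to a distinct count over a sliding seven-day window. Here's a minimal sketch of that computation; the log format (a list of user/timestamp pairs) is an assumption for illustration, not our actual schema:

```python
from datetime import datetime, timedelta

def weekly_active_users(access_log, now):
    """Count distinct users with at least one access in the last 7 days.

    `access_log` is a hypothetical list of (user_id, timestamp) pairs;
    the 7-day window mirrors the Active User definition above.
    """
    cutoff = now - timedelta(days=7)
    return len({user for user, ts in access_log if ts >= cutoff})

now = datetime(2014, 6, 15)
log = [
    ("alice", datetime(2014, 6, 14)),  # inside the window
    ("alice", datetime(2014, 6, 10)),  # same user, counted once
    ("bob", datetime(2014, 6, 1)),     # outside the window, ignored
]
print(weekly_active_users(log, now))  # 1
```

Note that downloads or sales leads never enter `access_log` at all, which is how the sketch mirrors the "only granted-and-accessed users count" rule.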
The number of spaces has been steadily going up over time, and that's a good thing. We don't see any kind of exponential growth there, but we weren't expecting that either. The sales process is slow, and since we recently placed our focus on larger organizations the sales cycle is longer than before, but each sale also represents more unit revenue, which is good. Still, we are working on tactics to bend the curve and convert spaces at a higher rate.
Engagement
Engagement is a tricky thing to measure. It can be plotted quantitatively, but there's also a qualitative side to it. How does the user feel about using their phone instead of a key? How secure do they feel changing their access habits? These are questions worth asking that can't be drawn in a 2x2 matrix. But let's focus on what we could measure from the beginning, and see what the data has to say about engagement.
Open requests are our main indicator when it comes to engagement. Although they are inevitably linked to growth, the total number of openings translates directly into how engaged people are with iomando. The only flaw: imagine a growth phase where we are acquiring large customers, and although the administrators are activating users in the service, nobody is actually using it. The company would be growing, sure, but open requests would drop. That's why open requests account for engagement: they measure the value users extract from the service.
To discount the effect of growth on open requests, it is useful to normalize by the number of AUs. This way, the total number of requests is not distorted by rapid growth. This metric is key for us because it shows, on average, how many opens there are per user across the service. Growth in this metric is the holy grail of engagement. It's a little bumpy, but we are getting there.
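The normalization is simple arithmetic, but it's worth spelling out because it is exactly what separates "more opens because more users" from "more opens per user". A sketch, with purely illustrative numbers:

```python
def opens_per_active_user(total_open_requests, active_users):
    """Normalize raw open requests by the weekly active user count,
    so growth in the user base doesn't inflate the engagement signal."""
    if active_users == 0:
        return 0.0
    return total_open_requests / active_users

# Illustrative numbers only: raw opens doubled, but the per-user
# engagement signal stayed flat, which is exactly the distortion
# the normalization is meant to expose.
print(opens_per_active_user(1200, 100))  # 12.0
print(opens_per_active_user(2400, 200))  # 12.0
```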
Quality of Service
This is, again, another qualitative data set that we tried to bring into the quantitative domain. Although we engage with our customers on a regular basis (we have built a program of interviews and check-ins with all of them), we don't have an effective way to extract the most value from this feedback.
Within QoS we want to make sure the time it takes to open the door is always decreasing, or at least fluctuating within tolerable margins. Average open time is a critical parameter for the user experience. The less time the user spends in our app, the better, because it means she accessed quickly and smoothly. So we measure the average gap between the tap and the server request reaching the device in the field. This metric is tricky because it depends on three (not necessarily related) factors:
- Mobile to server: the user may have a poor connection, so the request might take long to reach the server. It's not on us, but we have to take accountability for it.
- Server: the time we take to process the order and launch the request to the device.
- Server to device: again we are somewhat in the dark, because we can't control the network operator's QoS, but it's important at least to understand what the averages stand for.
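Given timestamps at each hop, the three factors above fall out as simple differences. This is a sketch under the assumption that we can instrument four points (tap, request arriving at the server, request leaving the server, request reaching the device); the function and field names are hypothetical:

```python
def open_time_breakdown(t_tap, t_server_in, t_server_out, t_device):
    """Split the tap-to-device latency into the three segments
    described above. All timestamps are in seconds.
    """
    return {
        "mobile_to_server": t_server_in - t_tap,   # user's connection
        "server": t_server_out - t_server_in,      # our processing time
        "server_to_device": t_device - t_server_out,  # operator's network
        "total": t_device - t_tap,
    }

# Illustrative timestamps: a 1.1 s end-to-end open.
breakdown = open_time_breakdown(0.00, 0.35, 0.40, 1.10)
print(round(breakdown["total"], 2))  # 1.1
```

Averaging each segment separately over many opens is what lets us tell whether a regression is on our side (the `server` segment) or on the networks at either end.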
As you can see, the big drop corresponds with an updated communications protocol we introduced with v1.1 of iomando. The second (less significant) drop corresponds with the introduction of v2.0, which also brought some improvements on the communications side.
And finally, we track the number of network failures as a percentage of the installed base. Because our technology relies on the cellular network, we have certain dependencies on its quality. If there's a failure on the network, we can have a hard time recovering the device, because most of the time the fault is not on our side. But we can't explain that to our customers.
So while we work on an offline mechanism to open the door without the need for the cellular network, we have also worked on improvements to the software running on the device, so it can recover and get the most out of poor cellular connections.
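Expressing failures as a share of the installed base keeps the indicator comparable as we add spaces. A minimal sketch, with illustrative counts:

```python
def network_failure_rate(failed_devices, installed_base):
    """Network failures as a percentage of the installed base,
    the QoS indicator described above. Counts are illustrative."""
    if installed_base == 0:
        return 0.0
    return 100.0 * failed_devices / installed_base

# 3 devices unreachable out of 150 installed.
print(network_failure_rate(3, 150))  # 2.0
```

Because it's a ratio rather than a raw count, a growing installed base with a constant number of flaky sites shows up as an improving number, which matches how we actually experience the problem.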
After our advisors' initial panic attack, when we finally put it all together, things didn't look that bad at all. Still, this exercise has been one of the most valuable lessons for me in business and in managing product. I've learned that data is not valuable by itself; information is what's valuable. But information is only useful if it points to the right assumptions.
So I found three levels of abstraction in there that form a useful framework for analyzing data and putting it to work for your business. Almost all of the time, the data available and the variable you are optimizing for are not the same thing. The magic of data analysis lies in your ability to match different data sets so that their behavior correlates directly with the variables you want to optimize for.
The problem, though, is that sometimes it's hard to unlink the data from the variables. By that I mean it's important not to get fixated on the data and start optimizing with the sole goal of bending the curve. The goal should always be to deliver a better product and improve the overall customer experience, and we need to keep that in mind.
1. Although the number of AUs is considered a growth metric, it also has something to say about engagement: because of its weekly recurrence, adding more AUs can also mean that people are simply using the service more. For this reason, the ratio of AUs to the total user base is also useful.