POSTED : July 29, 2016
BY : Lou Powell
Categories: Business Optimization, Digital Engineering
Typically, when we think about microservices, we think about breaking down the monolith: deconstructing a large custom system into small, autonomous, functionally aligned services. But microservices can also be new systems. And if you are just getting started with microservices and want to make a difference in the delivery of digital solutions, then building microservices instead of adding new functionality to the monolith may be just the ticket.
A pattern we have seen emerge over time is that the larger your digital presence, the more you will need to query core business data. Let’s look at the evolution of how digital channels have exponentially increased the demand for viewing hotel room inventory.
Looking at hotel room inventory, we know that the data carried in the business systems that manage a hotel room has certainly changed, but the basic information is still the same as it was in 1965: location, price, number of beds, and availability. There are many more hotel rooms in the inventory today, but the data per room is still pretty similar.
However, the number of queries run against that data has grown exponentially since 1965, and the patterns for selling the inventory have changed significantly since the emergence of the Internet and the adoption of the smartphone. This is the challenge that digital transformation has created, and it is where segregating data for Command (operational management of the data) and Query (data usage for servicing omnichannel experiences) creates significant advantages.
The concept of Command and Query Responsibility Segregation (CQRS) appeals to me. CQRS is a pattern that segregates the operations that read data (Queries) from the operations that update data (Commands). This is an excellent approach for improving Time to Interaction (TTI) on websites and apps. TTI is that magical moment when a user who is on their desktop, laptop, tablet or smartphone is able to learn something and take action (enter some data or click something). TTI is all about page load speed. And page load is dependent on fast data.
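To make the CQRS split concrete, here is a minimal sketch in Python. All names are illustrative, not from any particular framework: the command side owns writes to the master store and publishes change events, while the query side maintains its own denormalized read model tuned for fast lookups.

```python
from dataclasses import dataclass, field

@dataclass
class RoomCommandService:
    """Command side: owns all writes to the master (normalized) store."""
    master: dict = field(default_factory=dict)       # room_id -> full record
    subscribers: list = field(default_factory=list)  # listeners for change events

    def update_room(self, room_id, price, available):
        record = {"room_id": room_id, "price": price, "available": available}
        self.master[room_id] = record
        for notify in self.subscribers:              # publish the change event
            notify(record)

class RoomQueryService:
    """Query side: keeps a read-optimized view, updated from change events."""
    def __init__(self, commands):
        self.read_model = {}                         # room_id -> current state only
        commands.subscribers.append(self.apply)

    def apply(self, record):
        self.read_model[record["room_id"]] = record

    def available_rooms(self, max_price):
        # Fast, simple read against the denormalized model -- no joins,
        # no history, no version metadata to wade through.
        return [r for r in self.read_model.values()
                if r["available"] and r["price"] <= max_price]
```

In a real system the event transport would be a message broker and the read model its own data store; the point of the sketch is only the division of responsibility.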
According to Radware’s Spring 2015 State of the Union for Ecommerce Page Speed & Web Performance, a good TTI is 1 second. And any TTI greater than 3 seconds is costing you opportunities. If you are above 9 seconds, forget about it. The study states that sites that deliver a TTI over 3 seconds experience 22% fewer page views and a 50% higher bounce rate. These effects are felt at companies of all sizes – from Internet giant Yahoo!, which found that making pages just 400 milliseconds faster resulted in a 9% traffic increase, to online auto parts retailer AutoAnything, which cut its page load times in half and experienced a 13% increase in sales. Just a few seconds – and sometimes even fractions of a second – can make the difference between online success and failure.
Here is the anatomy of a page request. Even if you are working in a native application, much of the data and page structure is retrieved from a web resource, and your request-and-build process looks much the same: the client requests a page, the server assembles a response from service calls, and the client then fetches the assets and data it needs to render the experience.
There are many things that an application or web developer can do to improve the TTI experience. Caching, minifying, image aggregation and compression, resource load order on the page, etc. can all make a big difference in page efficiency and faster TTI. However, there is only so much that is in the control of the developer. And when it comes to accessing enterprise resources the performance of the site will be severely restricted by the service data it is bound to. Whether it is the service calls that are hit from the application server to construct the response or the services that are called from the client app or browser to drive the UX, after all other optimizations are done, the speed of data access will determine your TTI.
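As one example of an optimization within the developer's control, here is a sketch of cache-aside caching in Python (the class and parameter names are hypothetical): hot data is served from memory, and the slow enterprise service is hit only when an entry is missing or expired.

```python
import time

class CachedRoomLookup:
    """Cache-aside wrapper: answer from memory when possible, and only
    fall back to the slow backing service on a miss or expired entry."""

    def __init__(self, fetch_fn, ttl_seconds=60):
        self.fetch_fn = fetch_fn      # the slow call to the backing service
        self.ttl = ttl_seconds
        self._cache = {}              # key -> (expires_at, value)

    def get(self, room_id):
        entry = self._cache.get(room_id)
        if entry and entry[0] > time.monotonic():
            return entry[1]           # cache hit: no backend round-trip
        value = self.fetch_fn(room_id)  # cache miss: pay the slow call once
        self._cache[room_id] = (time.monotonic() + self.ttl, value)
        return value
```

Caching helps, but it only masks the underlying problem: once the entry expires, the request is again bound to the speed of the service data behind it.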
Separated command and query structures: Segregating query from command opens up several opportunities. Master data contains historical records and often version control of data down to the field level. That’s really important when you’re managing the master data store, but not so much when the majority of your queries are only interested in the current state of the data. A microservice approach to creating a new data store specifically for queries allows the historical and version data to be stripped away and the data store to be simplified (denormalized).
Store less data: The second opportunity is to store less data. Not only can we get rid of historical and version data, we can also drop records and fields that are not relevant to the consuming audience.
Store additional demanded data: Another great opportunity is to add data. This could be aggregating many data sources to create a comprehensive data store, adding marketing data that’s not carried in your business systems, adding logistics and shipping data to products, adding specifications, or adding newly derived data to allow you to filter the data store based on consumer needs.
Choose data stores ideal for consumption needs: This is a big advantage. Relational databases are not off the table: when tuned correctly they can be very performant, and the win may simply be to denormalize the data and index it differently. Or maybe not. This is the beauty of the microservices approach: let the development team test and determine the data store technology that best fits the microservices they are creating.
Services constructed for consumption demand: There is real benefit in writing greenfield code for new services built specifically to serve the new data stores, unconstrained by the legacy functions of the business systems. A team focused solely on query never has to compromise its code to serve both query and command concerns. The right approach to solve the right problem.
Autonomy: This is a little different from the previous point about segregated code, although segregation does create autonomy. The bigger benefit is aligning the objectives and incentives of the team responsible for the new microservice, allowing them to understand business demand and to manage their backlog, delivery priorities, and delivery risk based on their consuming audience, free of the competing priorities placed on the business system.
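Several of the opportunities above meet in the query-side "projection": a function that strips version history, keeps only consumer-relevant fields, and enriches the record with marketing and derived data. A minimal Python sketch, with illustrative field names:

```python
def build_room_view(master_record, marketing):
    """Flatten a versioned master record plus external marketing data into
    a single read-optimized document. Field names are illustrative."""
    current = master_record["versions"][-1]          # keep only the current state
    return {
        "room_id": master_record["room_id"],
        "price": current["price"],
        "beds": current["beds"],
        "available": current["available"],
        # Enrich with data the business system never carried:
        "headline": marketing.get("headline", ""),
        # Derived field that exists purely so consumers can filter on it:
        "price_band": "budget" if current["price"] < 100 else "premium",
    }
```

The resulting documents are what the query store actually holds: no history, no version metadata, just what the consuming audience asks for.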
Liberating your data so that you can compose it in an ideal way and deliver it at lightning speeds is an easy decision. It’s not so easy to execute. There are challenges with replicating data periodically or in near-real-time and then ensuring the veracity and timeliness of the data. The good news is that there are a lot of folks out there doing this and there are successful models to learn from.
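One small piece of the timeliness problem can be sketched directly: if each replicated record carries the timestamp of the source change it was projected from, consumers and monitors can detect lagging data. A Python sketch under that assumption:

```python
from datetime import datetime, timedelta, timezone

def is_stale(record, max_lag=timedelta(minutes=5), now=None):
    """Return True if a replicated record lags its source beyond max_lag.
    Assumes each projected record carries a 'source_updated_at' timestamp."""
    now = now or datetime.now(timezone.utc)
    return now - record["source_updated_at"] > max_lag
```

A check like this is the starting point for the harder questions: what the acceptable lag is per audience, and what to do (alert, re-sync, fall back to the master) when it is exceeded.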
If you’re not quite sure what microservices are, here are a few resources that will bring you up to speed:
Learn more about how to build APIs for consumers, not systems.
Lou Powell brings a steadfast drive for innovation to his role as partner at Concentrix Catalyst, a Google Apigee agency that was awarded a 2019 Apigee Partner of the Year distinction. At Concentrix Catalyst, he works closely with businesses to create pioneering experiences and accelerate outcomes, unlocking greater value and market leadership. He worked in advertising and digital marketing before launching his own business, Vanick Digital, which he led for 19 years before acquisition by Concentrix Catalyst. Lou is a lifelong student of technology pattern adoption and the practices of tech natives, and he brings a design-thinking approach to technology in all of his work.
Tags: API, autonomy, Data, ecommerce, microservices, TTI