Every entrepreneur in the technology space starts his or her journey with the dream of building a product or providing a solution that addresses a strong need or solves a fairly pervasive problem, using technology as the enabler. Technology today allows us not only to provide an elegant solution to a problem but to address it at a scale and cost that would have been unimaginable earlier. And nowhere is this truer than in the consumer internet sector, where it has played out multiple times, with the often-quoted examples of Airbnb as the largest virtual hotel company or Uber as the largest virtual taxi fleet.
However, the trap that most young and enterprising entrepreneurs fall into is what I call the technology catch-22: hitting the inflection point for growth with a product engineered to be a test-the-waters proof of concept. Open-source technology stacks, low-cost public clouds and SaaS solutions for various peripheral services have meant that the barrier to entry for consumer internet businesses no longer exists. This fuels competition, which is a great thing, but it also drives a sense of urgency that tilts the balance in favour of being faster to market rather than ready for market. At that point, the justification most technology start-ups come up with is that they are following the lean startup methodology with an 'iterative' model of development: the Build-Measure-Learn loop.
The Build-Measure-Learn cycle is a great framework for ensuring that teams are as focused on understanding consumer preferences, reading market realities and analysing data as they are on building the product they believe will bring about a paradigm shift. However, it is extremely important to be nuanced about where you apply this framework and where you should take a long-term view from day one of your startup.
Every entrepreneur in the consumer internet space dreams of hitting that hockey-stick growth where users are doubling week on week, visits to the app or website are exploding, or transaction volumes are surging. And the optimism (rightfully so) is that the product will hit the right note almost from day one. If that is what one is aiming for, then why not engineer the product to meet the demands of hyper-growth from day one? The answer ranges from the risk of over-engineering (which is expensive and time-consuming) to not being sure about the pivots the product or platform may take before finding the right fit.
The balance lies in taking a more enterprise approach to building some parts of your technology stack, while leaving other parts as experiments, proofs of concept or MVPs (minimum viable products).
The first step in doing so is to identify which parts of your solution ecosystem are core platforms.
The core platforms should always be built to scale for at least your 18-month projection, if not more. Core platforms are the sub-systems that power the rest of engineering. Some of their characteristics are:
- They are infrastructural in nature or closely bound to the engineering infrastructure, e.g. transactional data stores, common stores, common libraries to manage data stores, cloud management stacks, build and deployment infrastructure, event queues, etc.
- They power primitive workflows and are generally not dependent on other sub-systems to complete these primitive operations. In other words, they are the terminal nodes of workflows.
- They implement important support functionality such as log collection, visualisation of log data, and anomaly detection platforms that require data aggregation and processing from services across the ecosystem.
- They serve high-throughput, low-latency requests from front-end clients that are designed to spike with higher user activity, such as a real-time personalisation engine for the mobile app.
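To make the first characteristic concrete, here is a minimal sketch of what a "common library to manage data stores" might look like as a core platform: one stable interface that every peripheral service codes against, with cross-cutting concerns such as retries handled once in the shared layer. All class and method names here (`KeyValueStore`, `RetryingStore`, etc.) are invented for illustration, not taken from any real system.

```python
from abc import ABC, abstractmethod
import time


class KeyValueStore(ABC):
    """Stable interface every peripheral service codes against."""

    @abstractmethod
    def get(self, key):
        ...

    @abstractmethod
    def put(self, key, value):
        ...


class InMemoryStore(KeyValueStore):
    """One concrete backend; others (Redis, DynamoDB, ...) could slot in."""

    def __init__(self):
        self._data = {}

    def get(self, key):
        return self._data.get(key)

    def put(self, key, value):
        self._data[key] = value


class RetryingStore(KeyValueStore):
    """Retries with exponential backoff, implemented once in the core library
    instead of being re-invented by every consuming service."""

    def __init__(self, inner, attempts=3):
        self.inner = inner
        self.attempts = attempts

    def _with_retries(self, op, *args):
        for i in range(self.attempts):
            try:
                return op(*args)
            except IOError:
                if i == self.attempts - 1:
                    raise
                time.sleep(0.01 * 2 ** i)  # back off before retrying

    def get(self, key):
        return self._with_retries(self.inner.get, key)

    def put(self, key, value):
        return self._with_retries(self.inner.put, key, value)


store = RetryingStore(InMemoryStore())
store.put("user:42", {"name": "Asha"})
profile = store.get("user:42")
```

The point of the sketch is the shape, not the code: because every service goes through `KeyValueStore`, the core team can swap backends, add metrics or tighten retry policy without touching the peripheral services.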
Once the core platforms have been identified, the next step is to ensure that these systems are not subject to growth-hacking ideas or reactionary changes made to support short-term initiatives. Some good practices to follow:
- Have a small team of engineers own the core systems over a long period of time, so that they develop in-depth knowledge of the platform and apply the lessons learned from past scale events.
- Ensure that every change is examined through the lens of whether it brings a generic capability to the core systems and how it impacts the current performance benchmark. This helps answer whether the change belongs in the core system or should be handled as a verticalised solution in a peripheral service.
- Ensure that the Software Development Life Cycle (SDLC) practices are never compromised in the teams owning the core systems. Poor performance or functional failures in these services can bring down the entire product.
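One way to make the "current performance benchmark" lens enforceable, rather than a matter of opinion, is to gate every core-system change on an automated latency check. The sketch below is a hypothetical example: the function under test, the percentile choice and the 5 ms budget are all assumptions for illustration, not figures from the article.

```python
import time


def lookup(key, table):
    """Stand-in for a core-system primitive being benchmarked."""
    return table.get(key)


def p99_latency_ms(fn, *args, runs=1000):
    """Measure the 99th-percentile latency of fn(*args) in milliseconds."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        fn(*args)
        samples.append((time.perf_counter() - start) * 1000)
    samples.sort()
    return samples[int(runs * 0.99) - 1]


table = {f"k{i}": i for i in range(10_000)}
latency = p99_latency_ms(lookup, "k5000", table)

# The gate: a change that regresses the agreed benchmark is rejected
# before it reaches the core system. 5 ms is an assumed budget.
BUDGET_MS = 5.0
assert latency < BUDGET_MS, f"p99 {latency:.3f} ms exceeds budget"
```

Run as part of the core team's CI, a check like this turns the benchmark question from a code-review debate into a hard pass/fail signal.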
It is important to anchor the design choices for core platforms as early as possible, so that there is consistency in the approach to technology selection, design and development of the critical sub-systems. Some of these design choices may prove to be wrong and will have to change. However, not having them to start with means that the core systems evolve ad hoc, leading to a fairly unstable and/or vulnerable base for the product.
(Disclaimer: The views and opinions expressed in this article are those of the author and do not necessarily reflect the views of YourStory.)