Given their reliance on algorithmic, systematically programmed investment strategies, quant funds have a reputation as the most technologically innovative of all asset managers. They cover a wide range of thematic investment styles, but unlike traditional managers, they deploy the industry’s most groundbreaking technology. They must process large datasets and execute trades at speeds that are insurmountable for human labor alone, so a reliance on technology has become their trademark. Moreover, unlike traditional asset managers, who use technology predominantly for auxiliary operations rather than alpha-generating activities, quant funds weave it into every facet of their operations.
This can create unforeseen disadvantages for quant managers. The intellectual property at the heart of a quant fund breeds hesitancy and mistrust toward third-party service providers, so quant funds have traditionally preferred to keep as much of their operations in-house as possible. That choice diverts attention from alpha-generating activities to general operations that other asset managers have long since outsourced. More recently, the proven success of the outsourced model for operations, especially for IT at traditional asset managers, has caught the attention of quant funds.
There are certainly advantages to outsourcing, especially in tech-reliant environments.
Where RESILIENCY is a concern
Infrastructure (both hardware and software) is fundamental to quant fund operations. Unfortunately, human error, natural disasters, hardware failure and a myriad of other obstacles will at some point lead to downtime; it’s a matter of “when,” not “if,” it will happen. In asset management this is especially costly in terms of lost opportunity, above all for a firm that relies on systematized algorithmic sequences. If one link in the tech chain goes down, the quant fund becomes vulnerable. In such an event, an in-house team would scramble to bring operational systems back online, but by then the downtime would already have made a costly impact. This is where Managed Service Providers (MSPs) offer an advantage.
Unlike in-house teams, which are limited in available resources, products, and staff, MSPs have a more fluid resourcing approach and are “always on.” MSPs often run multiple overlapping tools that monitor for infrastructure weaknesses and potential downtime. Their capabilities include rerouting computing infrastructure to avoid downtime altogether, monitoring and addressing threats in real time, and running redundant systems to ensure continuity of operations. In-house IT teams cannot compete with this: even by consistently upgrading to the latest and greatest monitoring solutions, they remain in a trailing position, lacking the MSPs’ economies of scale in licensing agreements and their experience running resiliency programs. Having an MSP partner therefore keeps a quant firm ahead of potential threats to business operations.
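The redundancy-and-rerouting idea above can be illustrated with a minimal sketch: order your endpoints primary-first, health-check each one, and route traffic to the first healthy candidate. The endpoint names and the dictionary-based health check below are illustrative assumptions, not any specific MSP's tooling.

```python
# Minimal sketch of health-check-based failover across redundant endpoints.
# Endpoint names and the health-check mechanism are hypothetical placeholders.

def first_healthy(endpoints, health_check):
    """Return the first endpoint that passes its health check.

    endpoints    -- ordered list, primary first, backups after
    health_check -- callable returning a truthy value if the endpoint is up
    """
    for endpoint in endpoints:
        if health_check(endpoint):
            return endpoint
    raise RuntimeError("all endpoints are down")

# Example: simulated health state instead of real network probes.
status = {"primary": False, "backup-1": True, "backup-2": True}
active = first_healthy(["primary", "backup-1", "backup-2"], status.get)
# With the primary marked down, traffic is rerouted to "backup-1".
```

In production this check would be a real probe (ping, HTTP health endpoint, heartbeat) run continuously, which is exactly the kind of always-on monitoring loop an MSP operates at scale.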
Where EFFICIENCY is a concern
After resiliency, infrastructure optimization is crucial. Here it makes sense to pursue improvement across all IT operations: migrating to cloud-based infrastructure, integrating networks and systems, evaluating the latest IT tools and more. The problem lies in execution. Given the variety of vendors and system frameworks, not all solutions are equally compatible, and the overhead of proper R&D across the relevant solution categories is difficult to manage. Integrating a new solution often involves substantial research and back-testing on top of the installation and migration work. In-house IT teams will almost certainly lack the information needed to facilitate a smooth transition, because they execute each solution only once, at the very moment of its installation.
Because they have likely worked with a similar product for another client, MSPs are better positioned to have the experience needed to integrate the best available solutions on the market. This is not to imply that they install the same solutions for every client; quite the opposite. Given their knowledge of market tools, they are more likely to find the better solution for a quant manager’s individual situation and integrate it into the firm’s unique framework without compromising legacy systems. By keeping these processes in-house, fund managers expose themselves to stalled initiatives and suboptimal integrations, and they risk ending up with a solution that is not the best fit.
Where INNOVATION is a concern
Finally, there is the question of innovation: how can a quant fund manager push it further? Without doubt, quant managers have done this for decades. Compared to more traditional managers, quant funds are ahead of the game. But compared to the IT industry at large, they won’t have deployed many of the readily available solutions used by other industries. As an example, take Netflix or YouTube. Both companies simultaneously spot-test contrasting algorithms and APIs in real time on their live data (i.e., users). This allows them to compare the effectiveness of one solution against another and bring the optimal approach to market more quickly. Repeat this cycle a few times and the innovation process accelerates.
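The comparison loop described above can be sketched in a few lines: run two candidate algorithms against the same event stream, score each against the desired outcome, and promote the winner. The variants and the success criterion here are hypothetical placeholders, not Netflix's or YouTube's actual methodology.

```python
# Minimal sketch of comparing two algorithm variants on the same event stream.
# variant_a, variant_b, and the target function are illustrative assumptions.

def variant_a(x):
    return x * 2           # placeholder "algorithm A"

def variant_b(x):
    return x + 10          # placeholder "algorithm B"

def ab_compare(events, target, a, b):
    """Score each variant by how often it matches the target outcome."""
    hits_a = sum(1 for x in events if a(x) == target(x))
    hits_b = sum(1 for x in events if b(x) == target(x))
    return {"A": hits_a / len(events), "B": hits_b / len(events)}

# Simulated live traffic: the "true" desired outcome doubles the input.
events = list(range(100))
scores = ab_compare(events, lambda x: x * 2, variant_a, variant_b)
winner = max(scores, key=scores.get)   # promote the better-scoring variant
```

Each iteration of this loop retires the weaker variant and pits the winner against a new challenger, which is how the repeated testing cycle compounds into faster innovation.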
Given their manufacturer access and industry relationships, MSPs often gain access to nascent applications before they become standard. Combined with expansive DevServices teams, testing capabilities, and always-on support, this gives quant funds that work with such a partner a competitive advantage over peers. External DevServices teams, backed by better hardware and software infrastructure, can rapidly build bespoke architecture that sorts, processes and organizes data without the risks associated with emergent tech.
It’s clear: bringing in an outside expert can substantially bolster a quant firm’s operations. One example is a firm we partnered with that was struggling to decrease the time needed to test new workflows in a simulated architecture. On average, the process from ideation to testing to rollout spanned a week. We at RFA created a new DevOps process that allowed the quant fund to deploy new developer and testing environments on demand, cutting the time to create, test and implement new programs from a week to half a day. Leveraging our existing developer teams, legacy infrastructure and knowledge of emerging ecosystems allowed us to expedite the manager’s processes beyond their in-house capabilities, giving them a competitive advantage in facilitating new alpha-generating activities faster.