Technical Arbitrage Model
In 2018 we started using static analysis software to identify underperforming cryptocurrencies, running predictions on the technical debt and development velocity of their code bases. The predictions, produced with the Embold Technologies analyser, were very accurate. Given the total output and its impact on code quality, architecture and technical debt, we thought we had a predictor for arbitrage.
But what actually happened was human, all too human. We did not look into the team dynamics of the open-source communities behind these cryptocurrencies. Tracking the number of clones and the migration of developers to such "hard forks", and tracking how contributors' focus drifted away from the key issues in the currency, would have been a good predictor of team coherence and commitment. By tracking contributor performance on other development projects (quality KLOC output as well as commitment over time), we could likely have predicted the death of the teams behind some cryptocurrencies much earlier and filtered out the almost certain losers of the crypto winter.
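As a rough sketch of what such team-coherence tracking could look like, the toy metrics below score how concentrated contributor effort is on a project's key areas and how well contributors are retained quarter over quarter. The field names, sample commits and quarters are hypothetical illustrations, not data from any real project.

```python
# Toy "team coherence" signals from a project's commit history.
# All field names and sample data are hypothetical.
from collections import Counter

def focus_concentration(commits):
    """Herfindahl-style index of how concentrated contributor effort
    is across work areas (1.0 = all effort on a single area)."""
    areas = Counter(c["area"] for c in commits)
    total = sum(areas.values())
    return sum((n / total) ** 2 for n in areas.values())

def retention(active_by_quarter):
    """Average fraction of one quarter's contributors still active
    in the following quarter."""
    rates = []
    for prev, curr in zip(active_by_quarter, active_by_quarter[1:]):
        if prev:
            rates.append(len(set(prev) & set(curr)) / len(prev))
    return sum(rates) / len(rates) if rates else 0.0

# Hypothetical example: a team drifting away from the consensus layer
# and shrinking quarter over quarter.
commits = [
    {"author": "alice", "area": "consensus"},
    {"author": "bob", "area": "wallet-ui"},
    {"author": "carol", "area": "wallet-ui"},
    {"author": "dave", "area": "docs"},
]
quarters = [["alice", "bob", "carol"], ["alice", "bob"], ["alice"]]

print(round(focus_concentration(commits), 3))  # 0.375 (effort is scattered)
print(round(retention(quarters), 3))           # 0.583 (team is bleeding out)
```

Low concentration on the core area combined with falling retention is exactly the "team death" signal the paragraph above argues we missed.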
So our arbitrage play, based on technology rather than marketing hype, rested on an insufficient data set.
Tech Arbitrage in Crypto Is Over
The crypto arbitrage of the first wave is over. I think everyone is moving towards technology that is well backed by partnerships, funding and professional developers with a clear stake in the success of the platform. The play is now about predicting the ability of the stakeholders to build demand for the platforms and pull in liquidity. Technical relevance has become more a matter of the consortium behind the project and can only help to assess whether the consortium is on the wrong track. But that is less likely, given that too many professional careers are at stake and the community is still active and innovator-driven. There will likely be more late-adopter failures from consortiums driven to innovate into the blockchain while relying on external "experts" to make key architectural decisions. Moral hazard and information asymmetry are more likely to be the issue. If this is avoided by betting on system integrators and OEMs to provide ready-made solutions, we will most likely only see issues from the talent-sourcing capabilities of the OEM and the negative impact of generic, refactored code that does not fit the use case.
Relevance to Enterprise Architectural Management remains
But the topic itself remains. Critical open-source components folding in execution, or the key architects and leaders of a project leaving, can wreak havoc on the enterprise architecture management of open-source stacks. This raises the cost of switching elements in the enterprise architecture stack and will likely create some form of architectural debt.
Thinking of Architectural Debt
Enterprise architectural debt is produced when a company needs to re-calibrate its enterprise architecture stack under resource constraints. This has traditionally led to what we call "legacy systems": systems that are too hard to replace, but which are clearly out of date.
Just think of COBOL, or the process architectures of financial giants. This form of debt is the exact reason we see disruption today. All kinds of disruption are ultimately possible because of sunk costs and what I referred to as decision debt. Decision debt accrues under margin pressure and cost-restructuring restrictions, which traditionally originate in labor laws and are now also found in public review systems such as Glassdoor. Any tool of reporting to external stakeholders that fails to create a differentiating picture of the relevance of certain actions will eventually create decision debt and build the foundation for disruption.
Coming back to open-source: the open-source projects that survive have well-structured long-term visions behind them, and the developers are actively managed. But even more important is the development of communities.
Now how do you use community output to understand community debt and community-based arbitrage?
First, the relationship between such communities and innovation is critical. Look at a development language such as Python or Erlang. If a company adopts such a language in its IT stack, it has to consider the life cycle of the technology first, just from an enterprise architecture point of view. A project "hard forking" (as Python did between 2.7 and 3.0) or simply dying (as most alt-coins did in the crypto winter) is the first concern. What we learned from the first crypto wave was how the market prices the belief in the longevity of a technology. The money pouring in was a direct measure of the expectations placed on a technology and directly drove its adoption by communities.
The size of the community ultimately determines the available talent pool, which drives both the ability of companies to find talent at all and their ability to find high-calibre talent. High-calibre talent is required to make suitable use of the technology asset. 100 developers who hardly know "Verge Coin" will not generate a lot of return or understand how to use it for a business case. The same is true for development languages and open-source frameworks.
Now to arbitrage. If you look at the technical debt of an open-source project, you can also see what needs fixing. If the things that need fixing are not fixed, this is the result of one or more of the following factors:
(a) lack of talent
(b) lack of architectural oversight / product design
(c) lack of interest in the particular problem
The interesting part is (c). If the current trend among top developers is to upskill on A.I. and related data structures, such features are likely to be prioritized. In a period when parallelism or low latency was the hot topic, as during the surge in big data technology, fixes in low-level, high-performance GPU computation and threading were more likely to attract attention. The only way to understand what is going on with a technology is to understand what is hot and who the top-calibre talents are. In such a scenario, top engineers fix issues in their favorite domain in a project-agnostic manner. An open-source architect then has to decide whether or not to incorporate such a trend into the product design. If not, the question becomes one of motivating the team to work on less interesting problems.
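A toy heuristic for spotting factor (c) could measure how many long-stale issues fall outside the domains contributors currently find interesting. The labels, staleness threshold and sample issues below are illustrative assumptions, not a validated methodology.

```python
# Sketch: is issue neglect driven by lack of interest (factor c)?
# Labels, thresholds and sample data are hypothetical.

HOT_TOPICS = {"ai", "ml-inference"}  # what top talent is upskilling on
STALE_DAYS = 180

def neglect_ratio(issues):
    """Share of stale issues that carry no 'hot' label.
    A high ratio suggests lack of interest rather than lack of talent."""
    stale = [i for i in issues if i["open_days"] > STALE_DAYS]
    if not stale:
        return 0.0
    cold = [i for i in stale if not (set(i["labels"]) & HOT_TOPICS)]
    return len(cold) / len(stale)

issues = [
    {"labels": ["ai"], "open_days": 30},
    {"labels": ["threading"], "open_days": 400},
    {"labels": ["gpu", "ai"], "open_days": 200},
    {"labels": ["docs"], "open_days": 365},
]
print(neglect_ratio(issues))  # 2 of the 3 stale issues are off-trend
```

Distinguishing (c) from (a) and (b) would still need the contributor-level data discussed earlier; this ratio only flags that the unglamorous work is the work being skipped.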
Things really get interesting when the contributor crowd as a whole is fixing problems that are neither "hot" nor in line with the drivers for developing the next iteration of the product while keeping technical debt in check. In the case of crypto, we saw developers arguing about theoretical concepts of the core blockchain architecture to decide on features, while arbitrarily fixing bugs that were not ultimately driving the success of the project. This creates an overhead of work for the better developers who try to own and operate the project.
So the technical arbitrage looks a bit like this. You look at the quantifiable issues in a software project and the activities of the contributors. If contributor activity drifts toward "hot trends" and the architecture and roadmap reflect this, that might be a sign of health. If the leadership team is also able to keep the technical debt under control, this might indicate a longer life cycle. If the community is growing on top of that, the project can be a good IT asset in the enterprise architecture stack. And if a company carefully selects such a technology as the right one for making an innovative leap, this might serve as a measure of whether the company is managing its innovation correctly. The technical arbitrage rests in a company's ability to drive its innovation and sustain long-life-cycle technology assets in its portfolio at decreasing cost of capital, thanks to a growing community feeding a larger pool of talent. This might not affect the share price of a diversified multinational, but it might move the needle for a young company in a highly competitive space.
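The three health signals named in this argument (roadmap alignment, technical-debt control, community growth) could be folded into a single comparable score. The weights and input values below are illustrative assumptions, not calibrated figures.

```python
# Sketch: a composite project-health score in [0, 1] from three signals.
# Weights and inputs are illustrative, not calibrated values.

def project_health(alignment, debt_control, community_growth,
                   weights=(0.4, 0.35, 0.25)):
    """alignment: contributor activity vs. roadmap, in [0, 1]
    debt_control: 1.0 if technical debt is flat or shrinking, toward 0 if growing
    community_growth: normalized contributor growth rate, in [0, 1]"""
    signals = (alignment, debt_control, community_growth)
    return sum(w * s for w, s in zip(weights, signals))

# Hypothetical comparison of two open-source candidates for the stack.
healthy = project_health(alignment=0.8, debt_control=0.9, community_growth=0.7)
drifting = project_health(alignment=0.3, debt_control=0.4, community_growth=0.2)
print(round(healthy, 2), round(drifting, 2))
```

The point of such a score is ranking candidate technologies against each other over time, not producing an absolute valuation; the weighting itself is a judgment call each enterprise architect would make differently.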