One of the struggles with blockchain today is that as the system grows, adding a new piece of information requires agreement, or consent, from an increasing number of participants. This additional “friction” slows transactions and thus the performance of the system. When a blockchain has three participants, waiting for three verifications can be trivial, but when it becomes 3,000 or 30,000, that adds up to a lot of friction.
At the heart of this is the question of whether an item can be trusted, and how far you can trust the network or group of participants to verify its authenticity.
Many systems assume the extreme – that everyone must approve, or consent to, the trustworthiness of an item before it is added to the blockchain. That does not make sense in many areas, in part because the risk of the item being “bad” is often not very high.
Some have suggested that Byzantine Fault Tolerance is an effective method for managing this friction. The Byzantine Fault Tolerance model carries the underlying assumption that you have to design for bad actors – people who intend to commit fraud or forgery – and in many instances that assumption is too extreme. Byzantine Fault Tolerance goes back to the Byzantine Generals’ Problem: many generals must collectively decide whether or not to attack, and the loyal generals need complete agreement on the decision. But when there are traitorous generals, they can mix up the vote – persuading one portion of the generals to attack while another portion holds back – which creates enormous risk for the operation.
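The vote-splitting mechanism can be shown with a toy sketch (this is an illustration of the problem, not a real BFT protocol; the general names and the simple-majority rule are my own assumptions):

```python
def decide(votes):
    """A loyal general attacks only on a strict majority of 'attack' votes."""
    return "attack" if votes.count("attack") > len(votes) / 2 else "retreat"

# Four generals: G1 and G2 are loyal and vote attack; G3 is loyal and
# votes retreat; T is a traitor who tells the attackers "attack" but
# tells G3 "retreat". Each general tallies the votes it received.
votes_seen_by_G1 = ["attack", "attack", "retreat", "attack"]   # T said "attack"
votes_seen_by_G3 = ["attack", "attack", "retreat", "retreat"]  # T said "retreat"

print(decide(votes_seen_by_G1))  # "attack"  -> G1 and G2 charge
print(decide(votes_seen_by_G3))  # "retreat" -> G3 stays put
```

With a single traitor sending conflicting messages, two loyal generals attack while the third holds back, even though every loyal general followed the same rule honestly – which is why BFT protocols require extra rounds of message exchange rather than one simple vote.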
AuthenticID is a company that offers a set of configurable rules that let an organization map the risk level of a transaction to the level of friction it imposes (in their case identity proofing, but the model can and should be applied to any situation).
I don’t think there’s any one solution to this issue, but as blockchain becomes an increasingly hot topic, having a set of rules or standards that indicate the level of rigor needed to achieve a given level of trust makes sense, so organizations don’t add too much friction to their respective models.
No matter what an organization is doing, whether it is a global software company or a small non-profit, it needs to maintain some level of momentum, or velocity, to function. If it can set up a simple matrix that scores trust on a 1–10 scale and risk on a similar 1–10 scale, it ought to be able to define a model where a transaction gets a green light whenever Trust divided by Risk is 1.0 or greater.

If the velocity of the organization is too slow, the risk scoring is probably too stringent and should be eased up. If the velocity is too high, look at how many problems are occurring: if failures are happening (things got through the system that should not have), it probably makes sense to revisit the trust scoring. On the other hand, if the failure rate is good, that may be an indication that it is time to plan for growth. This could be captured as a simple formula, Trust/Risk = Velocity, and applied to the growth strategy of the organization.
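The gating rule described above can be sketched as a small function (the 1–10 scales and the 1.0 threshold come from the text; the function name and validation are my own assumptions):

```python
def transaction_gate(trust: int, risk: int) -> bool:
    """Green-light a transaction when trust outweighs risk.

    trust and risk are scores on a 1-10 scale, as proposed in the
    text. The transaction passes when trust / risk >= 1.0, which
    for positive scores simply means trust >= risk.
    """
    if not (1 <= trust <= 10 and 1 <= risk <= 10):
        raise ValueError("scores must be on a 1-10 scale")
    return trust / risk >= 1.0

print(transaction_gate(7, 5))  # True  -> proceed
print(transaction_gate(3, 8))  # False -> add friction / review
```

An organization tuning this model would adjust how it assigns the two scores, not the threshold itself: loosen the risk scoring if velocity is too slow, and tighten the trust scoring if too many bad items pass.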