
What Is The Blockchain?


The word blockchain is on everyone’s lips right now. Although you might not really understand what “blockchain” means, chances are you’ve heard people talk about it (a lot).

Some people believe it will change our world for the better, or that it will replace the banks we currently know – or make similarly extravagant claims.

Although the blockchain came to the general public’s attention when the crypto market absolutely exploded, the majority of the people still don’t understand how it works, what it’s used for or what all the fuss is about.

In this guide, I’ll answer all of your questions about the blockchain, nodes, the ledger(s) and blockchain security. The goal is to explain the blockchain in plain English, so anyone can understand it.

Note: In this guide, I’ll be explaining how the concept of the blockchain works – I won’t be explaining how the blockchain technology is implemented in detail, as that’s beyond the scope of this article.

Let’s get started.

What Is the Concept of Blockchain?

First, it’s important to understand two different, basic terms.

1. Bitcoin (a digital currency)

It’s important to understand that Bitcoin is not a blockchain. This is something that has many people confused. Bitcoin is merely a digital currency based on the blockchain technology.

2. Blockchain

Blockchain is the technology that enables the movement of digital assets – Bitcoin, for example – from one individual to another.

So, what is the concept of the blockchain, exactly?

To get a better understanding, let’s look at an example of an existing problem that the blockchain attempts to solve. I’m talking about transferring money.

Imagine I (Bill) want to send money to my friend Jenny. Traditionally, this is done through a trusted third party (bank or credit card company) between us.

For the sake of the example, let’s say that I live in New York, while Jenny lives in London.

When I want to send some money to Jenny, I ask the third party to send it to her. In turn, the third party will identify Jenny and her bank account.

When that’s completed, the third party will transfer the money to Jenny’s personal account, while also taking a small transaction fee.

This process typically takes 3-5 days. Some banks process such requests faster than others, but it takes some time.

Now, the entire concept behind the blockchain technology focuses on the elimination of that trusted third party, the middle man.

In addition, the blockchain aims to complete this process much faster than the current system – almost instantly!

Finally, the blockchain attempts to do this process at a much lower rate (very low transaction fees).

Let’s take a look at how the blockchain technology addresses this money transfer problem. There are several principles/concepts involved.

Ledgers

On Investopedia, a ledger is defined as:

“A company’s set of numbered accounts for its accounting records. The ledger provides a complete record of financial transactions over the life of the company.

The ledger holds account information that is needed to prepare financial statements and includes accounts for assets, liabilities, owners’ equity, revenues and expenses.”

Simply put, the ledger is where a “chain” of transactions is linked together, creating a record of financial transactions.

The term “Open Ledger” means that anyone can join the network of the ledger, and that all transactions are stored in a single, centralized ledger (storage).

A “Distributed Ledger” is essentially the same as an “Open Ledger” in that both are accessible to anyone. The main difference here is that a distributed ledger is decentralized, meaning that everyone in the network owns a “copy” of the ledger on a node.

A “node” is another word for a device that every participant in the network possesses, which contains a copy of the open ledger.

You can use the term “node” as shorthand for a participant in the chain of the distributed ledger. So, the examples below contain 4 “nodes.”

Open Ledger

In order to give you a crystal-clear understanding of this concept, I’ll first illustrate it with an example.

For instance, imagine that there’s a network of 4 individuals: Bill, Jenny, Mark and Justin. Every individual in this network wants to send and receive money to/from one another.

At the original formation of this network, I (Bill) have $20. The concept of an open ledger, and how it applies in this situation, works as follows.

Imagine that there are a couple of transactions made between these 4 individuals.

  1. Bill transfers $10 to Jenny
  2. Jenny transfers $5 to Justin
  3. Justin transfers $3 to Mark

Every transaction is registered and linked to the already-existing transactions in the open ledger. In this example, the chain of the open ledger contains 4 links – the genesis record of my $20 plus the 3 transfers above – which show which transactions were made and to whom.

[Image: blockchain invalid transaction]

The chain of transactions in an open ledger is open to the public, for anyone to access and see. That means that everyone in the network can identify where the money is and how much money every individual has.

In addition, every individual in the network of the open ledger can decide whether a transaction in the chain is valid or invalid.

Let’s go back to the example above, where I owned $20 at the genesis of the network (the start of the chain) and I transferred $10 to Jenny (so, I have $10 left). But now, I attempt to send another $15 to Mark.

[Image: blockchain Open Centralized Ledger]

As a result, everyone on the network is able to identify this transaction as an invalid transaction, since I’m short $5. Therefore, this transaction won’t be added to the chain of transactions in the open ledger.
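
To make that concrete, here is a minimal sketch (names and amounts are the hypothetical ones from the example) of how any node could replay the ledger and reject my invalid transfer:

```ts
// A sketch of how any participant could check a proposed transaction
// against the open ledger. Names and amounts follow the example above;
// this is an illustration, not how any real blockchain stores balances.
type Transaction = { from: string; to: string; amount: number };

// The genesis allocation plus the three transfers from the example.
const ledger: Transaction[] = [
  { from: "genesis", to: "Bill", amount: 20 },
  { from: "Bill", to: "Jenny", amount: 10 },
  { from: "Jenny", to: "Justin", amount: 5 },
  { from: "Justin", to: "Mark", amount: 3 },
];

// Replay the whole chain of transactions to work out a balance.
function balanceOf(person: string): number {
  return ledger.reduce((total, tx) => {
    if (tx.to === person) return total + tx.amount;
    if (tx.from === person) return total - tx.amount;
    return total;
  }, 0);
}

// Every node runs the same check, so everyone reaches the same verdict.
function isValid(tx: Transaction): boolean {
  return balanceOf(tx.from) >= tx.amount;
}

console.log(isValid({ from: "Bill", to: "Mark", amount: 15 })); // false – Bill only has $10
```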

Distributed Ledger

One of the most important goals of the blockchain technology is to create a decentralized solution. The concept of a decentralized chain of connections is called the “distributed ledger.”

In other words, every individual (node) in the network will receive a copy of the ledger. That means that I will hold a copy of the ledger in my node, as will Jenny, Justin and Mark.

When the ledger is distributed across the network, every person in the network holds the chain of events (transactions) in their node, which means that the centralized “open” ledger is replaced by the decentralized solution.

[Image: blockchain no central ledger]

At this point, the goal to eliminate the previously-mentioned centralized “trusted third party” has been accomplished.

However, by doing so, a new problem was created, because we now have 4 different copies of the ledger in the network of transactions. It’s crucial that every individual (participant) in the network owns a properly synchronized version, so every node contains the same copy of the ledger.

To solve this issue, let’s continue by looking at another principle of blockchain technology.

Mining

So, you now know that the distributed ledger is an open network which is accessible to everyone. The copy of the ledger is distributed across all the nodes (participants) within the network of the ledger.

But how are new transactions validated within a network?

For the sake of the example, let’s pretend that I want to move $10 to Jenny. When I make this transaction, I will automatically share and publish my “intended” transaction to the network.

As a result, every participant in the network will be notified of this intended transaction and will see that I want to move $10 to Jenny. Because my transaction has not been validated yet, it’s still an unvalidated transaction, which means that it’s not recorded in the ledger yet.

In order to get a transaction validated so that it’s recorded in the ledger, another principle is needed: “mining.” The principle of mining means solving calculations.

Miners (the nodes who do the mining) – in Bitcoin, for example – are special nodes. Every miner’s node holds a copy of the ledger (because it’s public).

For instance, imagine that both Mark and Justin are miners in this particular ledger. Miners execute a very important task. All the miners in a ledger compete with each other, so Mark and Justin will be miner competitors here.

Mark and Justin (the miners) will compete to become the first to take my unvalidated transaction to Jenny, validate the transaction and then add it into the chain of the ledger.

The miner that does this the fastest (first) receives a financial reward, like a prize. In this case, because it was an example of Bitcoin miners, the financial reward will be Bitcoin.

The concept of how bitcoins are generated is extremely complex, and beyond the scope of this article. All that you need to know is that, very simply, the financial reward of bitcoins is generated through the computational process of validating the transaction, not through Jenny or Bill paying the miners bitcoins.

For more information about the generation of bitcoins, visit AnythingCrypto’s website.

So, what are the rules for Mark and Justin to beat the competition and get a financial reward?

The miner has to do 2 things in order to take the intended transaction and put it into the ledger.

Step #1 – Validate the new transaction

Validating the new transaction is relatively simple because the information in the ledger is openly accessible, so the miner can instantly calculate whether I have sufficient funds in order to complete the transfer to Jenny.

Step #2 – Find a special “key”

In order to lock the new transaction in the chain of the ledger, the miner has to find a special key that will enable this process. The key itself is random.

Therefore, Mark and Justin need to use computational power in order to search for this random key, which takes time. By doing so, the miner will use the mathematical abilities of a computer to repeatedly guess new keys until the computer finds the first key that matches the random puzzle.
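
As a rough sketch of that search, assuming a simplified hash puzzle in place of Bitcoin’s real difficulty target:

```ts
import { createHash } from "crypto";

// A toy version of the key search: keep guessing a number (a nonce)
// until the hash of the transaction data plus the nonce satisfies the
// puzzle – here, "starts with four zeros". Real Bitcoin difficulty
// targets work differently, but the trial-and-error idea is the same.
function mine(txData: string, difficulty = 4): { nonce: number; hash: string } {
  const target = "0".repeat(difficulty);
  let nonce = 0;
  for (;;) {
    const hash = createHash("sha256").update(txData + nonce).digest("hex");
    if (hash.startsWith(target)) return { nonce, hash }; // puzzle solved
    nonce++; // wrong guess – try the next one
  }
}

console.log(mine("Bill pays Jenny $10"));
```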

Again, if Mark is able to do that first, he will get the financial reward.

When this process is completed, Mark has to synchronize the distributed ledger across the entire network.

How Are “Ledgers” Synchronized Across the Network?

So, let’s continue with the example that we created in the previous section.

The question you might have now is:

How could each node be synchronized to have the same record of transactions?

This is an essential principle of the blockchain technology, because this will solve the problem of how to have the same copy of the ledger on every node in the network.

Mark was able to solve the puzzle first and was therefore able to add the transaction to the chain in his ledger. Now, Mark has to publish the solution that he found to the entire network.

That means that he’s telling Justin, Jenny and me that he solved the puzzle and validated the transaction (the transaction that I wanted to send to Jenny).

When Mark updates us about that, he will also provide the key (the solution he found) that enables the rest of us (participants on this network) to take the transaction and add it into our own ledgers.

So, Justin (the other miner) will add this transaction to his ledger, because there’s no point in trying to solve this transaction anymore, since Mark has already solved it and claimed the financial reward. Justin will search for another transaction to work on (and solve) in order to get a financial reward for that one.

As a result, the transaction will also be added to the chain of transactions in Jenny’s and my ledger. Jenny will receive the $10 that I initially proposed to transfer to her, because everyone in the network has agreed that this transaction is valid.

At this point, all of the distributed ledgers in the network are updated and contain the exact same chain of transactions.

The “Blocks” in the “Chain”

Since you now have a better understanding of the basic concept of the blockchain, it’s time to dive a little deeper in order to get a better understanding of the “blocks” in the “chain.”

The blockchain is – you guessed it – a chain of blocks.

[Image: block chain graphic]

Each block in the chain contains some specific data.

Data

The type of data stored in a block depends on the type of blockchain. For example, a block in the Bitcoin blockchain stores information such as the number of bitcoins in the block, who sent the bitcoins and who received them.

If the blockchain belongs to another cryptocurrency, like Ethereum, the block contains information about Ethereum transactions instead.

Hash

A hash could look like this:

82e35a613ceba37e9652366234c5dd412ea586147f1e4a41ccde16149238187e

A hash is always unique. It looks like a random string of letters and numbers, but it’s actually calculated from the block’s contents – a digital fingerprint of the information stored in the block.

When a block is created, the unique hash that belongs to the block is then calculated. When something in the block changes – for example, if the number of bitcoins goes down by 2 – the hash will also change.

That means that when the hash changes, it no longer belongs to the same block – effectively, a new block has been created.

Hash of the Previous Block

Every newly-created block also contains the unique hash string of the previous block. That way, all the blocks are connected to each other.

[Image: blockchain blocks]

As you can see in the example, every block is connected to the previous block by stating the hash of that block.

[Image: block chain genesis block]

The first block doesn’t contain a previous hash, simply because there was no block before it. The first block in the chain is called the “Genesis block.”
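
Here is a minimal sketch of that structure, with hypothetical transaction data:

```ts
import { createHash } from "crypto";

// A simplified sketch of a block: its data, the previous block's hash,
// and its own hash computed from both. This illustrates the linking
// idea only – it is not Bitcoin's actual block format.
class Block {
  hash: string;
  constructor(public data: string, public prevHash: string) {
    this.hash = createHash("sha256")
      .update(this.prevHash + this.data)
      .digest("hex");
  }
}

const genesis = new Block("genesis", ""); // no previous hash to refer to
const block2 = new Block("Bill pays Jenny $10", genesis.hash);
const block3 = new Block("Jenny pays Justin $5", block2.hash);
const chain = [genesis, block2, block3];
```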

Security & Blockchain

If someone tries to interfere with an already-existing block or change something in the block, the hash will change. That means that all of the following blocks become invalid, because each of them contains the hash of the block as it was before the change.

To make the chain valid again after such a change, the hashes of all the following blocks would have to be calculated again. The sketch below shows how this kind of tampering is detected.
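
Continuing the earlier sketch, a simple validation walk makes such tampering immediately visible:

```ts
// Continuing the sketch above: walk the chain and check that every
// block's stored hash still matches its contents, and that every block
// really points at the hash of the block before it.
function isChainValid(chain: Block[]): boolean {
  for (let i = 1; i < chain.length; i++) {
    const recomputed = createHash("sha256")
      .update(chain[i].prevHash + chain[i].data)
      .digest("hex");
    if (chain[i].hash !== recomputed) return false;            // block was altered
    if (chain[i].prevHash !== chain[i - 1].hash) return false; // link is broken
  }
  return true;
}

block2.data = "Bill pays Jenny $1,000"; // tamper with one block...
console.log(isChainValid(chain));       // ...and validation fails: false
```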

To counter that vulnerability, blockchains use a mechanism called “proof of work,” which slows down the creation of new blocks. The difficulty of the calculations is controlled so that, in Bitcoin, solving the puzzle and creating a new block takes about 10 minutes.

That means that if you were to interfere with or change one block in the chain, you would need to recalculate all the following blocks in the chain, at roughly 10 minutes per block – which is far too time-consuming and expensive.

Another layer of security is the peer-to-peer (P2P) network. The existence of a P2P network ensures that the blockchain is distributed across a large network, instead of being stored by one single entity.

As you learned before, this is an open and public network that anyone can join. When you enter a network, you’ll receive a copy of the blockchain.

When a new block is created (see image below), it will be sent to the node of all the participants in the network.

[Image: blockchain new block]

Then, each node of every participant in the network will check the new block and identify whether it’s a valid block. If it checks out, each node will add the new block to its own blockchain.

If all the different nodes of the participants in the network contain the exact same blockchain, it means there’s a general agreement that this is the official blockchain, and invalid blocks will be rejected rather than added to the blockchain.

When it comes to security, if you wanted to tamper with the blockchain, you would need to change all the blocks within the blockchain, recalculate all of the hashes, redo the proof of work for each block and, on top of that, take control of more than 50% of the P2P network.

If not, your modifications on the blockchain wouldn’t be accepted by everyone else in the network.

Obviously, this would be pretty much impossible to execute, which means that the security of blockchains is incredibly good.

Simplified Summary of the Blockchain Technology

The first thing that we learned is that blockchain and Bitcoin are two different things. Additionally, we’ve learned that blockchain technology is based on a few basic principles.

  1. The distributed ledger is an open network that is public to everyone.
  2. Every participant in the network can validate transactions.
  3. The ledger is distributed and exists in many different nodes on the network (this removes the involvement of a trusted third party).
  4. Miners validate transactions within the ledger by solving calculations.

Miners are special nodes in a network. The task of the miner is to validate “intended” transactions, solve the puzzle and publish/share it to the network so everyone can add the new transaction to the chain in their ledger.

Miners try to solve these mathematical puzzles. This ensures that everyone in the network collectively agrees on the next link in the chain of the official ledger that everyone in the network should use.

We’ve also learned that every block in a blockchain contains 3 types of data.

  1. Data
  2. Hash
  3. Hash of the previous block

Of course, what’s stored in every block depends on the blockchain, but a simple example would be that in the Bitcoin blockchain, every block stores information about bitcoins.

We also learned that every block contains a unique string of random letters and numbers which is called a “hash.” The hash holds information about the block, so once a piece of information in a block changes, so does the hash.

In order to connect the blocks with each other, every block holds the hash string of the previous block. Only the first block, at the “genesis” of the chain, does not contain the hash of a previous block, simply because it was the first block in the chain.

In sum, the implementation of blockchain is incredibly detailed and complex, but my explanation of the concepts and ideas – in plain English – should already answer a lot of your questions.

Useful new ICO metrics for 2018


I’ve been at a few events recently where people talk about the “market cap(italisation)” of utility tokens issued in ICOs, and compare it to the market cap of cryptocurrencies or (even worse) listed companies.  This is truly dreadful and misleading, perhaps sometimes intentionally so.  In this post I introduce two useful metrics for comparing across ICOs: the Reserve ratio, and the Commitment ratio.

For a non-hypey introduction to ICOs please see A gentle introduction to ICOs.

Market cap

Market cap is a term borrowed from traditional finance, and for a company, it means the number of outstanding shares multiplied by the share price of a single share.  It’s the current total value of the company.  It represents the current theoretical “fair” price of the company should all the current owners sell it to buyers who want to buy it as much as the sellers want to sell it (though in reality one side always wants to do it more than the other side, so the price achieved is pretty much never the market cap).

When this term is applied to pure cryptocurrencies (Bitcoin, Ether, etc), it kind of makes sense.  It’s the total value of the cryptocurrency that exists.  In theory, if you wanted to buy all the BTC in the world, and the current owners equally want to sell all the BTC in the world, and they want to sell it to you as much as you want to buy it from them, the market cap is the number of dollars that would change hands for the transaction.  In practice that wouldn’t happen, as the situation won’t arise and there are other complications such as lost or immoveable coins, but you get the idea.  It’s far from perfect, but it kind of makes sense.

Utility tokens

But what about market cap of utility tokens?

Remember the first rule of issuing ICO tokens:  Tokens aren’t securities.  Tokens aren’t securities. Tokens aren’t securities.  Tokens aren’t securities.  Tokens aren’t securities.  Tokens aren’t securities.  Tokens aren’t securities.  Tokens aren’t securities.  Tokens aren’t securities.  Tokens aren’t securities.

And remember the first rule of buying ICO tokens: I’m not expecting profits from the efforts of others.  I’m not expecting profits from the efforts of others.  I’m not expecting profits from the efforts of others.  I’m not expecting profits from the efforts of others.  I’m not expecting profits from the efforts of others.  I’m not expecting profits from the efforts of others.

 

[Image: not_a_duck]

ICO tokens are not securities

Is the “total value of utility tokens issued” a useful metric?  Does it tell us anything about the health of the ICO issuer?  What does it tell us, really?

Tokens vs issuers

Firstly we have to differentiate between the tokens and the issuer.  The (badly-named) “market cap” of the tokens is the total value of tokens created/issued.  The (actual) market cap of the issuer is the issuer’s share price multiplied by the total outstanding shares it has issued – it’s got nothing to do with the tokens.  Make sense?  …But that’s not quite how ICOs work.  Usually the issuer of the tokens is a foundation, set up in Zug, Switzerland, or in Singapore.  A benefit of having a foundation issue tokens is that a foundation isn’t a company and doesn’t have a shareholder structure, so it’s harder to say that the tokens are shares (remember the first rule of ICOs): “It’s a foundation, how could the tokens possibly represent shares?”.

So, there’s another entity – the operating company.  This is the for-profit company that has shareholders (founders, VCs, etc) and receives grants (payments) from the foundation in return for meeting certain goals, usually milestones in the development of the product or service, which is usually software based.

So to summarise, the foundation (with no shareholders) issues tokens (not shares) in return for cryptocurrency (usually BTC or ETH), then sometimes sells that cryptocurrency for fiat to get some financial stability, and pays (grants) the cryptocurrency or fiat to the operating company who pays staff to develop software that is later used by token holders.  Token holders (not investors or shareholders) redeem their tokens for access to unspecified amounts of product or service, if they haven’t already sold their tokens first to other people who presumably want the unspecified amounts of product or service more than the original holder.

The meaningless market cap

So, token market cap… what does it mean?

Firstly, anyone can generate however many tokens they want (say, 1 trillion tokens) and then sell 1 token to a complicit friend for some nominal amount (say, $1) and create $1 trillion of market cap.  You can own zillions of market cap in this way, if you want.  Journalists and news reporters love this because they get to say big numbers and compare them to actual market caps of real legitimate companies, like Uber and Tesla(!).

So already we can see that market cap is an easily manipulated figure.  So some of the more ethical (?) cryptocurrency lists will only include ICOs whose tokens are trading on legitimate (?) exchanges, and with a minimum threshold of daily average volume (say $10m of trading per day).  So at worst it’ll cost you a few thousand dollars a day in exchange fees to maintain the illusion of a high market cap.

Utility token market caps are also a terrible, almost meaningless, way to compare ICOs.  Can we do better?

How can we do better?

Remember, ratios are good ways to compare across companies or projects, to see if any are “good value” or “out of whack” with others.  Market cap isn’t a particularly good number to compare, so what can we look at?

Financially, it’s helpful to think about who holds what immediately after the ICO / Token Generation Event / Voluntary Donation Scheme:

  • The public (or self-declared accredited investors) has tokens that they have been promised will be redeemable for unquantified products or services some time in the future.
  • The Foundation has money in the form of cryptocurrency – this is proceeds (donations!) from the token sale.  This can be converted into fiat immediately or later.
  • The Foundation usually holds back some tokens to incentivise the community or do something with later.
  • The Founders usually give themselves and sometimes advisors some tokens for free (presumably the founders and advisors have a great future need for unquantified amounts of the product or service).

So the tokens in circulation are a kind of liability to the company.  The company has promised to deliver to the token holders some unquantified amount of goods or services in the future.  The more tokens out there, the more value the company is on the hook to deliver.  Surely a high token value then is a bad thing – isn’t it a burden on the company?  Isn’t a company that is on the hook to deliver $100m of value more precarious than a company on the hook to deliver $10m of value?  Well, not really – just like other crowdfunding, the company hopes to make a profit by selling the product for more money than it cost to create the product.  So the company pre-sells the service for $25 and the service only costs the company $10 to provide.  So a bigger number here in general is better.

But some tokens are issued but not in circulation, eg those held back by the foundation.  They aren’t waiting to be redeemed; they are assets to the project (the project being, collectively, the company and the foundation).  So there are a couple of metrics it might be helpful to look at.

The Reserve ratio

The first useful metric might be the ratio of tokens reserved for product development (which are assets to the project) vs tokens that are out there in circulation (which are kind of liabilities to the project).  Tokens held by founders or angels should be considered on the liability side.  We can call this the reserve ratio and it provides the project some level of insulation to the fluctuating market price of their token.

The Reserve ratio

No. tokens retained by project : No. tokens in circulation

This tells us how insulated the project is with regard to price changes of their tokens.  A higher number indicates that the project is more insulated against price moves.  If the value of the tokens rise suddenly, a project with more tokens held back will have a larger warchest to deliver the greater value demanded by the external customers.

This metric should be known when the ICO closes and shouldn’t fluctuate with token price.  It might change over time as the project releases more tokens or buys back tokens in circulation.
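
As a sketch, the calculation is just a division; the token counts below are hypothetical:

```ts
// A sketch of the Reserve ratio as defined above. Token counts are
// hypothetical: a project that created 100m tokens, kept 40m back and
// sold 60m into circulation.
function reserveRatio(tokensRetained: number, tokensInCirculation: number): number {
  return tokensRetained / tokensInCirculation;
}

console.log(reserveRatio(40_000_000, 60_000_000).toFixed(2)); // "0.67"
```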

The Commitment ratio

A second useful metric is related to how much value the project has committed to deliver vs how much capital the project has with which to deliver it.  This tells us how much value each dollar of working capital has to create to satisfy the token holders.  A lower number can be regarded as safer as there is more working capital available to deliver less commitment to the token holders.

The number of tokens in circulation and amount of funds raised/donated is usually disclosed when the ICO closes and sometimes pre-determined in the whitepaper/plan, though we can never tell really who owns the tokens that were sold in the presales and public sales – potentially some of it is bought by founders or the company itself as part of the hype cycle in order to create the illusion of success.

There are unhedged and fully hedged versions of this metric.

Commitment ratio

Token redemption value : Capital raised (hedged / unhedged)

Redemption value is the current market value in USD of the tokens in circulation (that is, excluding tokens held back by the project, founders etc).

Capital raised (hedged) assumes that the cryptocurrency raised in the ICO was completely sold into fiat immediately, ie it is the USD value of the cryptocurrency raised at the time the ICO completed.  (this number does not change)

Capital raised (unhedged) assumes that the cryptocurrency raised in the ICO was not hedged at all, ie it is the current USD market value of the cryptocurrency that was raised in the ICO. (this number changes as the price of BTC and ETH changes)

You can create a 50% hedged metric (or any other ratio) too which is probably closer to how projects manage their capital raised.
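
A sketch of the calculation, with hypothetical figures; the same function serves the hedged and unhedged variants depending on which capital figure you feed in:

```ts
// A sketch of the Commitment ratio as defined above. All figures are
// hypothetical. Pass the ICO-close USD proceeds for the hedged version,
// or the current USD value of the crypto raised for the unhedged one.
function commitmentRatio(
  tokensInCirculation: number,
  tokenPriceUsd: number,
  capitalRaisedUsd: number,
): number {
  const redemptionValue = tokensInCirculation * tokenPriceUsd;
  return redemptionValue / capitalRaisedUsd;
}

// 60m tokens trading at $0.50, against $20m banked at ICO close:
console.log(commitmentRatio(60_000_000, 0.5, 20_000_000)); // 1.5
```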

The Commitment ratio changes over time, as it is linked to the market price of the tokens.  It gives us some idea of how hard the working capital has to work to keep the token holders happy.  We may see commitment ratios emerge for original token holders (the original token price) and current token holders (based on current market price) – they paid different amounts for their tokens so they demand different amounts of value back when redeeming.

Are these any good?

These are not perfect metrics by any means, but they are certainly better than market cap, especially for comparing across projects.  Remember that the value of the tokens in circulation is just the minimum amount of value the company has to deliver to keep token holders happy (and the project is not under any obligation to keep token holders happy!  They are not shareholders!).  And of course the company doesn’t simply shut down once all the token holders have redeemed their tokens.  That would be silly.  I would imagine that the project recycles the tokens and keeps delivering value.  When the service is up and running, if the value of the token is $10, then a customer redeems 1 token to the company for $10 worth of service; the company can then resell that token back on an exchange (and realise the $10 and use it to pay staff), and the token is back in circulation again.

But these metrics provide something more meaningful than just token market cap with which to compare across ICOs.  A friend commented today that it would also be useful to look at the lock-up characteristics of the issued tokens.  What proportion of tokens can be liquidated at short notice, and how does this change over time?  Some projects stipulate that founders have a lock-up period, whereas others don’t, making it easy for a founder to do a quick exit scam.

Who controls the price of a token?

The simple answer might seem to be “the market” or “buyers and sellers”, but this is not helpful or accurate.  Sure, while initially the quantity of goods/services that the tokens can buy is unspecified (which is almost always the case in ICOs), the price of the token is subject to normal cryptocurrency market forces, and there is no way to do fundamental analysis on what a fair market price should be: you can’t price “cloud storage” without quantifying how much, for how long.

But there comes a point when the project has to make a decision:  Do they set prices in USD or in tokens?  Should 1 GB of cloud storage for 1 year cost $10, payable in tokens at market rate, or should 1 GB of cloud storage for 1 year cost 1 token?

Let’s explore the options:

1) Priced in USD, paid in tokens

If this is the case, then at first you’d think that the price of tokens should be irrelevant.  Customers hold USD, then when they want to use the service, they buy the tokens then quickly redeem them.  This is the reasoning that “bitcoin for remittance” companies use when they say that the price of bitcoin is irrelevant to their business.

If this is the case, are tokens a good investment? (Note: Tokens are not securities!)  Perhaps.  As tokens are redeemed, there are fewer and fewer of them in circulation, so long as the project does not re-issue them to get back the USD to pay their staff.  Fewer tokens may mean a higher price due to scarcity.  So a project in good financial health, which isn’t reliant on reselling its tokens to pay its costs, can create scarcity in the tokens, meaning that token holders see a rising price.  Perhaps.  But a project in poor financial health will need to keep reselling its tokens to cover its costs.  So actually, we see that the price of the tokens is related to the financial health of the company (but remember, tokens are not securities!).

2) Priced in tokens, paid in tokens

This is wonderful – if the company sets the price of the goods or services in tokens, the company has pretty good control over the market price of their tokens (kind of like an airline controls the value of air miles).  How?  Imagine there is an equivalent service operated by a competitor, priced at $10.  The project can use this to anchor the price of their tokens.  They set their price to 1 token, which effectively pegs the price of a token to $10.  A rational customer will pay up to $10 for a token (usually slightly less because the usefulness of a token is less than cash).  If tokens are on the market for more than $10, then it makes sense to sell your token for say $11, and spend $10 with the competitor, and pocket the $1.  If the project wants the value of the token to go up, it increases the amount of service you get for 1 token, then people will pay more than $10 for this token.  Want to double the price?  Sell twice as much service for 1 token.
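
The anchoring arithmetic is easy to sketch; the $10 competitor price and the figures below are hypothetical:

```ts
// A sketch of the pegging logic described above. All figures are
// hypothetical: a competitor sells the same service for $10, and the
// project prices the service at 1 token.
const competitorPriceUsd = 10; // equivalent service, priced in USD
const tokensPerService = 1;    // the project's price, set in tokens

// A rational customer will pay at most this much for a token:
const impliedTokenPriceUsd = competitorPriceUsd / tokensPerService; // $10

// If the market price drifts above the peg, arbitrage pulls it back:
// sell the token, buy from the competitor, pocket the difference.
const marketPriceUsd = 11;
console.log(marketPriceUsd - impliedTokenPriceUsd); // $1 profit per token
```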

If this is the case, are tokens a good investment? (Note: tokens are not securities)  Probably.  The founders of the project, provided they haven’t done a quick exit scam, also hold tokens and are financially incentivised to keep the price of tokens high.

Projects therefore have more control over their token price if they price their services in tokens, so I would expect that, as projects come to maturity, we will see them price in tokens, at or above market price – provided they haven’t all been shut down for violating securities regulations first.

Conclusion

As the market evolves and matures, we will need new metrics and ratios to compare across ICO projects, just as we have price/earnings, working capital and debt/equity ratios etc in the traditional equities world.  The two I have presented in this post are applicable to ICOs and tokens and can be used to compare across projects, and I expect many more to emerge.

From Mobile Apps to Omni-Channel


A technological exploration to master today’s complexity in digital content distribution. (1 of 5)

This is the first blog in a series of 5 that explore the new world of digital delivery strategy.

Introduction

The landscape for content distribution has changed dramatically over the past few years. Mobile, Chat, Voice, TV, Desktop are all channels where customers are nowadays and, consequently, where businesses should have a meaningful presence.

The average smartphone user in the US downloads zero new apps a month and spends most of their time in less than just five apps [source].

While the App market is far from dead (197 billion app downloads in 2017 alone), what are the chances that your App will actually get noticed by your target audience?

65% of smartphone owners are using phones with a storage capacity of 16GB or less, and 52% reach the level of insufficient storage every week [source].

How useful does an App have to be to deserve space on a user’s device?

Predictions from eMarketer state that 40% of Millennials will have a smart assistant in their home by 2019. [source]

By 2020, 50% of all searches will be done by voice (more conservative estimates from Gartner predict this will be 30%). [source]

Is your business ready or at least planning to have a meaningful presence on Chat and Voice channels?

And what about other alternative channels with their own stores, rules and ecosystems?

WeChat has 1 billion MAUs and captures 83% of all smartphone users in China. [source]

Let’s quickly look at the use case of the travel industry to illustrate just how multi-channel today’s consumers are. Travellers at the airport are usually quite interested in knowing the status of their flight and information about their departure gate. Some are frequent travellers, so they are likely to have the Airline App installed; others fly once a year, and a mobile web experience is more than fine for them, but they would still like to receive push notifications and not get a ‘downasaur’ if they hit a cellular network blind spot. Finally, there are others who, for example, have previously engaged with the Airline via Facebook Messenger to ask about their baggage allowance and would now love to get updates on that channel.
Same information, different users, different channels.

Businesses should be where their (potential) customers are. 

Can today’s businesses afford to address all those channels directly without a form of cross-platform development strategy? Yes, of course, but doing so would be inefficient, time-consuming and, most importantly, costly.

Just a few years ago, cross-platform development mostly referred to tools and programming languages that could deliver exclusively to iOS and Android platforms from (almost) a single source. Today this idea of cross platform seems out of date.

The Leverage

As with any cross-platform approach, there is a need to identify a leverage point. For some products the leverage is a low-level programming language capable of running on multiple platforms, or a clever transpiler capable of producing the right binary for each target platform.

In this technological exploration, the Web Channel will be used as the leverage channel, with the belief that the technologies behind it are a great candidate for the omni-channel vision. Here are a few reasons:

  1. Browsers are installed and available on virtually any Desktop and Mobile device out there (and to some extent also on TVs);
  2. The rise of Progressive Web Apps (PWA) makes the case for the Mobile Web much more compelling compared to the past;
  3. React Native and NativeScript for Mobile Apps and Electron for Desktop Apps are great examples of how Web technologies can produce native results without compromising on User Experience;
  4. A well crafted Web Channel can be used to extend other channels. Example: Facebook Messenger can display a mobile website to perform a feature-rich flow such as making a reservation; WeChat mini-programs are developed with Web technologies;
  5. Any new channel on the block will plausibly support programming with Web technologies, as doing otherwise will most likely result in content starvation and channel death;
  6. Web technologies and tools are pervasive, powerful and widely known and adopted.

The Stack

Among many great technologies, the following have been chosen for further exploration in the direction of the omni-channel vision.

React

The ‘learn once, write everywhere’ philosophy seems very aligned to the omni-channel vision. Plus, the component-oriented design it proposes is considered an advantage compared to other MVC solutions. The wide and active community also plays a strategic role. React also features as the most loved framework in StackOverflow’s 2017 survey, with 70% of preferences.

React Native

As the obvious companion to React, it allows the creation of Apps with a native look and feel for iOS and Android, but with a Web (JavaScript) heart. In contrast to other solutions such as NativeScript, React Native offers a simpler approach for incremental adoption into existing native iOS and Android Apps. It shares the same extremely active and vibrant community as React.
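
As a small illustrative sketch (the FlightStatus component and its props are hypothetical, echoing the airline example earlier), a React Native component looks just like a React one, but renders native views:

```tsx
import React from "react";
import { Text, View } from "react-native";

// A small sketch of 'learn once, write everywhere': the same component
// model as React on the web, rendered with native views. FlightStatus
// and its props are hypothetical.
type Props = { flight: string; gate: string; status: string };

export function FlightStatus({ flight, gate, status }: Props) {
  return (
    <View>
      <Text>{flight}</Text>
      <Text>Gate {gate}: {status}</Text>
    </View>
  );
}
```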

GraphQL

Being capable of using graphs to represent and expose domain content, and letting clients query and traverse the data as they need, is considered a game-changing advantage over fixed APIs and Models in the omni-channel context. This is particularly true for non-GUI channels such as Voice and Chat, and it will be explored in future blog posts.
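
As an illustrative sketch (the flight schema and endpoint URL are hypothetical), different channels can send different queries to the same GraphQL API, each asking only for the fields it needs:

```ts
// An illustrative sketch: the flight schema and endpoint URL are
// hypothetical. A Voice channel might ask only for `status`, while a
// mobile app also asks for the gate; the server returns exactly the
// fields each client requested, nothing more.
const query = `
  query {
    flight(number: "BA117") {
      status
      departureGate
    }
  }
`;

async function fetchFlightStatus() {
  const response = await fetch("https://api.example.com/graphql", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ query }),
  });
  return response.json();
}
```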

Service Workers

The enabling technology that allows a script to run in the browser independently from the webpage, opening the doors to Offline, Background Sync and Push on the Mobile Web. Service Workers are at the heart of what usually goes under the name of Progressive Web Apps.
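
Registering one is a single call; this minimal sketch assumes a hypothetical /sw.js script that would implement the offline caching, background sync or push logic:

```ts
// A minimal sketch of enabling a service worker. '/sw.js' is a
// hypothetical script that would implement the offline caching,
// background sync or push handling.
if ("serviceWorker" in navigator) {
  navigator.serviceWorker
    .register("/sw.js")
    .then((reg) => console.log("Service worker registered for", reg.scope))
    .catch((err) => console.error("Registration failed:", err));
}
```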

Conclusion

In order to stay relevant, the digital delivery strategy of businesses over the next 5 years should be very different from what it has been over the past 5.

This series of articles goes through a technological exploration which has the Web Channel and Web technologies at its core.

Next posts will cover the technical setup of the stack as follows:

Blog 2. Proper React Native and TypeScript setup
Blog 3. React, Service workers and React Native cohabit
Blog 4. The introduction of GraphQL
Blog 5. Voice & Chat channels

Stay tuned! … and share your thoughts in the comments.


References

  • Maximize mobile sales and ad spend with progressive web apps
  • 83% Smartphone Users in India Delete Apps To Free Up Storage
  • Make Some Noise: The Future of Travel Sounds Like Voice
  • WeChat has hit 1 billion monthly active users

A gentle introduction to self-sovereign identity


In May 2017, the Indian Centre for Internet and Society think tank published a report detailing the ways in which India’s national identity database (Aadhaar) is leaking potentially compromising personal information. The information relates to over 130 million Indian nationals.  The leaks create a great opportunity for financial fraud, and cause irreversible harm to the privacy of the individuals concerned.

It is clear that the central identity repository model has deficiencies.  This post describes a new paradigm for managing our digital identities: self-sovereign identity.

Self-sovereign identity is the concept that people and businesses can store their own identity data on their own devices, and provide it efficiently to those who need to validate it, without relying on a central repository of identity data.  It’s a digital way of doing what we do today with bits of paper.  This has benefits compared with both current manual processes and central repositories such as India’s Aadhaar.

Efficient identification processes promote financial inclusion.  By lowering the cost to banks of opening accounts for small businesses, financing becomes profitable for the banks and therefore accessible for the small businesses.

What are important concepts in identity?

There are three parts to identity: claims, proofs, and attestations.

Claims

An identity claim is an assertion made by the person or business:

“My name is Antony and my date of birth is 1 Jan 1901”

[Image: claim]

Proofs

A proof is some form of document that provides evidence for the claim. Proofs come in all sorts of formats. Usually for individuals it’s photocopies of passports, birth certificates, and utility bills. For companies it’s a bundle of incorporation and ownership structure documents.

[Image: proofs]

Attestations

An attestation is when a third party validates that according to their records, the claims are true. For example a University may attest to the fact that someone studied there and earned a degree. An attestation from the right authority is more robust than a proof, which may be forged. However, attestations are a burden on the authority as the information can be sensitive. This means that the information needs to be maintained so that only specific people can access it.

[Image: attestation]

What’s the identity problem?

Banks need to understand their new customers and business clients to check eligibility, and to prove to regulators that they (the banks) are not banking baddies. They also need to keep the information they have on their clients up to date.

The problems are:

  • Proofs are usually unstructured data, taking the form of images and photocopies. This means that someone in the bank has to manually read and scan the documents to extract the relevant data to type into a system for storage and processing.
  • When the data changes in real life (such as a change of address, or a change in a company’s ownership structure), the customer is obliged to tell the various financial service providers they have relationships with.
  • Some forms of proof (eg photocopies of original documents) can be easily faked, meaning extra steps to prove authenticity need to be taken, such as having photocopies notarised, leading to extra friction and expense.

This results in expensive, time-consuming and troublesome processes that annoy everyone.

[Image: kyc_current_process]

What are the technical improvements?

Whatever style of overall solution is used, the three problems outlined above need to be solved technically. A combination of standards and digital signatures works well.

The technical solution for improving on unstructured data is for data to be stored and transported in a machine-readable structured format, ie text in boxes with standard labels.

The technical solution for managing changes in data is a common method used to update all the necessary entities. This means using APIs to connect, authenticate yourself (proving it’s your account), and update details.

The technical solution for proving authenticity of identity proofs is digitally signed attestations, possibly time-bound. A digitally signed proof is as good as an attestation because the digital signature cannot be forged. Digital signatures have two properties that make them inherently better than paper documents:

  1. Digital signatures become invalid if there are any changes to the signed document. In other words, they guarantee the integrity of the document.
  2. Digital signatures cannot be ‘lifted’ and copied from one document to another.
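
Both properties are easy to demonstrate; here is a minimal sketch using Node’s built-in crypto module, with an Ed25519 keypair standing in for the attesting authority’s signing key and a hypothetical attestation format:

```ts
import { generateKeyPairSync, sign, verify } from "crypto";

// A minimal sketch, assuming an Ed25519 keypair stands in for the
// attesting authority's signing key. The attestation payload is a
// hypothetical format.
const { publicKey, privateKey } = generateKeyPairSync("ed25519");

const attestation = Buffer.from('{"claim":"over 18","validUntil":"2019-12-31"}');
const signature = sign(null, attestation, privateKey);

// Property 1: any change to the signed document invalidates the signature.
// Property 2 follows for the same reason: the signature is bound to this
// exact document, so it cannot be lifted onto another one.
const tampered = Buffer.from('{"claim":"over 21","validUntil":"2019-12-31"}');
console.log(verify(null, attestation, publicKey, signature)); // true
console.log(verify(null, tampered, publicKey, signature));    // false
```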

What’s the centralised solution?

A common solution for identity management is a central repository. A third party owns and controls a repository of many people’s identities. The customer enters their facts into the system, and uploads supporting evidence. Whoever needs this can access this data (with permission from the client of course), and can systematically suck this data into their own systems. If details change, the customer updates it once, and can push the change to the connected banks.

[Image: centralised_identity_solutions]

This sounds wonderful, and it certainly offers some benefits. But there are problems with this model.

What are the problems with centralised solutions?

1. Toxic data

Being in charge of this identity repository is a double-edged sword. On the one hand, an operator can make money, by charging for a convenient utility. On the other hand, this data is a liability to the operator: A central-identity system is a goldmine for hackers, and a cybersecurity headache for the operator.

If a hacker can get into the systems and copy the data, they can sell the digital identities and their documentary evidence to other baddies. These baddies can then steal the identities and commit fraud and crimes while using the names of the innocent. This can and does wreck the lives of the innocent, and creates a significant liability for the operator.

2. Jurisdictional politics

Regulators want personal data to be stored within the geographical boundaries of the jurisdiction under their control. So it can be difficult to create international identity repositories, because there is always an argument about which country should warehouse the data, who can access it, and from where.

3. Monopolistic tendencies

This isn’t a problem for the central repository operators, but it’s a problem for the users. If a utility operator gains enough traction, network effects lead to more users. The utility operator can become a quasi-monopoly. Operators of monopolistic functions tend to become resistant to change; they overcharge and don’t innovate due to a lack of competitive pressure. This is wonderful for the operator, but is at the expense of the users.

What’s the decentralised answer?

Is it a blockchain?

A blockchain is a type of distributed ledger where all data is replicated to all participants in real time. Should identity data be stored on a blockchain that is managed by a number of participating entities (say, the bigger banks)? No:

  1. Replicating all identity data to all parties breaks all kinds of regulations about keeping personal data onshore within a jurisdiction; only storing personal data that is relevant to the business; and only storing data that the customer has consented to.
  2. The cybersecurity risk is increased. If one central data store is difficult enough to secure, now you’re replicating this data to multiple parties, each with their own cybersecurity practices and gaps. This makes it easier for an attacker to steal the data.

What if the identity data were encrypted?

  1. Encrypted personal data can still fall foul of personal data regulations.
  2. Why would the parties (eg banks) store and manage a bunch of identity data that they can’t see or use? What’s the upside?

So what’s the answer?

The emerging answer is “self-sovereign identity”. This digital concept is very similar to the way we keep our non-digital identities today.

Today, we keep passports, birth certificates, utility bills at home under our own control, maybe in an “important drawer”, and we share them when needed. We don’t store these bits of paper with a third party. Self-sovereign identity is the digital equivalent of what we do with bits of paper now.

How would self-sovereign identity work for the user?

You would have an app on a smartphone or computer, some sort of “identity wallet” where identity data would be stored on the hard drive of your device, maybe backed up on another device or on a personal backup solution, but crucially not stored in a central repository.

Your identity wallet would start off empty, with only a self-generated identification number derived from a public key, and a corresponding private key (like a password, used to create digital signatures). This keypair differs from a username and password because it is created by the user, by “rolling dice and doing some maths”, rather than by requesting a username/password combination from a third party.

At this stage, no one else in the world knows about this identification number. No one issued it to you. You created it yourself. It is self-sovereign. The laws of big numbers and randomness ensure that no one else will generate the same identification number as you.
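
A minimal sketch of that first step, using Node’s built-in crypto module (deriving the identification number as a hash of the public key is one possible convention, assumed here for illustration, not a standard):

```ts
import { generateKeyPairSync, createHash } from "crypto";

// A minimal sketch of "rolling dice and doing some maths": the keypair
// is generated locally, with no third party involved. Deriving the
// identification number as a SHA-256 hash of the public key is one
// possible convention, assumed here for illustration.
const { publicKey, privateKey } = generateKeyPairSync("ed25519");

const identificationNumber = createHash("sha256")
  .update(publicKey.export({ type: "spki", format: "der" }))
  .digest("hex");

console.log(identificationNumber); // self-generated: nobody issued it
```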

You then use this identification number, along with your identity claims, and get attestations from relevant authorities.

You can then use these attested claims as your identity information.

[Image: self_sovereign_identity_public_key_model]

Claims would be stored by typing text into standardised text fields, and saving photos or scans of documents.

Proofs would be stored by saving scans or photos of proof documents. However this would be for backward compatibility, because digitally signed attestations remove the need for proofs as we know them today.

Attestations – and here’s the neat bit – would be stored in this wallet too. These would be machine readable, digitally signed pieces of information, valid within certain time windows. The relevant authority would need to sign these with digital signatures – for example, passport agencies, hospitals, driving licence authorities, police, etc.

Need to know, but not more: Authorities could provide “bundles” of attested claims, such as “over 18”, “over 21”, “accredited investor”, “can drive cars” etc, for the user to use as they see fit. The identity owner would be able to choose which piece of information to pass to any requester. For example, if you need to prove you are over 18, you don’t need to share your date of birth, you just need a statement saying you are over 18, signed by the relevant authority.

[Image: attestation_over18]

Sharing this kind of data is safer both for the identity provider and the recipient. The provider doesn’t need to overshare, and the recipient doesn’t need to store unnecessarily sensitive data – for example, if the recipient gets hacked, they are only storing “Over 18” flags, not dates of birth.

Even banks themselves could attest to the person having an account with them. We would first need to understand what liability they take on when they create these attestations. I would assume it would be no more than the liability they currently take on when they send you a bank statement, which you use as a proof of address elsewhere.

Data sharing

Data would be stored on the person’s device (as pieces of paper are currently stored at home today), and then, when requested, the person would approve a third party to collect specific data by tapping a notification on their device. We already have something similar to this – if you have ever used a service by “linking” your Facebook or LinkedIn account, this is similar – but instead of going to Facebook’s servers to collect your personal data, the service requests it from your phone, and you have granular control over what data is shared.

[Image: self_sovereign_identity_platform]

Conclusion – and distributed ledgers

Who would orchestrate this? Well perhaps this is where a distributed ledger may come in. The software, the network, and the workflow dance would need to be built, run, and maintained. Digital signatures require public and private keys that need to be managed, and certificates need to be issued, revoked, refreshed. Identity data isn’t static, it needs to evolve, according to some business logic.

A non-blockchain distributed ledger would be an ideal platform for this. R3’s Corda (Note: I work at R3) already has many of the necessary elements – coordinated workflow, digital signatures, rules about data evolution, and a consortium of over 80 financial institutions experimenting with this exact self-sovereign identity concept.

A Gentle Introduction to Blockchain Technology


This article is a gentle introduction to blockchain technology and assumes minimal technical knowledge.  It attempts to describe what it is rather than why should I care, which is something for a future post.



PART 1 – EXECUTIVE SUMMARY


People use the term ‘blockchain technology’ to mean different things, and it can be confusing.  Sometimes they are talking about The Bitcoin Blockchain, sometimes it’s The Ethereum Blockchain, sometimes it’s other virtual currencies or digital tokens, sometimes it’s smart contracts.  Most of the time though, they are talking about distributed ledgers, i.e. a list of transactions that is replicated across a number of computers, rather than being stored on a central server.

The common themes seem to be a data store which:

  • usually contains financial transactions
  • is replicated across a number of systems in almost real-time
  • usually exists over a peer-to-peer network
  • uses cryptography and digital signatures to prove identity, authenticity and enforce read/write access rights
  • can be written by certain participants
  • can be read by certain participants, maybe a wider audience, and
  • has mechanisms to make it hard to change historical records, or at least make it easy to detect when someone is trying to do so

I see “blockchain technology” as a collection of technologies, a bit like a bag of Lego.  From the bag, you can take out different bricks and put them together in different ways to create different results.

 

[Image: blockchain_bag2]

I see blockchain technology as a bag of Lego or bricks

What’s the difference between a blockchain and a normal database? Very loosely, a blockchain system is a package which contains a normal database plus some software that adds new rows, validates that new rows conform to pre-agreed rules, and listens and broadcasts new rows to its peers across a network, ensuring that all peers have the same data in their databases.


PART 2 – INTRODUCING BITCOIN’S BLOCKCHAIN


The Bitcoin Blockchain ecosystem

As a primer on bitcoin, it may help to review A gentle introduction to bitcoin.

The Bitcoin Blockchain ecosystem is actually quite a complex system due to its dual aims: that anyone should be able to write to The Bitcoin Blockchain; and that there shouldn’t be any centralised power or control.  Relax these, and you don’t need many of the convoluted mechanisms of Bitcoin.

That said, let’s start with The Bitcoin Blockchain ecosystem, and then try to tease out the blockchain bit from the bitcoin bit.

Replicated databases.  The Bitcoin Blockchain ecosystem acts like a network of replicated databases, each containing the same list of past bitcoin transactions.  Important members of the network are called validators or nodes which pass around transaction data (payments) and block data (additions to the ledger).  Each validator independently checks the payment and block data being passed around.  There are rules in place to make the network operate as intended.

Bitcoin’s complexity comes from its ideology. The aim of bitcoin was to be decentralised, i.e. not have a point of control, and to be relatively anonymous.  This has influenced how bitcoin has developed.  Not all blockchain ecosystems need to have the same mechanisms, especially if participants can be identified and trusted to behave.

Here’s how bitcoin approaches some of the decisions:

[Figure: how bitcoin approaches each of these design decisions]


Public vs private blockchains

There is a big difference in what technologies you need, depending on whether you allow anyone to write to your blockchain, or restrict writing to known, vetted participants.  Bitcoin allows anyone to write to its ledger.

Public blockchains.  Ledgers can be ‘public’ in two senses:

  1. Anyone, without permission granted by another authority, can write data
  2. Anyone, without permission granted by another authority, can read data

Usually, when people talk about public blockchains, they mean anyone-can-write.

Because bitcoin is designed as an ‘anyone-can-write’ blockchain, where participants aren’t vetted and can add to the ledger without needing approval, it needs ways of arbitrating discrepancies (there is no ‘boss’ to decide), and defence mechanisms against attacks (anyone can misbehave with relative impunity, if there is a financial incentive to do so).  These add cost and complexity to running this blockchain.

Private blockchains.  Conversely, a ‘private’ blockchain network is where the participants are known and trusted: for example, an industry group, or a group of companies owned by an umbrella company.  Many of the mechanisms aren’t needed – or rather they are replaced with legal contracts: “You’ll behave because you’ve signed this piece of paper.”  This changes the technical decisions as to which bricks are used to build the solution.

See the pros and cons of internal blockchains for more on this topic.


PART 3 – MORE DEPTH, PLEASE


Warning: this section isn’t so gentle, as it goes into detail into each of the elements above.  I recommend getting a cup of tea.

DATA STORAGE: What is a blockchain?

A blockchain is just a file.  A blockchain by itself is just a data structure.  That is, how data is logically put together and stored. Other data structures are databases (rows, columns, tables), text files, comma separated values (csv), images, lists, and so on.  You can think of a blockchain competing most closely with a database.

Blocks in a chain = pages in a book
By way of analogy, a book is a chain of pages. Each page in a book contains:

  • the text: for example the story
  • information about itself: at the top of the page there is usually the title of the book and sometimes the chapter number or title; at the bottom is usually the page number which tells you where you are in the book. This ‘data about data’ is called meta-data.

Similarly in a blockchain block, each block has:

  • the contents of the block: for example, in bitcoin it is the bitcoin transactions and the miner incentive reward (25 BTC when this was written; the reward halves roughly every four years).
  • a ‘header’ which contains the data about the block.  In bitcoin, the header includes some technical information about the block, a reference to the previous block, and a fingerprint (hash) of the data contained in this block, among other things. This hash is important for ordering.


Blocks in a chain refer to previous blocks, like page numbers in a book.

See this infographic for a visualisation of the data in Bitcoin’s blockchain.

Block ordering in a blockchain

Page by page.  With books, predictable page numbers make it easy to know the order of the pages.  If you ripped out all the pages and shuffled them, it would be easy to put them back into the correct order where the story makes sense.

Block by block.  With blockchains, each block references the previous block, not by ‘block number’, but by the block’s fingerprint, which is cleverer than a page number because the fingerprint itself is determined by the contents of the block.


The reference to previous blocks creates a chain of blocks – a blockchain!

Internal consistency.  By using a fingerprint instead of a timestamp or a numerical sequence, you also get a nice way of validating the data.  In any blockchain, you can generate the block fingerprints yourself by using some algorithms.  If the fingerprints are consistent with the data, and the fingerprints join up in a chain, then you can be sure that the blockchain is internally consistent.  If anyone wants to meddle with any of the data, they have to regenerate all the fingerprints from that point forwards and the blockchain will look different.


A peek inside a blockchain block: the fingerprints are unique to the block’s contents.

This means that if it is difficult or slow to create this fingerprint (see the “making it hard for baddies to be bad” section), then it can also be difficult or slow to re-write a blockchain.

The logic in bitcoin is:

  • Make it hard to generate a fingerprint that satisfies the rules of The Bitcoin Blockchain
  • Therefore, if someone wants to re-write parts of The Bitcoin Blockchain, it will take them a long time, and they have to catch up with and overtake the rest of the honest network

This is why people say The Bitcoin Blockchain is immutable (can not be changed)*.

*However, blockchains in general are not immutable. Having said that, the peer-to-peer data sharing mechanism plus the fingerprinting makes it obvious when a participant tries to alter some data, if you keep track of the fingerprints. Here’s a piece on immutability in blockchains.
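
To make the fingerprinting concrete, here is a toy Python sketch (standard library only; real blockchains hash a structured binary header rather than JSON) in which each block’s hash covers its contents plus the previous block’s hash, so meddling with history breaks every fingerprint from that point forwards:

import hashlib, json

def fingerprint(block):
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def make_block(contents, prev_hash):
    return {"contents": contents, "prev": prev_hash}

chain = []
prev = "0" * 64                       # the genesis block has no real predecessor
for data in ["pay alice 5", "pay bob 2", "pay carol 9"]:
    block = make_block(data, prev)
    chain.append(block)
    prev = fingerprint(block)

def is_consistent(chain):
    prev = "0" * 64
    for block in chain:
        if block["prev"] != prev:     # fingerprints must join up in a chain
            return False
        prev = fingerprint(block)
    return True

assert is_consistent(chain)
chain[0]["contents"] = "pay mallory 5000"   # meddle with history...
assert not is_consistent(chain)             # ...and the chain no longer joins up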


DATA DISTRIBUTION: How is new data communicated?

Peer to peer is one way of distributing data in a network.  Another way is client-server.  You may have heard of peer-to-peer file sharing on the BitTorrent network where files are shared between users, without a central server controlling the data.  This is why BitTorrent has remained resilient as a network.

Client-server
In the office environment, often data is held on servers, and wherever you log in, you can access the data.  The server holds 100% of the data, and the clients trust that the data is definitive.  Most of the internet is client-server where the website is held on the server, and you are the client when you access it.  This is very efficient, and a traditional model in computing.

Peer-to-peer
In peer-to-peer models, it’s more like a gossip network where each peer has 100% of the data (or as close to it as possible), and updates are shared around.  Peer-to-peer is in some ways less efficient than client-server, as data is replicated many times – once per machine – and each change or addition to the data creates a lot of noisy gossip.  However each peer is more independent, and can continue operating to some extent if it loses connectivity to the rest of the network.  Also peer-to-peer networks are more robust, as there is no central server that can be controlled, so closing down peer-to-peer networks is harder.

The problems with peer-to-peer
With peer-to-peer models, even if all peers are ‘trusted’, there can be a problem of agreement or consensus – if each peer is updating at different speeds and has a slightly different state, how do you determine the “real” or “true” state of the data?

Worse, in an ‘untrusted’ peer-to-peer network where you can’t necessarily trust any of the peers, how do you ensure that the system can’t easily be corrupted by bad peers?


CONSENSUS: How do you resolve conflicts?

A common conflict is when multiple miners create blocks at roughly the same time.  Because blocks take time to be shared across the network, which one should count as the legit block?

Example. Let’s say all the nodes on the network have synchronised their blockchains, and they are all on block number 80.
If three miners across the world create ‘Block 81’ at roughly the same time, which ‘Block 81’ should be considered valid?  Remember that each ‘Block 81’ will look slightly different: they will certainly contain a different payment address for the 25 BTC block reward, and they may contain a different set of transactions.  Let’s call them 81a, 81b, 81c.


Which block should count as the legit one?

How do you resolve this?

Longest chain rule.  In bitcoin, the conflict is resolved by a rule called the “longest chain rule”.

In the example above, you would assume that the first ‘Block 81’ you see is valid. Let’s say you see 81a first. You can start building the next block on that, trying to create 82a:


Treat the first block you see as legitimate.

However in a few seconds you may see 81b. If you see this, you keep an eye on it. If later you see 82b, the “longest chain rule” says that you should regard the longer ‘b’ chain as the valid one (…80, 81b, 82b) and ignore the shorter chain (…80, 81a). So you stop trying to make 82a and instead start trying to make 83b:


Longest chain rule: If you see multiple blocks, treat the longest chain as legitimate.

The “longest chain rule” is the rule that the bitcoin blockchain ecosystem uses to resolve these conflicts which are common in distributed networks.

However, with a more centralised or trusted blockchain network, you can make decisions by using a trusted, or senior validator to arbitrate in these cases.
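
As a minimal sketch of the rule itself (note that bitcoin actually compares the cumulative proof-of-work embodied in each chain, which usually, but not always, coincides with the block count):

def pick_active_chain(chains):
    # chains: a list of block-lists that share a common history,
    # e.g. [[...80, 81a], [...80, 81b, 82b]]
    return max(chains, key=len)   # treat the longest chain as legitimate

chain_a = ["...", "block80", "block81a"]
chain_b = ["...", "block80", "block81b", "block82b"]
assert pick_active_chain([chain_a, chain_b]) is chain_b  # so mine 83b next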

See a gentle introduction to bitcoin mining for more detail.


UPGRADES: How do you change the rules?

As a network as a whole, you must agree up front what kind of data is valid to be passed around, and what is not.  With bitcoin, there are technical rules for transactions (Have you filled in all the required data fields?  Is it in the right format?  etc), and there are business rules (Are you trying to spend more bitcoins than you have?  Are you trying to spend the same bitcoins twice?).

Rules change.  As these rules evolve over time, how will the network participants agree on the changes?  Will there be a situation where half the network thinks one transaction is valid, and the other half doesn’t think so because of differences in logic?

In a private, controlled network where someone has control over upgrades, this is an easy problem to solve: “Everyone must upgrade to the new logic by 31 July”.

However in a public, uncontrolled network, it’s a more challenging problem.

With bitcoin, there are two parts to upgrades.

  1. Suggest the change (BIPs). First, there is the proposal stage where improvements are proposed, discussed, and written up. A proposal is referred to as a “BIP” – a “Bitcoin Improvement Proposal”.
    If it gets written into the Bitcoin core software on Github, it can then form part of an upgrade – the next version of “Bitcoin core” which is the most common “reference implementation” of the protocol.
  2. Adopt the change (miners). The upgrade can be downloaded by nodes and block makers (miners) and run, but only if they want to (you could imagine a change which reduces the mining reward from 25 BTC per block to 0 BTC. We’ll see just how many miners choose to run that!).

If the majority of the network (in bitcoin, the majority is determined by computational power) choose to run a new version of the software, then new-style blocks will be created faster than the minority, and the minority will be forced to switch or become irrelevant in a “blockchain fork”.  So miners with lots of computational power have a good deal of “say” as to what gets implemented.


WRITE ACCESS: How do you control who can write data?

In the bitcoin network, theoretically anyone can download or write some software and start validating transactions and creating blocks.  Simply go to https://bitcoin.org/en/download and run the “Bitcoin core” software.

Your computer will act as a full node which means:

  • Connecting to the bitcoin network
  • Downloading the blockchain
  • Storing the blockchain
  • Listening for transactions
  • Validating transactions
  • Passing on valid transactions
  • Listening for blocks
  • Validating blocks
  • Passing on valid blocks
  • Creating blocks
  • ‘Mining’ the blocks

The source code to this “Bitcoin core” software is published on Github: https://github.com/bitcoin/bitcoin.  If you are so inclined, you can check the code and compile and run it yourself instead of downloading the prepackaged software on bitcoin.org.  Or you can even write your own code, so long as it conforms to protocol.

Ethereum works in a similar way in this respect – see a gentle introduction to Ethereum.

Permissionless
Note that you don’t need to sign up, log in, or apply to join the network.  You can just go ahead and join in.  Compare this with the SWIFT network, where you can’t just download some software and start listening to SWIFT messages.  In this way, some call bitcoin ‘permissionless’ vs SWIFT which would be ‘permissioned’.

Permissionless is not the only way
You may want to use blockchain technology in a trusted, private network.  You may not want to publish all the rules of what a valid transaction or block looks like.  You may want to control how the network rules are changed.  It is easier to control a trusted private network than an untrusted, public free-for-all like bitcoin.


DEFENCE: How do you make it hard for baddies?

A problem with permissionless, or open, networks is that they can be attacked by anyone. So there needs to be a way of making the network-as-a-whole trustworthy, even if specific actors aren’t.

What can and can’t miscreants do?

A dishonest miner can:

  1. Refuse to relay valid transactions to other nodes
  2. Attempt to create blocks that include or exclude specific transactions of his choosing
  3. Attempt to create a ‘longer chain’ of blocks that make previously accepted blocks become ‘orphans’ and not part of the main chain

He can’t:

  1. Create bitcoins out of thin air*
  2. Steal bitcoins from your account
  3. Make payments on your behalf or pretend to be you

That’s a relief.

*Well, he can, but only his version of the ledger will contain this transaction. Other nodes will reject it, which is why it is important to confirm a transaction across a number of nodes.

With transactions, the effect a dishonest miner can have is very limited.  If the rest of the network is honest, they will reject any invalid transactions coming from him, and they will hear about valid transactions from other honest nodes, even if he is refusing to pass them on.

With blocks, if the miscreant has sufficient block creation power (and this is what it all hinges on), he can delay your transaction by refusing to include it in his blocks.  However, your transaction will still be known by other honest nodes as an ‘unconfirmed transaction’, and they will include it in their blocks.

Worse, though, is if the miscreant can create a longer chain of blocks than the rest of the network and invoke the “longest chain rule” to kick out the shorter chain.  This lets him unwind a transaction.

Here’s how you can do it:

  1. Create two payments with the same bitcoins: one to an online retailer, the other to yourself (another address you control)
  2. Only broadcast the payment that pays the retailer
  3. When the payment gets added in an honest block, the retailer sends you goods
  4. Secretly create a longer chain of blocks which excludes the payment to the retailer, and includes the payment to yourself
  5. Publish the longer chain. If the other nodes are playing by the “longest chain rule”, then they will ignore the honest block with the retailer payment, and continue to build on your longer chain. The honest block is said to be ‘orphaned’ and, to all intents and purposes, does not exist.
  6. The original payment to the retailer will be deemed invalid by the honest nodes because those bitcoins have already been spent (in your longer chain)


The “double spend” attack.

This is called a “double spend” because the same bitcoins were spent twice, but it was the second spend that became part of the eventual blockchain, while the first one was eventually rejected.

How do you make it hard for dishonest miners to create blocks?

Remember, this is only a problem for ledgers where block-makers aren’t trusted.

Essentially you want to make it hard, or expensive for baddies to add blocks.  In bitcoin, this is done by making it computationally expensive to add blocks.  Computationally expensive means “takes a lot of computer processing power” and translates to financially expensive (as computers need to be bought then run and maintained).

The computation itself is a guessing game where block-makers need to guess a number which, when crunched with the rest of the block’s contents, results in a hash / fingerprint that is smaller than a certain number.  That number is related to the ‘difficulty’ of mining, which is related to the total network processing power.  The more computers join in to process blocks, the harder it gets, in a self-regulating cycle.


Every 2,016 blocks (roughly every 2 weeks), the bitcoin network adjusts the difficulty of the guessing game based on the speed that the blocks have been created.

This guessing game is called “Proof of work”. By publishing the block with the fingerprint that is smaller than the target number, you are proving that you did enough guess work to satisfy the network at that point in time.
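
Here is a toy version of the guessing game in Python. The target below is deliberately generous so the loop finishes almost instantly; bitcoin’s real target is vastly smaller, which is why mining consumes serious hardware and electricity:

import hashlib

def mine(block_data, target):
    # Vary a number (the "nonce") until the hash of the block data plus
    # the nonce falls below the target. Smaller target = more guesses.
    nonce = 0
    while True:
        digest = hashlib.sha256(f"{block_data}{nonce}".encode()).hexdigest()
        if int(digest, 16) < target:
            return nonce, digest      # publishing these proves the work
        nonce += 1

easy_target = 2 ** 244                # ~1-in-4096 odds per guess: a quick demo
nonce, digest = mine("pay alice 5; prev=abc123", easy_target)
print(nonce, digest)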


INCENTIVES: How do you pay validators?

Transaction and block validation is cheap and fast, unless you choose to make it slow and expensive (a la bitcoin).

If you control the validators in your own network, or they are trusted, then

  • you don’t need to make it expensive to add blocks, and
  • therefore you can reduce the need to incentivise them

You can use other methods such as “We’ll pay people to run validators” or “People sign a contract to run validators and behave”.

Because of bitcoin’s ‘public’ structure, it needs a defence against miscreants and so uses “proof of work” to make it computationally difficult to add a block (see Defence section).  This has created a cost (equipment and running costs) of mining and therefore a need for incentivisation.

Just as the price of gold determines how much equipment you can spend on a gold mine, bitcoin’s price determines how much mining power is used to secure the network. The higher the price, the more mining there is, and the more a miscreant has to spend to bully the network.

So, miners do lots of mining, increasing the difficulty and raising the walls against network attacks. They are rewarded in bitcoin according to a schedule, and in time, as the block rewards reduce, transaction fees become the incentive that miners collect.


The idealised situation in Bitcoin where block rewards are replaced by transaction fees.

This is all very well in theory, but the more you look into this, the more interesting it gets, and with the bitcoin solution, the incentives may not quite have worked as expected. This is something for another article…


CONCLUSION


It is useful to understand blockchains in the context of bitcoin, but you should not assume that all blockchain ecosystems need bitcoin mechanisms such as tokens, proof of work mining, longest chain rule, etc.  Bitcoin is the first attempt at maintaining a decentralised, public ledger with no formal control or governance. Ethereum is the next iteration of a blockchain with smart contracts. There are significant challenges involved.

On the other hand, private or internal distributed ledgers and blockchains can be deployed to solve other sets of problems.  As ever, there are tradeoffs and pros and cons to each solution, and you need to consider these individually for each individual use case.

If you have a specific business problem which you think may be solvable with a blockchain, I would love to hear about this: please contact me.


Acknowledgments

With thanks to David Moskowitz, Tim Swanson, Roberto Capodieci. Errors, omissions, and simplifications are mine.

Software Defined Data Centers (SDDC)


Virtualization has led to the increasing commoditization of computer hardware, particularly enterprise systems like servers, networking equipment, and SAN storage. Cloud service providers are now so common and mainstream that IT professionals need strong business cases to justify purchasing physical compute resources, and even where a purchase is justified, private cloud options are typically considered first.

Software-Defined Data Center (SDDC) is the term used to describe an architecture where all elements of a computer infrastructure are virtualized, and automated management and provisioning tools are utilized.

Main compute elements would include:

  • Virtualized Compute;
  • Virtualized Networking;
  • Virtualized Storage.

Implementing an SDDC abstracts the physical equipment from the compute requirements and offers the following advantages:

  • Hardware Independence: traditional data centers tend to encourage, and in some cases lock in, specific hardware vendors;
  • Resource Pooling: CPU, memory and storage resources are pooled and made available without the need to consider the physical hardware;
  • Resilience and High Availability: not only are local resources pooled, but the SDDC can span multiple data centers and allow convergence with cloud providers such as AWS or Azure.

Hyperconverged infrastructures are similar to SDDCs in that many of the main elements are virtualized, but the main difference is that SDDC environments will typically have management and control systems, particularly orchestration, to simplify and in some cases automate provisioning.

Instead of applying CPU/RAM limits and reservations directly, organizations apply policies to virtual machines, which allow resource requirements to be defined and changed as application or business needs change. Policy changes can be automatically applied and controlled using management tools such as vRealize from VMware or Turbonomic. A sketch of the idea follows.
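
As a purely hypothetical illustration in Python (invented for this article, not vRealize’s or Turbonomic’s actual format), a policy might look something like this:

# Hypothetical policy format - for illustration only.
web_tier_policy = {
    "name": "web-tier-gold",     # hypothetical policy name
    "cpu_shares": "high",        # relative priority, not a fixed core count
    "memory_min_gb": 8,          # reservation the resource pool must honour
    "memory_max_gb": 32,         # ceiling the workload may burst to
    "storage_class": "ssd",      # matched against pooled storage capabilities
}

def apply_policy(vm, policy):
    # In a real SDDC, the orchestration layer would translate the policy
    # into concrete placements and limits across the pooled resources;
    # here we simply attach it to the virtual machine record.
    vm["policy"] = policy

vm = {"id": "vm-042"}
apply_policy(vm, web_tier_policy)    # change the policy, not the hardware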

In SAN-based infrastructures, storage is allocated as logical unit number (LUN) addresses on the SAN and made available as virtual machine disks. With virtualized storage, virtual workloads are decoupled from physical storage: software-defined storage pools all storage devices into a data plane and assigns storage using policy-based controls aligned with the performance characteristics of the underlying storage devices. This creates a pooled storage resource that caters to the needs of the application or virtual machine.

An SDDC must have three fundamental features:

  1. It must be able to run applications without requiring them to be redesigned or re-architected;
  2. It must allow organizations to utilize staff’s existing virtualization skills while providing self-service capabilities;
  3. It must allow seamless management of standard data centers and cloud-based services.

The benefits of implementing an SDDC are:

  1. Significantly better resource utilization that will improve efficiency and lower costs;
  2. Rapid application provisioning;
  3. Ability to provide very high granularity to application security;
  4. Compute workloads that are evenly spread across the SDDC environment.

The Software-Defined Data Center (SDDC) is a combination of virtualization and automation that enables IT to adopt a hybrid cloud strategy and provides enhanced efficiency and security at lower cost for IT projects.

www.cto.guru

Flexibility And Cost-saving – Two Greatest Advantages Of Using Intelligentia’s Cloud Services


Amazon Web Services India has already established its stronghold among a variety of industrial verticals in the country. AWS Managing Director Iain Gavin might consider flexibility the core advantage of their cloud solutions, but cost-effectiveness is undoubtedly an equally compelling aspect, and the growing number of AWS users in India agree. If you are about to choose among the top cloud computing service providers, read the following discussion to see how these two important advantages can be best enjoyed with AWS.

The Pay-as-you-use Factor:

A host of Amazon cloud services follow this pricing model which brings together the advantages of flexibility as well as cost-saving. Some of the most notable of these services include:

  • Amazon S3: The Simple Storage Service (S3) allows flexible data storage and retrieval in any amount, at any time and from anywhere. No minimum fee is charged and the users pay for what they use.
  • Amazon EC2: The Elastic Compute Cloud (EC2) service offers resizable cloud computing capacity that can be increased or decreased as per your requirement. While flexibility is the main draw here, you pay for EC2 instances on an hourly basis with no upfront payments, or reserve capacity at discounts of up to 75% (see the rough cost sketch after this list).
  • Amazon SimpleDB: Database administration can be burdensome, expensive and complicated. SimpleDB offers flexible DB administration in a highly secure and cost-saving manner.
  • Others: Other Amazon cloud computing services offering agility, scalability and cost saving include SQS (Simple Queue Service), EMR (Elastic MapReduce), Import/Export and many more.
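
As a back-of-the-envelope illustration of the pay-as-you-use model, here is a small Python calculation using a hypothetical hourly rate (consult the AWS price list for real figures):

# Hypothetical prices for illustration only.
hourly_rate = 0.10                  # assumed on-demand $/hour for one instance
hours_per_month = 730               # average hours in a month

on_demand = hourly_rate * hours_per_month
reserved = on_demand * (1 - 0.75)   # the up-to-75% discount mentioned above

print(f"on-demand: ${on_demand:.2f}/month, reserved: ${reserved:.2f}/month")
# on-demand: $73.00/month, reserved: $18.25/month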

Freedom to Choose:

When you decide to build a business application on the cloud, what matters is the freedom to choose the platforms of your choice. Amazon Web Services offers unparalleled flexibility in this direction, as it allows you to select among:

  • Popular programming languages like .Net, PHP, Java, Python, Ruby and others.
  • Databases like MySQL, MS SQL, Percona, MongoDB, Oracle and more.
  • Web servers like Tomcat, Apache, JBoss, GlassFish and more.

Flexibility and Cost-saving Go Hand-in-hand:

The discussion so far highlights one important point: users of AWS support services are able to enjoy flexibility and cost-effectiveness simultaneously. One great example in this regard is Amazon Auto Scaling, which maintains the balance between performance, costs and requirements when you use Elastic Compute Cloud. There are several other examples that prove this point.

Is AWS For You?

If you are serious about migrating IT infrastructure, application development, storage and other critical business-influencing aspects from on-premises to the cloud, AWS is the perfect choice for your business. Considering that the small and medium business (SMB) sector in India is huge and prospering, Amazon cloud solutions offer great cost-saving advantages. In addition, there are other benefits of Amazon Web Services support, including:

  • Ease of use
  • Scalability, which again is an extended benefit of flexibility
  • Reliability
  • Security

Considering that your business growth is highly dependent on information technology, it is important that you save IT operation costs while enjoying flexible solutions with AWS.

Using PKI with the Cloud – Securing the Web the Sophisticated Way


PKI is an abbreviation for Public Key Infrastructure. It is a comprehensive system aimed at providing digital signature services and public-key encryption. The purpose of a PKI is to provide management of certificates and keys. When an organization opts for public key infrastructure for management of keys and certificates, it establishes a reliable and safe networking environment. Furthermore, with PKI, a wide array of applications can take advantage of stipulated services.

What Makes PKI Inevitable For Cloud Computing

With the prevalence of cloud computing, PKI technology heralds a renaissance in which computer-to-computer communications are more secure than ever before. PKI can be thought of as an arrangement which is directly responsible for handling issues like public key assignment and digital certificates. These digital certificates are issued by a certification authority as part of the process followed by the PKI. This process also involves the binding of validated user identities to public keys.

User identities are verified either by a certification authority or by a separate registration authority. Once the user identity is verified, the certification authority uses a unique digital signature to seal and stamp the certificate. Since secure online network communications and online transactions are the most sought-after features cloud computing is expected to provide, PKI is what makes them possible. Furthermore, PKI also provides a certificate holder with authentication of his business identity.

How It Reduces Web Vulnerability

There are several services that PKI is expected to provide to ensure its effective implementation. What makes PKI valuable for cloud computing is that it allows the use of digital signatures and encryption in multiple applications. At the same time, transparency is the most significant constraint on a public-key infrastructure: cloud computing can be used optimally only when users are not required to understand how the PKI manages keys and services. A handful of areas that come under the jurisdiction of PKI for providing the desired services of certificate and public key management are outlined below (a minimal signing sketch follows the list):

  • Issuance of key certificates;
  • Repository of certificates;
  • Revocation of certificates;
  • Support for backup and recovery;
  • Assurance of non-repudiation of digital certificates;
  • Automatic update of certificates and keys;
  • Cross-certification service.
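
To give a flavour of the signing and verification these services build on, here is a minimal Python sketch using the cryptography package. A real PKI wraps the public key in an X.509 certificate issued by a certification authority; this shows only the raw sign/verify step:

from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.exceptions import InvalidSignature

# Generate a key pair; in a real PKI the public half would be embedded
# in a certificate signed by the certification authority.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

message = b"contract: pay supplier 1,000 on delivery"
signature = private_key.sign(
    message,
    padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                salt_length=padding.PSS.MAX_LENGTH),
    hashes.SHA256(),
)

try:
    public_key.verify(
        signature,
        message,
        padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                    salt_length=padding.PSS.MAX_LENGTH),
        hashes.SHA256(),
    )
    print("signature valid")        # tampering with message would raise instead
except InvalidSignature:
    print("signature invalid")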

How It Works To Build Trust on Clouds

PKI plays a vital role when it comes to building trust on cloud computing services. Some aspects that PKI brings to cloud computing are briefly listed below:

• Security

When documents of a confidential nature are to be communicated over the cloud, security is of paramount importance. Since PKI ensures secure signing by making use of digital signatures with a digital ID that can be accessed exclusively by its user, it provides comprehensive protection.

• Reliability

The reliability a service offers is a telltale sign of its capabilities, seamless implementation and high level of security. With PKI, quality review, monitoring and development are aligned with the cloud service, substantiating its reliability.

• Open communications

Successful communication is directly associated with trust, and cloud services are required to respond to and acknowledge digitally aware end-users in the desired manner; this is what the use of PKI offers. With several ways of communicating available, it is the responsive and prompt communication PKI streamlines that makes the real difference.

Security Considerations for Cloud Applications


Be it Software as a Service (SaaS), Platform as a Service (PaaS) or Infrastructure as a Service (IaaS), cloud environments pose an increased threat to application data, and security practices need to give due consideration to the nuances that exist in cloud environments.

The steps to secure an application on a cloud computing infrastructure and the types of potential vulnerabilities depend on the cloud deployment model. Private cloud vulnerabilities closely match traditional IT architecture vulnerabilities; public cloud infrastructure, however, requires an organizational rethink of security architecture and processes. A secure cloud implementation must not only address the risks to confidentiality, integrity, and availability, but also the risks to data storage and access control.

Some of the common security considerations for applications in a cloud environment can be classified into the following categories:

  1. Application Lock-in

SaaS providers typically develop a custom application tailored to the needs of their target market. Customer data is stored in a custom database schema designed by the SaaS provider. Most SaaS providers offer API calls to read and export data records. However, if the provider does not offer a ready-made data ‘export’ routine, the customer will need to develop a program to extract their data. SaaS customers with a large user-base can incur very high switching costs when migrating to another SaaS provider and end-users could have extended availability issues.

  2. Vulnerabilities related to Authentication, Authorization, and Accounting

A poor system design could lead to unauthorized access to resources or privilege escalation. The causes of these vulnerabilities could include:

  1. Insecure storage of cloud access credentials by customer;
  2. Insufficient roles management;
  3. Credentials stored on a transitory machine.

Weak password policies or practices can expose corporate applications; strong or two-factor authentication for accessing cloud resources is highly recommended.

  3. User Provisioning and De-provisioning Vulnerabilities

Provisioning and De-provisioning can cause concern for the following reasons:

  1. Lack of control of the provisioning process;
  2. Identity of users may not be adequately verified at registration;
  3. Delays in synchronization between cloud system components;
  4. Multiple, unsynchronized copies of identity data;
  5. Credentials are vulnerable to interception and replay;
  6. De-provisioned credentials may still be valid due to time delays in the roll-out of a revocation.

  4. Weak or lack of encryption of archives and data in transit

Unencrypted data, or the use of weak encryption for archived data or data in transit, poses a great threat to the authenticity, confidentiality, and integrity of the data.

Organizations are recommended to define encryption approaches for their applications based on a host of factors, such as the forms of data present in the cloud, the cloud environment, and the available encryption technologies, to name a few. A minimal example of encrypting data before it leaves your control follows.
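
As a minimal sketch of encrypting archive data before it reaches cloud storage, the following uses Fernet (authenticated symmetric encryption) from the Python cryptography package. Key management, i.e. who holds the key and where, remains the hard part:

from cryptography.fernet import Fernet

key = Fernet.generate_key()        # keep this outside the cloud provider
f = Fernet(key)

archive = b"customer records, Q4 export"
ciphertext = f.encrypt(archive)    # safe to hand to the storage service

assert f.decrypt(ciphertext) == archive   # readable only with the key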

  5. Vulnerability assessment and Penetration testing process

The type of cloud model will have an impact on the type, or even the possibility, of carrying out penetration testing. For the most part, Platform as a Service (PaaS) and Infrastructure as a Service (IaaS) clouds will permit pen testing. However, Software as a Service (SaaS) providers are not likely to allow customers to pen test their applications and infrastructure. Customers normally have to rely on the testing carried out on the infrastructure as a whole, and this might not suit the security requirements of some.

  6. Lack of forensic readiness

While the cloud has the potential to improve forensic readiness, many providers do not provide appropriate services and terms of use to enable this. For example, SaaS providers will typically not provide access to the IP, firewall or system logs.

  7. Sanitization of sensitive media

Shared tenancy of physical storage resources means that data destruction policies can be hampered: for example, it may not be possible to physically destroy media because a disk may still be in use by another SaaS customer, or the disk that stored your data may be difficult to locate.

  8. Storage of data in multiple jurisdictions

Data stored in different, or even multiple, jurisdictions could leave the company vulnerable to unfavorable regulatory requirements. Companies may unknowingly violate regulations, especially if clear information is not provided about the jurisdiction of storage.

  9. Audit or certification not available to customer

The cloud provider may be unable to provide any assurance to the customer via audit certification. For instance, some cloud providers use open-source hypervisors or customized versions of them (e.g., Xen) which have not achieved any Common Criteria certification, a fundamental requirement for some organizations (e.g., US government agencies).

The cloud is surely going to be the next big thing and is going to change the way businesses work. Security is the biggest concern for cloud applications, but reducing the vulnerable aspects of a cloud system can reduce the risk and impact of threats on the system.

Deploying Applications to the Cloud – How to Make the Most Out of the Solution


Despite the fact that cloud computing is a comparatively new technology, it has revolutionized the way communication in today’s world takes place. Offering an inexpensive and flexible alternative to an organization’s infrastructure, the cloud network can be created, destroyed, reconfigured, shrunk or grown on demand.

With statistics showing a steep rise in the number of enterprises that plan to exploit the technology’s potential, it is of paramount importance to ensure that the cloud provides support for running and scaling any application. This can only be achieved by deploying it to the cloud.

How Web Deployment to the Cloud Works Wonders

Deploying an application to the cloud ensures certain advantages which are briefly outlined below.

  • Bid farewell to servers for good. Deployment of an application to the cloud guarantees instant execution.
  • It enables a secure, dedicated and scalable platform for managing simple and critical applications.
  • It ensures a reliable cloud platform where it is significantly easier to build, manage and test applications.

Understanding the Three Cloud Deployment Models

Cloud computing is classified in terms of the three deployment models that organizations use. Each of these models has its advantages, catering to diverse business needs. These deployment models are briefly outlined below.

Public Cloud

As the name suggests, this cloud model is accessible to the general public. Owned and managed by a third party, this type of cloud is an attractive, cost-effective infrastructure. Using a ‘pay as you go’ model, a public cloud shares a single resource among multiple clients, all of them using security and configuration settings supplied by the cloud provider. It can be thought of as a multi-tenancy solution where several users enjoy the same services simultaneously. Perks public clouds offer include ease of use, agility and cost effectiveness. However, data security concerns should be taken into consideration.

Private Cloud

Private clouds are solutions that are built exclusively for an enterprise or organization. Private clouds address the two shortcomings of public clouds: data security and control over resources. This is established through a firewall, where unauthorized personnel are denied access to data. It is an ideal choice for a business with stringent security requirements that desires complete control over the infrastructure. The advantages a business can secure with this deployment model are flexibility, optimal use of resources, data security, and better performance.

 Hybrid Cloud

Hybrid cloud is a deployment model that incorporates at least one public and one private cloud. Combining the advantages of the two, a hybrid cloud is aimed at achieving application portability. A hybrid cloud is a great choice to secure performance gains. Advantages a hybrid cloud has to offer are business agility, data security, and financial gains.

Prerequisites to Application Deployment

Characteristics essential to an application before it is migrated to the cloud are specified below.

  • Since an application comprises several components, it is of great importance that any licensing agreements associated with them be satisfied.
  • It is recommended that the application be designed with multi-threaded code. This allows processing to occur in smaller chunks, making it well suited to the cloud.
  • Make sure that the bandwidth requirements to seamlessly access applications on the cloud are met and addressed.
  • Clouds make use of IP (the Internet Protocol). The application should use the Internet Protocol as its communication mechanism, and use of TCP (Transmission Control Protocol) is strongly recommended.

Moving your application to the cloud is a smart business move and with all the information provided above, we hope that you will be able to accomplish it successfully.

Three Different Ways to Deploy Applications

Three different ways that are preferred by application architects to deploy their application to the cloud are as follows.

  1. Once you have installed the cloud software, log into the cloud. Copy the rpm or exe file of your application into the cloud, install it, and it’s done. This method is used when an application is to be used and deployed frequently;
  2. Make use of a virtual appliance. You can easily deploy your application provided it is in the appliance format. If it is not, you have to create an appliance using an appliance-building tool;
  3. Auto-install the application. This is done at run time. Often application architects do not see the need to create an appliance if the application has to run only once. This method not only establishes flexibility, but you also get a chance to have the right application installed automatically and repeatably.
