
Runtime cost analysis

Every transaction executed on-chain comes at a cost. Miners run and verify transactions, levying transaction fees for their work. These costs should be very familiar to anyone who has ever interacted with any blockchain.

But there are other costs that fewer people are aware of. They are called execution costs, and each block is limited by them. In addition to block size, these are what limit the number of transactions that can fit into a single block. They are also used to cap resource consumption when calling read-only functions, for example from a dapp UI; in that case, they limit a single function call.

Execution costs

In Clarity, execution costs are broken up into five different categories, each with its own limit.

| Category             | Block Limit   | Read-Only Limit |
| -------------------- | ------------- | --------------- |
| Runtime              | 5,000,000,000 | 1,000,000,000   |
| Read count           | 7,750         | 30              |
| Read length (bytes)  | 100,000,000   | 100,000         |
| Write count          | 7,750         | 0               |
| Write length (bytes) | 15,000,000    | 0               |

The runtime cost limits the overall complexity of the code that can be executed. For example, negating a boolean value is less complex than calculating a SHA-512 hash, so (not false) consumes less runtime cost than (sha512 "hello world"). This category is also affected by contract size.

The read count limits how many times we can reach into memory or the chain to extract a piece of information. It is affected by reading constants, variables, intermediate variables created with let, and maps, but also by some functions that need to save intermediate results during execution.
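As a quick sketch (the contract and all names are hypothetical), each read below counts toward the read count:

```clarity
;; Hypothetical contract: every read marked below increments the read count.
(define-data-var counter uint u0)
(define-map balances principal uint)

(define-read-only (example (who principal))
  (let (
      (current (var-get counter))                        ;; one variable read
      (balance (default-to u0 (map-get? balances who)))  ;; one map read
    )
    (+ current balance)))
```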

The read length (bytes) limits how much data we can read from memory or the chain. It is also affected by contract size. Calling into a contract using contract-call? increases the read length by an amount equal to the contract size in bytes, every time. If you call into a contract with a length of 2,000 bytes twice, the read length is increased by that amount twice, for 4,000 bytes in total.
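A hedged illustration, assuming a hypothetical .math contract that exposes a public add function:

```clarity
;; Each contract-call? below adds the full byte size of .math to the
;; read length, even though both calls happen in the same transaction.
(define-public (double-call (n uint))
  (let (
      (a (unwrap-panic (contract-call? .math add n u1)))  ;; + size of .math
      (b (unwrap-panic (contract-call? .math add n u2)))  ;; + size of .math again
    )
    (ok (+ a b))))
```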

The write count limits how many times we can write data to the chain. It increments when writing to variables and maps.

The write length (bytes) limits how much data we can write to the chain.

The read and write limits are quite self-explanatory and practically static: the more data we read and write, the closer we get to the limits. Runtime costs are more dynamic and more difficult to grasp, as each Clarity function has its own runtime cost. For some it is static; for others it changes based on how much data they have to process. For example, the not function always takes exactly one boolean and does the same amount of work, while filter takes an iterator function and a list that can contain any number of elements, making the amount of work required dynamic.

As developers, we have to be careful how we structure our code, which functions we use, and how we use them, as it is quite easy to write code that is 100% correct yet so expensive to execute that it eats a significant portion of the execution costs set for all transactions in a block. As a result, we will not be able to call it more than a few times per block, not to mention having to compete with others for such a large chunk of the block.


Analyzing costs can be challenging, so it is important to have good tooling. Clarinet has cost analysis features built in, which make the development lifecycle a lot easier. We will take a look at manual as well as automated cost analysis tests.

Using Clarinet to analyze execution costs

Clarinet offers two ways to measure execution costs. In a clarinet console session, the ::get_costs command takes an expression, executes it, and prints the costs it consumed in each of the five categories alongside the applicable limits. Alternatively, clarinet test --costs runs the test suite and reports the costs of the contract calls made during the run. A disclaimer with regards to clarinet test --costs: the reported numbers depend on the chain state and the inputs each test happens to use, so make sure your tests exercise realistic (and worst-case) scenarios before trusting the figures.
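Assuming a hypothetical .counter contract with an increment function, a console session could look like this (the exact output format may vary between Clarinet versions):

```
$ clarinet console
>> ::get_costs (contract-call? .counter increment)
```

Clarinet then prints the costs the expression consumed in each of the five categories, next to the limits.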

Setting up baseline

Before measuring anything, add an ultra-simple function to the contract we want to analyze, which we can use as a reference point:

(define-read-only (a) true)

Having a reference point is especially important when working with big contracts, because in big contracts, functions that we think should be cheap can turn out to be quite expensive due to contract size alone.
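Assuming the baseline function a was added to a hypothetical .my-contract, we measure it first and then the function under analysis; subtracting the baseline numbers from the second measurement approximates the cost of the function body itself:

```
>> ::get_costs (contract-call? .my-contract a)
>> ::get_costs (contract-call? .my-contract do-something u100)
```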

Gathering initial data

Measure all public functions and calculate how big a chunk (percentage-wise) of the block limits each of them eats. For example, a function that consumes 775 read count takes 10% of the 7,750 block limit, meaning no more than ten such calls can ever fit into a single block. The functions with the highest percentages are the first candidates for optimization.

Analyzing functions that require a lot of upfront setup

For functions that only make sense after a long sequence of preparatory calls, create a separate setup contract that contains the code responsible for easy and fast preparation, so that gathering the execution costs of a single function (or multiple functions) does not require repeating the setup by hand.
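A minimal sketch of such a setup contract, where .marketplace and its functions are assumptions for illustration:

```clarity
;; Hypothetical setup contract: deploy it next to the contract under
;; analysis and call prepare once to reach the desired state quickly.
(define-public (prepare)
  (begin
    (try! (contract-call? .marketplace list-item u1 u1000))
    (try! (contract-call? .marketplace list-item u2 u2000))
    (try! (contract-call? .marketplace place-bid u1 u1500))
    (ok true)))
```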

How to find low-hanging fruit, the best bang for the buck, and bottlenecks

Search for code repetitions, especially calls to other functions made inside our function. Try safe inlining, extract suspicious code into separate functions so it can be measured in isolation, and temporarily replace parts of the code with static values to see how much the replaced part contributes to the total cost.
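For example (all names hypothetical), a suspicious expression can be temporarily swapped for a static value of the same type and re-measured; if the costs drop sharply, the replaced expression was the bottleneck:

```clarity
;; Original, with a suspicious helper call:
;; (define-read-only (price (id uint)) (calculate-price id))

;; Temporary stub returning a static value of the same type:
(define-read-only (price (id uint)) u1000)
```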

Optimization techniques

  1. Reducing obvious code complexity (first write correct code, then work on the costs)
  2. Reducing the amount of data we have to read/write
  3. Reducing the number of times we reach for the same data on-chain (especially reading the same values from maps)
  4. Reorganizing code (using different functions, different algorithms)
  5. Splitting/combining maps and variables
  6. Combining multiple contract calls to the same contract into a single call
  7. Inlining and extracting logic
  8. Reducing the amount of data passed between functions
  9. Reversing logic (example: instead of counting positive values in a list, it might be cheaper to count zeros if we expect the list to contain only positive values)
  10. Bulk processing instead of loops (when we iterate over a list of uint, it might be cheaper to first check whether all of them are positive (see the previous point) than to do it inside fold)
  11. Reducing contract size by removing comments and using shorter names (functions, variables, keys in tuples)
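Technique 3 is often the easiest win. A sketch with hypothetical names, caching a map entry instead of reading it twice:

```clarity
(define-map accounts principal {balance: uint, locked: uint})

;; Before: the same entry is read twice (two read counts).
(define-read-only (available-v1 (who principal))
  (- (get balance (default-to {balance: u0, locked: u0} (map-get? accounts who)))
     (get locked (default-to {balance: u0, locked: u0} (map-get? accounts who)))))

;; After: one read, cached in a let binding and reused for both fields.
(define-read-only (available-v2 (who principal))
  (let ((account (default-to {balance: u0, locked: u0} (map-get? accounts who))))
    (- (get balance account) (get locked account))))
```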

Long term recommendations

Premature Optimization Is the Root of All Evil

Start with working code and focus on functionality first.

Write tests. A lot of tests. Code refactoring is difficult, and tests help you make sure that your code behaves the same way before and after a change.

Aim for low-hanging fruit first.

Small gains accumulate very quickly, a snowball effect. Sometimes reducing the costs of one function by a mere 0.01% can result in a 2% reduction in another.

If you create a huge tuple in one function that is called sporadically, and then read it but use only a small portion of it in another function that is called very, very often, then it is a great candidate for a data structure split.
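A hedged sketch of such a split, with hypothetical maps and fields:

```clarity
;; Before: one map holding everything; every read of the price pays
;; the read length of the entire tuple.
(define-map listings uint {price: uint, owner: principal, uri: (string-ascii 256)})

;; After: the frequently read field lives in its own small map.
(define-map listing-prices uint uint)
(define-map listing-details uint {owner: principal, uri: (string-ascii 256)})

(define-read-only (get-price (id uint))
  (map-get? listing-prices id))
```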

If you execute fold over a small list (up to 10 elements), and each element of that list is a tuple and you return a tuple, test whether unrolled (inlined) code won't be cheaper to execute than fold (quite often it will).
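A simplified sketch of the comparison (summing uints rather than tuples, for brevity):

```clarity
;; Folding over a small, fixed-size list...
(define-read-only (sum-fold)
  (fold + (list u1 u2 u3 u4) u0))

;; ...versus the same additions unrolled.
(define-read-only (sum-unrolled)
  (+ u1 u2 u3 u4))
```

Measure both with ::get_costs and keep the cheaper one.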

If you create variables inside let and use them once and only once, check code inlining. There is roughly a 50% chance it will be cheaper.
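For instance (hypothetical fee example):

```clarity
(define-data-var fee-rate uint u250)

;; A let binding used exactly once...
(define-read-only (fee-with-let (amount uint))
  (let ((rate (var-get fee-rate)))
    (/ (* amount rate) u10000)))

;; ...versus the same expression inlined.
(define-read-only (fee-inlined (amount uint))
  (/ (* amount (var-get fee-rate)) u10000))
```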

Analyze read-only functions. If they don't fit into the read-only limits, and you have them only for UI/off-chain purposes, try to expose the data used by these functions and handle the calculations off-chain.
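A minimal sketch of that approach, with hypothetical names: instead of a read-only function that iterates and aggregates on-chain (and may blow the read-only limits), expose the raw entries and let the UI aggregate them:

```clarity
(define-map deposits uint uint)

;; The UI fetches entries one by one (or via an API/indexer) and sums
;; them off-chain, keeping each call far below the read-only limits.
(define-read-only (get-deposit (id uint))
  (map-get? deposits id))
```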