Network of Underwriters Guild

This post describes the implementation design of the core function of the Network of Underwriters Guild (NUG): updating the assets belonging to the providing party and the receiving party.

Procedural overview

Underwriter sends

{ 
  id: reference to underwriter's own ledger
  transaction: []
}

signed with their private key

The NUG software will update the assets of two users, but with billions of rows and tens of thousands of underwriters, we have to make sure no one else starts updating our rows while we're at it!

We will have a last_tick table with one row telling us what the last generated tick is, a ticks table holding the index[1], and an assets table holding the assets.
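A minimal sketch of this three-table schema (SQLite syntax for illustration; the exact column types, and any columns beyond those used in the snippets below, are assumptions):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
-- one row holding the most recently generated tick
create table last_tick (
    tick text,               -- SHA256 hex of the last tick
    lock integer default 0
);

-- append-only index of all generated ticks
create table ticks (
    tick    text primary key,  -- SHA256 hex
    callTag text,              -- the underwriter's callTag
    id      text               -- reference to the underwriter's own ledger
);

-- current holdings, one row per (callTag, asset)
create table assets (
    callTag  text,
    asset    text,
    quantity integer,
    lock     integer default 0
);
""")
```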

The NUG will call a function to generate a tick (a SHA256 hash) from

{
  uct: underwriter's callTag
  ptick: previous tick
  time: timestamp
  id: the id provided by the underwriter
  hash: SHA256 hash of the transaction
}

once it has updated the assets -

begin transaction;
--
-- provider
update assets set quantity = quantity - @quantity, lock = 1 
where callTag = @providing_user
and asset = @asset
and quantity - @quantity >= 0
and lock = 0;
--
-- receiver
update assets set quantity = quantity + @quantity, lock = 1
where callTag = @receiving_user
and asset = @asset
and lock = 0;
--
-- we might want to validate the rows before we commit!
select * 
from assets 
where asset = @asset and callTag in (@providing_user,@receiving_user);
--
-- TODO verify exactly 2 rows came back before releasing the locks
update assets set lock=0
where callTag in (@providing_user,@receiving_user)
and asset = @asset
and lock = 1;
commit transaction;
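As a quick illustration of why the `quantity - @quantity >= 0` guard on the provider update matters, here is a sketch in Python with SQLite (table layout and names as assumed above; `rowcount` tells us whether the guard let the debit through):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("create table assets (callTag text, asset text, quantity integer, lock integer default 0)")
conn.execute("insert into assets values ('alice', 'XYZ', 10, 0)")

def debit(conn, call_tag, asset, quantity):
    # mirrors the provider update: refuse to drive quantity below zero
    cur = conn.execute(
        """update assets set quantity = quantity - ?
           where callTag = ? and asset = ?
           and quantity - ? >= 0 and lock = 0""",
        (quantity, call_tag, asset, quantity))
    return cur.rowcount == 1  # exactly one row updated means the guard passed

print(debit(conn, "alice", "XYZ", 7))   # True  - a balance of 10 covers 7
print(debit(conn, "alice", "XYZ", 7))   # False - only 3 left, debit refused
```

Because the guard lives in the `where` clause, an overdraft simply matches zero rows and the data is left untouched, which is exactly what the "verify 2 rows" check before commit is meant to catch.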
--
-- update last_tick
begin transaction;
update last_tick set lock=1;
set @tick = build_tick(@uct, (select tick from last_tick), @time, @id, @hash);
insert into ticks (tick, callTag, id) values (@tick, @uct, @id);
update last_tick set tick = @tick, lock = 0;
commit transaction;

and return that tick - and we're done!
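The build_tick function itself is not spelled out above; a sketch of one possible implementation, assuming the five fields are simply concatenated before hashing (the real field encoding is a design decision not covered here):

```python
import hashlib

def build_tick(uct, ptick, time, id_, tx_hash):
    # chain the new tick to the previous one by hashing all five fields together
    payload = "|".join([uct, ptick, str(time), id_, tx_hash])
    return hashlib.sha256(payload.encode("utf-8")).hexdigest()

genesis = "0" * 64  # hypothetical first "previous tick"
t1 = build_tick("UW-1", genesis, 1518000000, "ref-42", hashlib.sha256(b"[]").hexdigest())
t2 = build_tick("UW-2", t1, 1518000001, "ref-43", hashlib.sha256(b"[]").hexdigest())
print(len(t1), t1 != t2)  # 64 True - each tick is a fresh 64-char hex digest
```

Feeding the previous tick into each new one makes the ticks table a hash chain: any tampering with an old row breaks every tick after it.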

This statement will get called thousands of times every second - it is imperative that it performs at its optimum! Or that some other implementation allows us to generate a new last_tick perhaps 100,000 times per second![2]
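One hypothetical way past the single-row last_tick bottleneck is batching: read the last tick once, chain a whole queue of pending ticks in memory, then write them back with one insert and one last_tick update instead of one transaction each. A self-contained sketch of the in-memory chaining (all names and the field encoding are assumptions):

```python
import hashlib

def chain_ticks(last_tick, pending):
    # pending: list of (uct, time, id, tx_hash) tuples awaiting a tick;
    # chain them off last_tick in memory so the database sees one bulk
    # insert into ticks plus one update of last_tick for the whole batch
    out, prev = [], last_tick
    for uct, time, id_, tx_hash in pending:
        prev = hashlib.sha256(
            "|".join([uct, prev, str(time), id_, tx_hash]).encode("utf-8")
        ).hexdigest()
        out.append(prev)
    return out

queue = [("UW-%d" % i, 1518000000 + i, "ref-%d" % i, "h") for i in range(1000)]
batch = chain_ticks("0" * 64, queue)
print(len(batch), len(set(batch)))  # 1000 1000 - a thousand distinct chained ticks
```

The hashing itself is cheap; whether this reaches the hoped-for 100K ticks/second would depend entirely on the write path behind it.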


  1. The ticks table will be VERY big - not in row size, but in number of rows! We will probably have to roll ticks off periodically into week, month, or year tables - named something like ticks_2018_02_01. ↩︎

  2. If 7 billion people do just one 'buy/sell' per day, we are looking at 80K transactions/second. ↩︎