From YouTube: Basic Technical Walkthrough - Baseline Protocol
Description
Baseline architecture and code walkthrough conducted by Kartheek Solipuram at the November 8, 2021 Baseline Core Devs Session.
Join our Core Dev or other community teams here:
https://www.signupgenius.com/org/baseline#/
All right. As a starting point, I'll go over some of the basics of the Baseline Protocol, particularly around some of the core components that we've introduced with help from the main members of the community, predominantly from Provide and ConsenSys, and some of our work as we came along to open source this and make it the Baseline project, you know, put it into the public domain.
Essentially, I'm not going to go over all of the motivations, which I'm sure most of you are aware of, but just as a quick roundup: the main consideration for Baseline is how enterprises communicate with each other, any form of inter-business communication, any form of logical interoperation, or any form of synchronization, for that matter, on data, on process, or on any form of business logic that enterprises deal with on a day-to-day basis with their counterparties.
So with that being said, some of the key considerations are around security: making sure that the data any of the enterprise systems use to interact with the mainnet doesn't itself leave the legacy systems, and that any form of data interchange or transformation, etc., remains hidden. While doing so, it is significantly important to make sure that the information being shared among the various counterparties, or the various parties on the Baseline Protocol, is consistent. The way this consistency can be maintained is by using some form of state accumulator, some mechanism by which the state of the data that any party commits, or any party attests to, is consistent for any other party, that is, a consumer of the data, an attester of the data, or a validator of the data, for that matter.
In that sense, one of the main considerations with data consistency is compatibility with multiple data systems. Different organizations may have different types of data systems or database technologies, or even different data governance considerations or data architecture paradigms, and we need to be aware of all of those aspects, all of those variations or variabilities.
The main consideration is making sure that the data across the various types of models is consistent across all the enterprises, and in doing so we want to use mainnet as the primary source of truth, or a common reference frame, by which all the different parties can more or less subscribe to the various changes that are happening in the data, or the various state changes that might impact the primary blockchain itself. We do so by introducing a set of smart contracts to verify data consistency while keeping the data hidden.
One of the other considerations is the changing nature of mainnet itself. With the various forks that have come up in the last two years, even with the Byzantium and London hard forks, and with new types of precompiles getting added to enable different forms of signature mechanisms, or even zero-knowledge-proof-based proving or verification mechanisms, things are continuously changing or continuously being upgraded with the introduction of new EIPs and so on. So it's important that, as we line things up to use mainnet as a common reference frame for data, the stack also needs to be extensible to the various changing privacy techniques. And finally, scalability is also of major importance, as we know that there have been numerous issues, or rather numerous quote-unquote concerns, with the rising gas fees of conducting any business or any transactions on the mainnet.
For scale, not only is it important to be able to commit any of the private elements, private data, or private logic to the mainnet, but it also needs to extend to many other private states, along with the ability to compose multiple proofs. Given the zero-knowledge proof that is embedded in any verification of business logic, or business data, or data synchronization for that matter, it's important to compose these proofs and conduct efficient verification.
I just wanted to give a basic roundup of why and how we came about building the Baseline stack, and working with folks like Andreas and Kyle to come up with standards as well in this case. So with that, I'll go forward into some of the primary Baseline components. As I'm going over these, I'll switch to the repo as well in the interim, to walk through some of the elements, or rather the library components, that were used in the various reference implementations.
Starting from the top, these five packages, or these five components, are some of the critical components of the Baseline stack. It starts off with api, which is about creating a standard interface enabling solution providers, or solution adopters of the Baseline Protocol, or any organizations adopting the standard. The api acts as a mechanism for enabling either REST-based endpoints or any form of microservice-enabled endpoints for businesses or parties to conduct business with each other. Let me quickly switch to the repo itself.
In the Baseline stack, all the various services or components are structured under baseline core, and particularly under api we have a TypeScript interface definition, where you can see that we have various interfaces to essentially maintain an RPC connection to the mainnet, and there are a bunch of other interfaces as well. So, for example, one of the main interfaces here is the baseline RPC.
We have a few methods to be able to get the commitment of any object or data that is getting attested or verified and then put into a merkle tree. I'll talk about the overall business process as to where the merkle tree comes in, but the idea is that this baseline RPC interface essentially contains a list of methods to track and follow the state changes that are happening on a merkle tree.
So the idea is that, say for example, any given organization is communicating a message, or any form of data packet, to other organizations. Before we go into the details of how we do so, I'll just cover at the surface what these methods do.
When organization A is communicating with organization B about any business document, any business logic, or any form of data structure, because we use zero-knowledge proofs and privacy-based elements, it is important that we don't send or commit the message directly to any smart contract, which is public, or rather which is the means by which participants on the mainnet can access the state changes directly. Instead, we use a localization concept, or an accumulator concept, where we push the hashes of any of the data elements that we would like to communicate between one organization and another directly into a merkle tree. Each leaf of the merkle tree corresponds to a hash of the data that is communicated between the parties, and the hash itself is embedded with other properties around zero-knowledge proofs as well, which I'll go into in a few moments.
This set of methods essentially helps in being able to get the latest root of the merkle tree, get a proof given a particular leaf, track a particular leaf, or even verify whether or not a particular leaf is part of a merkle tree.
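As a rough sketch of what that kind of tracking and membership checking involves, here is a minimal in-memory merkle accumulator in TypeScript. The class and method names are illustrative only, not the actual api package interface, and plain SHA-256 over hex strings stands in for the hashing used on chain.

```typescript
import { createHash } from "node:crypto";

const sha256 = (data: string): string =>
  createHash("sha256").update(data).digest("hex");

class MerkleAccumulator {
  private leaves: string[] = [];

  // Insert the hash of a data element (a "commitment") as a new leaf.
  insertLeaf(data: string): number {
    this.leaves.push(sha256(data));
    return this.leaves.length - 1;
  }

  // Recompute the root over the current leaves, duplicating odd nodes.
  getRoot(): string {
    let level = [...this.leaves];
    if (level.length === 0) return sha256("");
    while (level.length > 1) {
      const next: string[] = [];
      for (let i = 0; i < level.length; i += 2) {
        next.push(sha256(level[i] + (level[i + 1] ?? level[i])));
      }
      level = next;
    }
    return level[0];
  }

  // Sibling path from a leaf up to the root.
  getProof(index: number): { sibling: string; left: boolean }[] {
    const proof: { sibling: string; left: boolean }[] = [];
    let level = [...this.leaves];
    let i = index;
    while (level.length > 1) {
      const pair = i % 2 === 0 ? i + 1 : i - 1;
      proof.push({ sibling: level[pair] ?? level[i], left: pair < i });
      const next: string[] = [];
      for (let j = 0; j < level.length; j += 2) {
        next.push(sha256(level[j] + (level[j + 1] ?? level[j])));
      }
      level = next;
      i = Math.floor(i / 2);
    }
    return proof;
  }

  // Verify that a leaf hash is a member of the tree with the given root.
  static verifyMembership(
    leaf: string,
    proof: { sibling: string; left: boolean }[],
    root: string
  ): boolean {
    let acc = leaf;
    for (const step of proof) {
      acc = step.left ? sha256(step.sibling + acc) : sha256(acc + step.sibling);
    }
    return acc === root;
  }
}

const tree = new MerkleAccumulator();
const idx = tree.insertLeaf("purchase-order-123");
tree.insertLeaf("invoice-456");
const ok = MerkleAccumulator.verifyMembership(
  sha256("purchase-order-123"),
  tree.getProof(idx),
  tree.getRoot()
);
console.log(ok); // true
```

The real stack tracks the on-chain tree while mirroring it in local persistence; this sketch only shows the root/proof/membership mechanics those methods revolve around.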
So essentially, this entire set of baseline RPC methods provides the ability to store historic hashes in the merkle tree on chain, but also to maintain a parallel collection in your local MongoDB, or rather, when I say local MongoDB, as part of the persistence layer in the Baseline Protocol, which is one of the next elements I'll go to. The idea is that each hash that is getting stored on the blockchain also needs to have a representative structure, or storage, in your local databases as well.
When it comes to participants, we also have some of the standard methods for connecting to a blockchain service here, like broadcasting a transaction, fetching transaction receipts, or even signing any payload before sending any message. Then we have the registry interface; this is the interaction with the organization registry smart contract.
The idea is that, when we are interacting with various parties on the mainnet, we need to be able to have some form of identification, and also an enablement, or a provision, for, say for example, different KYC or identity providers to be able to detect or identify who the counterparty is, to identify who the various organizations are. So to that extent, we have various methods which essentially work behind the scenes.
You could think of each of the implementations of these methods as interacting with the smart contract using RPC methods. So to that extent we have workgroup functionalities: to create workgroups, to fetch the various workgroups, their details, and the organizations under the workgroups, and so on.
The traditional keys that most of us know are any form of secrets that are used to sign transactions in general on Ethereum, which is your standard Ethereum public key/private key pair. The vault acts as an enabler for storing such private keys. But as we go forward, we'll notice that there are some elements of privacy as well, especially when it comes to ZKP.
We use a special form of signatures to ensure that when parties sign a certain message, their identity is hidden under zero-knowledge proofs, and to be able to do so you need some special curves, like a form of EdDSA curves, and also BLS-type signatures, to be used as part of signing a given message.
So, apart from the traditional private keys that we nominally use for signing any transactions to interact with the mainnet, the vault is also designed, or rather facilitated, to store other types of private keys as well, like the private keys that are needed for BLS signatures, or even the curves themselves that are used to construct the BLS signatures, and so on. So in that sense, api acts as a way to interact with the blockchain, a way to interact with secure key storage, and finally a way to interact with the smart contracts that are deployed on the mainnet.
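To make the "one vault, several key types" idea concrete, here is a minimal hypothetical sketch. The type names and key-spec strings are illustrative assumptions, not the actual vault API, and the key material here is a placeholder rather than real curve generation.

```typescript
// Key schemes mentioned in the walkthrough: the standard Ethereum
// transaction-signing scheme plus circuit-friendly and BLS schemes.
type KeySpec = "secp256k1" | "eddsa" | "bls";

interface StoredKey {
  spec: KeySpec;
  privateKey: Uint8Array; // in a real vault the secret never leaves it
}

class Vault {
  private keys = new Map<string, StoredKey>();

  // Placeholder: a real vault would generate key material on the named
  // curve and guard it; here we only record which scheme a key uses.
  createKey(name: string, spec: KeySpec): void {
    this.keys.set(name, { spec, privateKey: new Uint8Array(32) });
  }

  keySpec(name: string): KeySpec | undefined {
    return this.keys.get(name)?.spec;
  }
}

const vault = new Vault();
vault.createKey("tx-signing", "secp256k1"); // normal mainnet transactions
vault.createKey("zkp-signature", "eddsa");  // identity hidden under ZKPs
console.log(vault.keySpec("zkp-signature")); // "eddsa"
```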
The next one we'll go into is contracts. As part of contracts, some of the primary contracts that we have are around setting up workgroups. Obviously this set of smart contracts is just the base one, and any extensions are welcome and possible here. But when we came up with this architecture originally, the idea was that we want an ability for different organizations to be able to identify each other, to know who they are working with, and also the ability to ensure that these organizations are valid and identifiable organizations.
The notion we came up with was that there can be many organizations which are effectively using the mainnet, and especially interacting with, say, the Baseline Protocol stack, but different organizations have different business imperatives, and they have different working relationships with different other types of organizations.
Each of them could be assumed to be part of the same workgroup, but to the same extent, each of these entities, like a buyer, supplier, distributor, or transportation agency, may have their own sub-service lines or subsidiaries, or even downstream business partners that they might be doing business with. A transportation agency, for example, may in turn be working with, say, a local logistics company or a warehousing company, to be able to store goods upon receipt of any manufactured goods and to be able to ship them over to the intended recipients, like the intended buyers.
So to that extent, again, the workgroup that a transportation agency has may be very local, so to speak, in terms of business imperatives for the transportation agency. In a similar manner, a buyer, or a different participant in the ecosystem, may have their own downstream providers that they deal with: their own tier-2 suppliers or tier-2 manufacturers, and so on.
Regardless of all this, what we came up with was an ability for different organizations to identify themselves and form workgroups, so that they can uniquely identify which workgroup they are part of. By the same token, we have organizations being added to the workgroup, removed from the workgroup, or moved from one workgroup to another.
So the idea is that we would want a standard, or a good EIP, that we could use for representing organizations and their workgroups. And not just that: we also want to make sure that the set of contracts for a workgroup, say the merkle tree contracts or the shield contracts that are needed for zero-knowledge proofing, are also custom to each workgroup. So, hypothetically, a workgroup that consists of just a supplier and a buyer could be rather attuned to just verifying, say, proofs of any documents that are being shared around the agreements that are agreed upon, like the goods that need to be procured, or, say, the purchase orders or invoices, etc.
Similarly, to be able to verify the shipping contract details, or to ensure the data integrity of the shipping contract between, say, a transportation agency and a local warehousing agency, or a logistics company, they may have their own verifications: verifying the shipping notice, or verifying a shipping contract, or verifying the logistics contract, and so on. So to that extent, it's key that the organization registry as such not only has a facility to create different workgroups, but that you're also able to register different types of interfaces.
A shield interface is an example in this case, where a shield contract and a verifier, as a combination, can have different interfaces or different definitions based on how they are interacting with the merkle tree itself. So to that extent, the organization registry also has functionality to register the various interfaces. Let me just quickly switch to the baseline contracts.
This is the implementation of the org registry, and here you can see that we have an ability to register organizations, which is where we capture all the important details of a given organization, like its Ethereum address, and a unique name, which could be an ENS domain as well, for that matter. Then you also have the messaging endpoint. This is one of the older artifacts, from when we were using, or relying on, Whisper for inter-party communication, but as I go into messaging next, we'll look particularly at the NATS key here.
This whisper key is more of an indicator of the messaging endpoint, or rather, the key that is needed to identify your counterparty when it comes to sending private messages. Then you also have a ZKP public key to indicate an organization's main signature key. In the case of Baseline, and particularly in the case of sending secure messages across, one of the key aspects is the type of signature mechanism that is being used.
It is that particular signature mechanism's public key that we would want to capture over here, as part of the registry entry for the organization. So this functionality provides an ability to register organizations, not much different, very similar really, to any of the other naming registry or organization registry contracts that folks must have seen. But we also have another method where we are essentially registering interfaces, so that different types of interfaces, or different types of implementations of a given interface, can be used by the workgroup organizations.
The registration of interfaces includes a token address, which is representative of a token interface; a shield address, which is representative of a shield interface; and a verifier address, which is of a verifier interface. The main aspect over here is that when we chose to implement this, we used the ERC-1820 standard, one of the EIPs out there, which was proposed about three years ago and published to the Ethereum community, and which follows the principle of making sure that different interfaces can be registered in one sort of global registry.
The idea is that when we are using ERC-1820, it acts as a factory of factories, or an interface of interfaces. In one of John's famous phrases, it's like a corporate phone book of sorts: you can use ERC-1820 as a registry of registries, so that you can have various types of organization registries, with their workgroups, or shields, or verifiers, etc., but all of them can be registered with one parent ERC-1820.
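A minimal in-memory model of that registry-of-registries idea, under clearly stated assumptions: the real ERC-1820 contract keys implementers by a keccak256 hash of the interface name and includes manager and ERC-165 machinery that this sketch omits, and SHA-256 plus placeholder addresses stand in purely for illustration.

```typescript
import { createHash } from "node:crypto";

type Address = string;

class InterfaceRegistry {
  private implementers = new Map<string, Address>();

  // ERC-1820 uses keccak256(interfaceName); sha256 is a stand-in here.
  interfaceHash(name: string): string {
    return createHash("sha256").update(name).digest("hex");
  }

  // Record which contract implements a named interface for an account.
  setInterfaceImplementer(account: Address, name: string, impl: Address): void {
    this.implementers.set(`${account}:${this.interfaceHash(name)}`, impl);
  }

  getInterfaceImplementer(account: Address, name: string): Address | undefined {
    return this.implementers.get(`${account}:${this.interfaceHash(name)}`);
  }
}

// One workgroup registers its org registry, shield, and verifier under
// a single parent registry, as described above.
const registry = new InterfaceRegistry();
const workgroup: Address = "0xWorkgroupA";
registry.setInterfaceImplementer(workgroup, "IOrgRegistry", "0xOrgRegistry");
registry.setInterfaceImplementer(workgroup, "IShield", "0xShield");
registry.setInterfaceImplementer(workgroup, "IVerifier", "0xVerifier");

const shield = registry.getInterfaceImplementer(workgroup, "IShield");
console.log(shield); // "0xShield"
```

The design point this models is the lookup direction: given an organization or workgroup and an interface name, one parent registry answers which deployed contract to talk to.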
So that's about the contracts. Then we go into messaging. Messaging is essentially a way to send private messages, or encrypted messages, from one organization to another. These encrypted messages are needed to be able to communicate any private information, or rather, particularly, the information that is needed to verify a zero-knowledge proof, which is what we'll go into next. But at the core level, the messaging package is a library containing a set of TypeScript-based interfaces to interact with any form of messaging provider.
In the repo itself, we have two types of messaging providers. One of the main ones, which we use now, is called NATS, and you'd see that in this interface we have a bunch of standard methods to be able to connect to a NATS endpoint, or rather make a connection with the NATS service, disconnect from the NATS service, and also publish or subscribe to messages that are being sent across between one party and another on the NATS interface itself.
So in short, this particular package deals with enabling interfaces, or creating standard methods and functionalities, to be able to connect to NATS, and similarly to be able to connect to Whisper as well. Although we don't use Whisper much these days, in favor of NATS for that matter, the idea is that this could be extended to other types of messaging providers as well.
In the past, we did come across several questions around the ability to add somewhat more beefy messaging providers like Kafka or RabbitMQ; to that extent, we used a more lightweight protocol like NATS. But that being said, it is obviously open for developers to extend this messaging interface to not just NATS and Whisper: you can go ahead and create other forms of interfaces as well.
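The provider pattern described here can be sketched like this. The interface and method names are hypothetical, not the actual messaging package API, and the in-memory provider exists only so the shape can be exercised without a running NATS server; a real provider would wrap the NATS client.

```typescript
interface IMessagingService {
  connect(endpoint: string): Promise<void>;
  disconnect(): Promise<void>;
  publish(subject: string, payload: Uint8Array): Promise<void>;
  subscribe(subject: string, handler: (payload: Uint8Array) => void): void;
}

// In-memory stand-in: delivers published payloads directly to local
// subscribers instead of over a broker.
class InMemoryMessagingService implements IMessagingService {
  private connected = false;
  private handlers = new Map<string, ((p: Uint8Array) => void)[]>();

  async connect(_endpoint: string): Promise<void> {
    this.connected = true;
  }
  async disconnect(): Promise<void> {
    this.connected = false;
  }
  async publish(subject: string, payload: Uint8Array): Promise<void> {
    if (!this.connected) throw new Error("not connected");
    for (const h of this.handlers.get(subject) ?? []) h(payload);
  }
  subscribe(subject: string, handler: (payload: Uint8Array) => void): void {
    const list = this.handlers.get(subject) ?? [];
    list.push(handler);
    this.handlers.set(subject, list);
  }
}

const bus = new InMemoryMessagingService();
const received: string[] = [];
bus.subscribe("baseline.proof", (p) =>
  received.push(new TextDecoder().decode(p))
);
void bus.connect("nats://localhost:4222"); // endpoint is illustrative
void bus.publish("baseline.proof", new TextEncoder().encode("signed-commitment"));
```

Swapping the in-memory class for a Kafka- or RabbitMQ-backed one is exactly the kind of extension the interface is meant to allow.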
The main idea is that each of these packages acts as a standalone npm library, which could be used to set up a container, or rather to set up any of your organization's custom legacy systems, microservices, cloud services, or even, say, on-prem services, for that matter. Persistence is intended to indicate any form of data storage.
I won't go too much into the details, but in short, the idea is that persistence acts as a way of integrating with your local data systems. In the case of Baseline, and in the case of some of the BRIs, the baseline reference implementations, you'll notice that many a time this persistence is referring to DB technologies like MongoDB or PostgreSQL. In that manner, I think you could say that even the integrators for SAP or Excel, or any other connectors that are being developed, would find persistence a good place for them to reside, from a standards point of view, or from a self-contained-component point of view.
Finally, we have privacy, which is pretty much the main piece, the main engine behind hiding business-critical information or business-critical logic, and which is where we use zero-knowledge proofs. To start off, I'll just show you the interface quickly, and then I'll go into some more detail of how we use zero-knowledge proofs for securing messages or hiding private information.
As you've seen in general, the way the core of Baseline is structured is that it follows a standard of using TypeScript-based interfaces. One of the main folks who has championed this is Kyle, who spent a good amount of effort in our initial days to ensure that the stack has a standard interface pattern, so that each library or each interface can be thought of as a provider of various types of libraries or various types of connections.
So in api we had seen one type of provider: the interface that we had seen in api had various methods for RPC connections, or for interacting with the smart contracts, and so on. But when it comes to the implementation, there can be many ways in which that could be implemented or realized. So, for example, in the case of RPC, one could use a standard library like, say, ethers.js or web3 to connect, or, in some cases, some organizations have created their own CLI methods.
Each such implementation, or realization, of the base interface can be thought of as a provider, and to that extent you'll notice across the various packages this notion of having a provider as an implementer, an implementation of the interface definitions that are provided in the definition of each of these packages, which could be extended.
To that extent, even in the case of privacy, we have a zk circuit provider: we have an ability to get the compilation artifacts and the trusted setup artifacts, and also to generate the proofs themselves.
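A hypothetical sketch of that provider shape follows. The interface and method names mirror the capabilities just described (compilation artifacts, trusted-setup artifacts, proof generation) but are not the actual privacy package API, and the stub does no real cryptography; a real provider would delegate to a zk-SNARK toolchain such as ZoKrates.

```typescript
interface CircuitArtifacts {
  program: Uint8Array;      // compiled arithmetic circuit (e.g. R1CS)
  provingKey: Uint8Array;   // produced by the trusted setup
  verifyingKey: Uint8Array; // published so anyone can verify proofs
}

interface IZkCircuitProvider {
  compile(source: string): Uint8Array;
  setup(program: Uint8Array): CircuitArtifacts;
  generateProof(artifacts: CircuitArtifacts, witness: string[]): string;
}

// Stand-in provider so the provider pattern can be exercised end to end.
class StubCircuitProvider implements IZkCircuitProvider {
  compile(source: string): Uint8Array {
    return new TextEncoder().encode(`compiled(${source.length} chars)`);
  }
  setup(program: Uint8Array): CircuitArtifacts {
    return {
      program,
      provingKey: new Uint8Array(32),
      verifyingKey: new Uint8Array(32),
    };
  }
  generateProof(_artifacts: CircuitArtifacts, witness: string[]): string {
    return `proof-over-${witness.length}-inputs`;
  }
}

const provider: IZkCircuitProvider = new StubCircuitProvider();
const program = provider.compile("def main(private field doc) -> field: ...");
const artifacts = provider.setup(program);
const proof = provider.generateProof(artifacts, ["42"]);
console.log(proof); // "proof-over-1-inputs"
```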
So that being said, privacy, to that extent, has enablers, or abilities, to be able to interact with some existing zk-SNARK providers. When we are talking about zk-SNARK providers, in this case we are referring to the various libraries: we started off with using ZoKrates, and built on that in the weeks, months, and the year that followed.
Let me just go into the privacy management itself, as to how it works, and then I'll wrap up this discussion by giving an example of how a document that is exchanged between two parties can move through the Baseline Protocol stack, and how the messaging, the consistency of information, and privacy are maintained while using the mainnet. Before doing that, I'll just quickly drill down into the privacy management piece.
Here we are talking about two given participants, like Alice and Bob, on the Baseline Protocol stack, or leveraging the Baseline Protocol, assuming that both Alice and Bob are able to set up this structure locally, to essentially have the various microservices, all these services, on-prem or cloud or otherwise, once they are set up with all these enablers, or all these interfaces. Essentially, what that means is that you're able to make the various components under api, smart contracts, messaging, and so on available as services that you can interact with locally. So, assume that both Alice and Bob have that base stack set up.
The idea is that, in the case of zk-SNARKs, you have the ability to create proofs of any form of business logic, or of a business logic statement. This is done using what you call circuits, which are essentially R1CS, or rank-1 constraint system, circuits. Similar to boolean circuits with boolean gates, you have arithmetic circuits, and that's what these circuits are referred to as. You don't need to write these arithmetic circuits directly in that language; there are other DSLs, like, in the case of ZoKrates, their own custom DSL, by which you can write your own circuits.
The idea is that each circuit essentially represents a self-contained functional component of a proof, or rather a way to verify or validate a logical statement.
Similarly, we have a proof of knowledge of signature, which is where we ensure that the signatures of the buyer and supplier, or, for example, Alice and Bob, are verified. In doing so, the verification behind the scenes uses a library circuit which essentially proves whether or not points are on an EdDSA curve, because an EdDSA-based signature mechanism is used. This particular proof of knowledge of signature encompasses the verification of a signature by a certain buyer, or a supplier, or any of those parties, and it's important that those checks are satisfied.
And finally, we have the proof of membership of a hash in a merkle tree, where what we make sure is that, when we are inserting any leaf into a merkle tree, say, for example, a hash is determined and that hash needs to be set into the merkle tree, the root that gets changed every time a hash is inserted into the merkle tree is calculated correctly, and that the root itself is representative of the hash of the actual document.
You should be able to find the circuit itself in the baseline document.zok, which goes into a little more detail of how this set-membership proof is established; similarly, how the EdDSA-based signature mechanism is used to verify signatures; and hashing, where we're using standard SHA-based hashing to check, or to make an assertion, that the input document is the same as the hash that you compute within the circuit.
So, in the case of privacy management, when Alice intends to interact with Bob, they would take a circuit such as the one we just saw, compile it, set it up, and publish those artifacts, and using those artifacts you can deploy the verifier smart contract. The verifier smart contract then contains the functionality to verify: any proof that is being submitted can be verified under zero knowledge. What that means under zero knowledge is that the verifier uses a pairing curve library, called bn256, and that is what is used to verify whether or not a proof is legitimate, or rather whether a proof is a point on that pairing elliptic curve.
Anyone using that particular circuit, in this case, say, for example, Alice, using that same circuit to generate a proof: that proof can be used by Bob at any given point to verify whether or not the proof is legitimate. So it basically ensures that the proof is over a valid hash of a valid document that was shared between Alice and Bob.
It starts off with party A generating a commitment, which is essentially generating a hash and signing that particular hash. This signing is done using a special signature mechanism like EdDSA, and the one form of EdDSA we have been using, or at least the one that was originally published to the Baseline stack, was over the Baby Jubjub curve. But to that extent, any other form of signature can also be used.
We send that signed commitment over to party B, and then, when party B receives that signed commitment, they countersign that commitment and send the document, with the second signature, back to party A. At this point, party A can generate a proof, which essentially contains all those three checks that we spoke about: ensuring that the hash is correct; ensuring that both parties have signed off on that document, and verifying those particular signatures under this proof; and finally, inserting that particular hash as a leaf in a merkle tree, stored locally on both sides. So the proof itself contains the membership of a particular leaf in a merkle tree, and then that particular proof is verified by calling the shield contract, and this may be optional.
If a party does not need to verify the proof on chain, they can still verify the proof locally without making an interaction with the shield contract. But just for posterity, it is shown that anyone can verify this proof: after such a proof is generated and sent over to party B as well, either party A or party B can call the shield contract to ensure that the proof is valid.
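The hash, sign, countersign, check sequence above can be sketched as follows. This is a minimal illustration only: Node's built-in Ed25519 stands in for the circuit-friendly EdDSA-over-Baby-Jubjub signatures actually used, and the zero-knowledge proof generation and shield-contract call are elided entirely.

```typescript
import { createHash, generateKeyPairSync, sign, verify } from "node:crypto";

const alice = generateKeyPairSync("ed25519");
const bob = generateKeyPairSync("ed25519");

// 1. Party A hashes the business document to form the commitment...
const document = Buffer.from("purchase-order: 100 widgets");
const commitment = createHash("sha256").update(document).digest();

// 2. ...and signs the commitment before sending it to party B.
const aliceSig = sign(null, commitment, alice.privateKey);

// 3. Party B checks A's signature and countersigns the same commitment.
if (!verify(null, commitment, alice.publicKey, aliceSig)) {
  throw new Error("rejecting commitment: bad signature from A");
}
const bobSig = sign(null, commitment, bob.privateKey);

// 4. Party A now holds both signatures over the commitment; in Baseline
// this is where the proof is generated and the commitment inserted as a
// merkle-tree leaf before the (optional) shield-contract verification.
const bothValid =
  verify(null, commitment, alice.publicKey, aliceSig) &&
  verify(null, commitment, bob.publicKey, bobSig);
console.log(bothValid); // true
```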