From YouTube: The Baseline Protocol: September 2023 General Assembly
Description
The Baseline Protocol September 2023 General Assembly will take place on Wednesday, 9/20, at 12 PM EST, 6 PM CET and 10.30 PM IST!
The Baseline Protocol community members will cover updates from the Core Devs, Outreach Team, Technical Steering Committee, and other work groups!
We invite you to tune in, join the live audience, and share your own updates.
B
Yeah, it's been a few weeks since we've live streamed, just because of the summer holiday. Everyone in the community was heads down in their work, or on cool trips, or spending time with their families. So we took things a little slow, but we are all still very much here, working on the key priorities for Baseline, and excited to give some updates today. Hopefully we'll have some more frequent shows coming up for the rest of the year, with more guests and community members who are constantly joining the community or closely watching.

B
I'll give some updates on some future show guests that are in the queue during our Outreach updates. But with that, let me just lay out our agenda. Briefly, we'll go over Outreach updates, and then we'll talk about the simple reference implementation, also known as BRI-3, the Baseline Reference Implementation 3, and the progress going on in that large effort.
B
That is meant to be a fully open-source, simple implementation of a very hefty, complicated protocol. Then we will go through Interop Workgroup updates; interop is the key roadmap item for this year, which will also be extended into next year, and we'll get more info on that during the roadmap updates, as well as TSC updates and standards. Then we will have an open floor, so we'll see if any of our members on stream want to chat about what they're working on or learning about. That will cover our agenda today.
B
So I'll kick us off. The Outreach team is not meeting at this time on our normal cadence; we are all just chatting async on the key priorities. Alongside the key technical priorities, the Outreach team is focusing on ensuring that blogs are produced that match the relevant updates and progress for things like the simple reference implementation.
B
There are still monthly TSC blogs. The last one was about global supply chain coordination using the Baseline Protocol, published by Maron from SAP, who's on the TSC. So our committee members are still publishing a monthly blog based on their interests, different viewpoints, and ideas related to the Baseline Protocol, and we are also still attending conferences throughout the whole community.
B
A few of our Mesh R&D members went to Permissionless in Austin, which was a finance-heavy conference. It was small because of the bear market, but it was actually a really successful and well-liked conference among many of the attendees, because it was pretty intimate, and those who go to these conferences during a rough market in the crypto space, for example, are clearly building and very passionate about it. Those are really unique and exciting experiences.
B
We were talking a lot about Baseline, because the interop use case that will be talked about in a little while is related to finance. So there was a clear fit there for potential Web3 or non-Web3 companies that might be interested in use cases around synchronizing banking data, and we were able to make some contacts that may want to start watching the work going on in this community and leveraging it at some point.
B
Yeah, so aside from that, the Outreach team is really just staying in touch with the technical work going on, making sure that it's digestible to those who are following the work in blog articles and other formats, so we have materials that supplement the work as we get through it. With that, I will hand over to Mark, who has a few recent blogs to walk through that have relevance to Baseline.
C
All right, thanks, Sonal. Yeah, the Mesh team has been hard at work creating things that use the Baseline Protocol and standard, and we've released a couple of blogs that outline a couple of things we're doing that use the standard. The first one is about automating billing and settlement with the Baseline Protocol.
C
This is based on a real project that we're doing with a telecoms consortium that focuses on automating billing and settlement for the digital services that these telecom companies are providing, and it goes over the main components of the protocol.
C
The zero-knowledge components; how the Baseline Protocol uses zero trust principles; how it combines DIDs and VCs for an identity layer; and how it combines these three components of the protocol not only for privacy and security, but also for regulatory compliance when companies are executing these business processes in a multi-party context, of course. Another great part of this blog is how it goes into how billing and settlement relates to what we're calling the digital business trilemma: the constraints of performance, security, and decentralization within any given business process, and how the Baseline Protocol can actually allow all three of these things to occur at once, instead of at most two in any given situation. It also goes over the different industries that the Baseline Protocol can affect most greatly, specifically telecom, healthcare, and supply chain: all industries that have entrenched multi-party processes in how services are provided.
C
A lot of parties have to work together to get products and materials from point A to point B, so definitely check out that article about billing and settlement. Another article that we have is called The Future of ERP, which talks about how the Baseline Protocol can use the zkEVM to create a sort of multi-party ERP system, where companies can use a shared trusted execution environment: using the general-purpose zero-knowledge computation afforded by the zkEVM to automatically prove processes, using the blockchain and its self-sovereign identity capabilities, and basically taking all the difficult parts of ERP.
C
Like
point-to-point
Integrations,
you
know,
difficulties
all
these
difficulties
with
integrating
different
systems
and
improve
it.
Make
it
much
more
resilient
make
it
cheaper
increase
the
agility
that
companies
can
collaborate
with
each
other,
whether
you're
onboarding
different
companies
or
off-boarding
them
for
different
processes
and
making
it
all
private
and
secure.
With
all
the
cryptographic
capabilities
that
blockchain
offers
and
I
think
that's
a
pretty
good
overview
of
of
what
the
blogs
are
without
just
like,
spelling
all
that
out.
C
So I'll leave it to the audience to take a look at these articles and learn a little bit more about them. They've been shared through the Mesh LinkedIn, and you can also go to the Mesh website and go to the blog section to read them as well, so definitely check those out.
D
What is the actual business use case of Baseline, right? You reflect on what the high-value processes in supply chain are, and on billing and settlement, and you think about the common friction points that are all multi-party. Like, hey, did this come in on time? Did we invoice this correctly? That's actually not just a single-ERP problem; that's a multiple-ERP problem, where everyone has to have the same point of view.
D
Everyone's after the stereotypical single pane of glass, if you will, but it's challenging, because, like, hey, the vendor wants to know if you did the goods receipt and invoice receipt the way they expected, because they expect to be paid, right?
D
That's what we're trying to do: put Baseline into those core processes. What would you say, and I have this question, it's general, to anyone: how would you work with an enterprise that said, "That's a really cool idea, but it seems pretty risky to do"? What are your thoughts around the risk level, and how to get people over that?
C
I
think
it
I
think
it
just
starts
with
building
trust
with
your
clients,
and
the
approach
from
our
team
is
just
starting
small,
with
with
with
a
proof
of
concept
and
having
Enterprises
get
more
comfortable
with
the
concept
of
of
a
multi-party
Erp
system
or
really
just
like
any
sort
of
trusted
shared
execution
environment.
C
So it's all about giving enterprises the control, permissioning, and warm feelings that they need to actually feel comfortable using that stuff. That's how we're focusing on it.
D
Very cool. I think about use cases: one, building the trust, sometimes even going for an even lower-risk use case, even though Baseline is going to work the same way. But what I think a lot of companies think about when they're handling data is: what is the risk level of spillage of this data, whether it's billing data or supply chain data or sustainability data?
D
How
does
this
go
if,
like
this
gets
leaked
out,
especially
especially,
if
it's
not
a
blockchain
right,
so
you
know
one
of
the
unique
things
I
think
that's
you
know
you
have
to
kind
of
dance
it
on
the
fire
with
ZK.
Zk
is
really
good
for
protecting
extremely
private
information.
In
fact,
it's
even
purpose-built
for
that.
How?
How
do
you
motivate
someone
or
a
business
to
actually
do?
D
It
is
probably
the
biggest
challenge
right,
like
hey
you're,
going
to
put
this
in
a
ZK
proof,
it's
not
going
to
reveal
the
data,
but
it's
there
right.
It's
it's
else.
Your
data
is
actually
elsewhere.
It's
not
in
the
ZK
route.
It's
just
a
proof
right,
that's
always
a
I
feel
like
that
breaks.
People's
brain,
sometimes
I,
think
about
approved,
not
the
actual
data,
but
proof
right,
and
how
does
that?
How
does
that
you
know
evolve
into
some
computation.
C
Yeah,
it's
it's
true
and
a
really
great
way
to
actually
show
that
for
companies
is
showing
them
how
the
zkevm
works,
because
you
can
run
an
entire
function.
You
know
through
an
ethereum,
smart
contract
and
all
you
can
you
know
all
that's
deposited
on
the
L1
is
just
a
proof.
You
can
actually
show
them
that
you
can
show
them
hey
that
transaction,
that
you
run,
here's
the
proof
for
it
and
you
can't
actually
see
any
data.
It's
just
that
proof
and
that's
that's
a
pretty
powerful
thing
for
companies
to
see
and
start
to
think
about.
A
Okay, you guys are looking at GitHub, right? You can see? Not very good? Okay, cool. So the major update since the last time we discussed things via this platform is that Milestone 4 is more or less complete. It's pending three PRs, but all of the issues have been worked on.
A
These PRs now just need their review process to take place, and then maybe some changes that have been requested, but for the most part they just need to be merged into main, and then Milestone 4 will be complete, which is kind of a major milestone that's been underway for quite some time. There have been a lot of issues related to this milestone, and a lot of work has been put into it, and we can just take a look at what's left here.
A
You know: submitting transactions, verifying the results, checking that, if things don't go down the happy path, the system will still reflect that correctly and understand what's happening, and then all of the necessary workflows, DTOs, agents, things that will be required to transact in order to do all of those functions. So that's kind of the last major piece: that end-to-end work. There's also, like I said, that secondary PR.
A
So once Sri's, no, this one's first, okay: so that one can be merged, and then Sri's can be reviewed, and once that's been reviewed, and any comments are left and changes are made, that can also be signed off on, and then we close out that issue. With the closing of that issue, as well as the end-to-end one, Milestone 5 will begin.
A
So
if
Milestone
four
was
the
beta
Milestone
5
is
meant
to
be
version,
one.
It's
the
last
Milestone
of
the
full
Grant
and
is
meant
to
be
the
complete
release.
You
know
the
starting
point
of
Bri
3
as
it
stands,
so
this
Milestone
is
meant
to
support
General
business
case
with
all
the
features
being
part
of
the
grant.
Everything
within
scope
here
should
be
included
in
this
Milestone.
A
Now
with
that
being
done
in
order
to
kind
of
get
to
that
state
of
completing
Milestone
5,
we
are
using
here
this
Milestones
tab
to
track
Milestones,
so
Milestone
four
add
17
in
total
issues
that
were
given
the
flag,
I
believe
it.
We
began
using
the
flag
kind
of
late
into
this
milestone,
but
Milestone
five
will
be
using
this
flag.
A
Sure
so
we
can
go
kind
of
back
in
time
a
little
bit
the
the
team
meets
every
Thursday.
So
tomorrow's
meeting
is
where
we're
gonna
flesh
out
a
lot
of
Milestone
five
issues
and
add
details
and
architectural
decision
making
to
ensure
that
it
all
kind
of
all
Fallen
lines.
So
we
can
go
through
Milestone
four
and
look
at
things
that
have
already
been
completed.
A
A lot of it is just following our process: first creating a document of some kind, or a discussion somewhere, in which the overall framework of the issue being completed is presented to the team, where we can make sure that it follows everything that's been done previously, so that all the code is written in a standardized way, the architecture can work with itself, and it uses the right libraries and everything: the best practices and standards.
A
I believe I just have this open twice, but, for example, you create these kinds of subtasks, and then somebody can go through, pick up that subtask, and propose how to go about it. Then, without going into all the code, they would be able to implement it in the way that they deem appropriate, along the lines of the documentation that they had already pitched to the team.
A
And
then
the
final
step
of
that
process
is
a
PR
where
there
can
be
some
back
and
forth
if
anything
isn't
quite
aligned
with
how
the
team
had
expected
it
to
be,
and
then
barring
any
comments
for
revision
or
any
update
changes,
it
would
then
get
pushed
to
Maine.
A
So
in
this
case
to
store
content,
addressable
hash,
this
PR
by
SRI
it
implements
some
snark.js,
some
sarcom
circuits
here
service
for
it
an
interface
here.
You
know
several
several
things
that
are
all
kind
of
contained
within
this
one
sub
task
that
can
be
outlined
within
you
know
some
greater
overall
workflow,
that's
trying
to
be
accomplished.
A
For walking us through? It's all good, yeah, of course. It's just a lot of code being pushed through at this point. We could go through some of these PRs and actually go through the code, but there are a lot of PRs being opened and closed, and I think, for the purposes of a GA, the bigger picture is that these subtasks are being reviewed as a team, and everything is coming together and being knocked down.
A
If anybody wants to join in and be a spectator, even, or be involved in how Milestone 5 is shaped, as far as how all of these issues are opened, filled out, and created, so that we have these subtasks for everybody to then pick up and go through: that meeting will take place tomorrow, so, yeah, again, open invite.
A
And anything else, Sonal, or shall we move into interop?
E
Okay, so hello, everybody. Hello, hello, viewer. It's me, Keith. I'm here in the OASIS Baseline repository. If you've been listening, you probably already know a lot about Baseline, but in case you are new: the work that Sri has been talking about, BRI-3, the whole project, can be found in, of course, examples. Within examples, we have the bri-3 folder that contains the whole project.
E
Since
the
last
time
we
went
into
a
general
assembly.
Now
you
have
to
forgive
me:
it's
been
a
while,
so
I
may
not
start
off
in
the
at
the
perfect
timeline
that
matches
the
last
time
we
spoke.
Hopefully,
I'll
have
some
overlap,
so
there's
no
gaps.
E
If
you're
not
familiar
with
BPI
or
the
terms,
you
know
work,
group
and
workflow,
we
do
suggest
you
go
back
and
view
some
of
the
the
previous
videos
I'll
try
to
account
for
that
as
I'm
speaking.
But
as
has
been
mentioned
numerous
times,
the
standard
is
complex,
so
we
suggest
you
go
back
and
either
read
to
the
standard
or
watch
our
previous
videos.
So
since
the
beginning
of
Milestone,
four,
let's
see
where's
a
good
place
to
start
I
think
I'll
start
probably.
E
Merkel
tree
crud,
happy.
Okay,
we
have
this
Implement
ZK
circuit
using
snark
JS.
It
might
have
been
a
little
bit
before
here.
Maybe
Merkle
tree
correct,
so
I'll
start
off
with
that,
because
of
course
this
is
a
up
here.
I'm
familiar
with
it's
one
that
I
made
and
as
you'll
notice
mine
have
the
the
most
comments
when
compared
to
the
PRS
around
it
sometimes
triple
or
quadruple
I
like
to
think
because
that's
because
that's
you
know,
that's
a
testament
to
how
good
they
are,
but
in
fact
it
may
be
the
opposite
right.
E
So any Merkle tree CRUD PR, as we look through the files, is generally going to be the underlying infrastructure within the Nest.js project: possibly including some API endpoints, if those endpoints need to be hit externally (and if not, just internally), plus the agents, methods, functions, and possibly services needed to do what we call CRUD for a Merkle tree. That's create, read, update, delete: you need to be able to create one, and you need to be able to pull it from the database and read it.
E
You
need
to
be
able
to
write
over
it,
update
certain
portions
of
it
and
possibly
delete
it,
so
this
isn't
to
say
that
all
of
these
functionalities
are
implemented
generally.
This
is
open
to
the
the
developer.
In
this
case,
it
was
me
to
decide
you
know
which
of
these
functionalities
do
we
need
which
of
them
need
to
be
externally
facing
at
this
point
in
time
and
to
put
those
into
my
PR
so
that
they
can
be
discussed
63
times
with
other
people,
so
most
of
these
will
start
off.
E
If
you
look
at
the
pr
you'll
have
kind
of
the
change
in
the
package.
Json
is
just
tracking
the
dependencies
that
are
being
used
in
this
case
the
addition
of
Merkle
tree
JS
and
the
types
for
cryptojs
which
comes
stock
in.
D
E
Here we have a change to the Prisma schema; we track our relational database schemas using Prisma. When you update the Prisma schema, as you see I did here, I added a BPI Merkle tree model. This is most likely adding a full table to the database that's tracking this new information; under the hood, this is handled by Prisma. So if you want to see how it's automatically done, you can go to the migration file that was created when I migrated the new schema, and you can see what it did.
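A Prisma model of the kind described here might look like the following (the field names are illustrative, not the actual BRI-3 schema); running Prisma's migrate step against a change like this generates the migration file being pointed to:

```prisma
model BpiMerkleTree {
  id          String   @id @default(uuid())
  // The tree, serialized to a pure string form for storage.
  tree        String
  hashAlgName String
  createdAt   DateTime @default(now())
}
```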
E
After
setting
up
kind
of
the
underlying
persistent
tracking
of
Merkle
trees,
database
I
went
through
and
created
the
infrastructure
in
the
nest.js
project
that
allows
these
trees
to
be
actually
created
and
then
saved
stored
on
that
team
in
the
database
and
then
possibly,
let's
see
updated
and
so
on
and
so
forth.
I'm
sure
I
didn't
get
everything
in
one
poll
press.
E
So,
let's
see,
what's
here,
here's
this
just
the
creation
of
the
or
this
is
the
addition
of
the
Merkel
module
inside
the
kind
of
top
level
app
module,
because
nest.js
is
using
this
inversion
of
control
and
dependency
objection.
So
it
needs
to
be
kind
of
aware
of
their
modules
that
it's
going
to
be
injecting
we
have
the
Merkle
tree
agent.
E
So
for
us
we
use
agents
somewhat,
as
kind
of
like
you'd
expect
a
service
layer.
The
Asian
contains
the
kind
of
in
process
or
say
during
execution
functionality
that
be
contained
within
the
service
itself,
so
things
like
create
a
numerical
tree.
E
So
if
I
send
in
the
data
that
I
want
to
create
a
numerical
tree,
well,
eventually
that'll
end
up
in
the
agent
calling
this
functionality
and
we
can
trace
it
through
and
see.
What
is
it
doing?
It's
creating
an
ID,
it's
accepting
a
hash
algorithm
which
is
later
replaced,
and
then
it's
calling
this
form
Merkle
tree,
which
is
within
the
Merkle
tree
service
right.
So
there's
a
separation
here
the
agent
is
more
just
a
collection
of
the
overall
actions
you
might
want
to
take.
Oh
I
want
to
create
a
miracle
tree.
E
I want to store an updated Merkle tree; I want to delete a Merkle tree. We have these underlying DTOs, as well as just normal classes, that define the shape of these Merkle tree objects at some point in time during the flow: as it enters the service, as it's stored, as it comes out of storage. Especially with Merkle trees, the way it's being stored, the tree is being marshaled and unmarshalled to and from its object form during execution and a pure string form.
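The marshal/unmarshal step can be illustrated like this: only a plain string goes to storage, and the tree object is rebuilt on the way out. This is a simplified sketch, not the project's actual serialization format:

```typescript
import { createHash } from "node:crypto";

const sha256 = (s: string): string => createHash("sha256").update(s).digest("hex");

// During-execution (object) form of the tree.
class MerkleTree {
  constructor(public leaves: string[]) {}
  root(): string {
    let level = this.leaves.map(sha256);
    while (level.length > 1) {
      const next: string[] = [];
      for (let i = 0; i < level.length; i += 2)
        next.push(i + 1 < level.length ? sha256(level[i] + level[i + 1]) : level[i]);
      level = next;
    }
    return level[0] ?? sha256("");
  }
}

// Marshal: object form -> pure string form for the database column.
const marshal = (t: MerkleTree): string => JSON.stringify(t.leaves);

// Unmarshal: string form -> object form, re-deriving everything else.
const unmarshal = (s: string): MerkleTree => new MerkleTree(JSON.parse(s));
```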
E
What should it look like if I'm creating, retrieving, updating, and so on and so forth? The error messages that might happen, in this case if I'm trying to retrieve one I can't find; the controller for the Merkle tree. In this case, these were externally facing APIs just for development purposes, so that anybody, after this was merged, who decided to work Merkle trees into their database and into their flow could easily work these in.
E
Although
these,
of
course
is
not
in
Maine
as
it
stands,
because
there's
no
need
to
kind
of
externally
interface
with
the
Merkle
trees
directly,
and
then
we
have
these
these
commands
here.
So
if
you've
watched
any
of
our
videos,
you're,
probably
pretty
familiar
or
if
you
worked
with
nas
JS
you're
familiar
with
this
structure
of
Demands
and
command
handlers
due
to
the
inversion
of
control,
we
just
make
a
command
Handler
available
to
the
controller
and
it's
executed
when
that
controller
API
is
hit
or
when
some
other
endpoint
that's
executing
the
command
Android
sent.
E
So in this case, we're going to create a Merkle tree command. This command handler is injecting into the constructor what we need: a Merkle tree agent and a Merkle tree storage agent, and it's going to combine their methods, in a top-to-bottom order, into the flow for creating a Merkle tree. Well, first, we use the normal agent to actually create the tree during execution, and then we take that tree.
E
That tree that's been created, that's in this kind of ephemeral memory, and we put that into persistent storage, and then we just return the ID of the thing that's been stored. So that's it.
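Stripped of the Nest.js CQRS decorators, the top-to-bottom flow traced here (create the tree in memory via one agent, persist it via the storage agent, return the ID) looks roughly like this; the names are placeholders, not the actual BRI-3 classes:

```typescript
import { randomUUID } from "node:crypto";

class CreateMerkleTreeCommand {
  constructor(public readonly leaves: string[], public readonly hashAlg: string) {}
}

// Stand-in for the during-execution agent: builds the tree in ephemeral memory.
class MerkleTreeAgent {
  createTree(leaves: string[], hashAlg: string) {
    return { id: randomUUID(), leaves, hashAlg };
  }
}

// Stand-in for the storage agent: puts the tree into persistent storage.
class MerkleTreeStorageAgent {
  private db = new Map<string, object>();
  store(tree: { id: string }): string {
    this.db.set(tree.id, tree);
    return tree.id;
  }
}

// In Nest.js this would be a command-handler class, with both agents
// injected through the constructor by the IoC container.
class CreateMerkleTreeCommandHandler {
  constructor(
    private agent: MerkleTreeAgent,
    private storageAgent: MerkleTreeStorageAgent,
  ) {}

  execute(cmd: CreateMerkleTreeCommand): string {
    const tree = this.agent.createTree(cmd.leaves, cmd.hashAlg); // 1. create in memory
    return this.storageAgent.store(tree);                        // 2. persist, return the ID
  }
}
```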
Since we're probably a little low on time, I'll just quickly glance over some of the other things we did after Merkle tree CRUD. We have Shri beginning her first of five PRs implementing the ZK SNARK circuit with snark.js, which I guess kind of fits here.
E
We have things like, okay, here it's similar: you'll have the circuit agents. What are the things we can do? We can create a proof. What does the structure of the proof look like? Right here is the class defining the structure of the proof. What does the witness look like? Same thing. An interface; a snark.js circuit service for actually doing these actions during execution time: creating the witness, creating the proofs, and so on and so forth.
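As a rough sketch of the shapes involved: a Groth16 proof, as serialized by snark.js, is a small JSON object, and the circuit service wraps witness and proof generation behind an interface. The proof field layout below follows snark.js's Groth16 output; the service interface and the fake implementation are illustrative, not the BRI-3 code:

```typescript
// Shape of a Groth16 proof as serialized by snark.js (field elements as strings).
interface Groth16Proof {
  pi_a: string[];
  pi_b: string[][];
  pi_c: string[];
  protocol: string; // "groth16"
  curve: string;    // e.g. "bn128"
}

// The witness: the circuit's inputs, assigned to named signals.
interface Witness {
  [signal: string]: string | string[];
}

// Illustrative service interface mirroring "create the witness, create the proof."
interface CircuitService {
  createWitness(inputs: Record<string, unknown>): Witness;
  createProof(witness: Witness): { proof: Groth16Proof; publicSignals: string[] };
}

// Minimal fake implementation, useful for wiring tests before real circuits exist.
const fakeService: CircuitService = {
  createWitness: (inputs) =>
    Object.fromEntries(Object.entries(inputs).map(([k, v]) => [k, String(v)])),
  createProof: () => ({
    proof: { pi_a: [], pi_b: [], pi_c: [], protocol: "groth16", curve: "bn128" },
    publicSignals: [],
  }),
};
```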
E
This first one, of course, is kind of very high level. We try to keep our PRs very broken down, and so, for example, this first one is: hey, let's lay the groundwork. What are the things we're going to need on an infrastructure level? We're going to need the controller, the agent, the command, and so on and so forth; let's fill them in with detail at a later time. If we scroll up, you know, we see that there's going to be, oh, a part two here, with some more details.
E
Some documentation added for understanding snark.js; some changes to the packages, getting ready; and we have this powers-of-tau shell file here, added for the powers of tau ceremony. So again, very low level: what do we need to get ready to make these circuits happen and to generate these proofs? Very simple stuff.
E
Do
I
have
more
time
to
keep
going
or
should
I
pass
it
on?
In
my
my
long
time,.
B
Let's wrap there, just to get through a few more updates, but we would love to have more and more, yeah.
E
We can talk about it more on future shows. All I was going to say, to wrap up, is that if you take some time to look through here, you can see that the bulk of Milestone 4 is, again, as I said, implementing the Merkle tree services within the project, being able to manipulate those Merkle trees to track state trees and history trees, as well as the ZK circuits, the infrastructure surrounding that, and the automatic execution.
E
Let's say a work step happens: the circuit is run, a proof is generated, created, and stored in the proper place, and added to the Merkle tree's witness leaves, and so on and so forth.
So this is really the guts of the implementation here. The previous milestones were all of the underlying infrastructure: what work groups look like, how do I add people, here are workflows.
E
Here are accounts; here's all the kind of normal, almost Web 2 stuff you'd expect, in a way; and then here's the addition of the special, more cryptographic modules that add that flavor of automation, privacy, accuracy, and correctness that was added in Milestone 4. So that's kind of the overall picture of where we're at, okay, and now I'll give it up.
B
Awesome, yeah, thank you both for walking us through all of that. I know it's been a while, and so much incredible progress has happened since, and we'll have no shortage of future Baseline shows to get further and further into the weeds. All right, y'all, can you give us the lowdown on the interop work group?
A
Yes. Very quickly first, though, to go with what Keith started off mentioning: if you're kind of newer to the community, or don't have a super technical understanding of the Baseline Protocol itself and all of the different terminology and everything, Keith gave a great presentation a year and change ago in Amsterdam called The Basics, which is fully uploaded to YouTube, and there's also a blog written on that, so it's sort of in text form. Both of those are excellent resources.
A
Okay, great, thank you. Oh, I just shook my Zoom and it minimized all of my windows; give me just a second, okay. Getting into it: I will share my screen again. Within the Baseline repo, under examples, the same spot where Keith was showing you can find BRI-1, you can also find the BPI interop folder, and this folder has a high-level readme here.
A
Yeah, I need to do a PR; for anyone looking at this on the call, add half an hour to any of these times for the new updated time. And then, of course, there is, like for all of our work groups, an open-source drive with all of the notes, which are updated after each work group session, so that anybody who cannot attend can update themselves asynchronously. But most importantly here, in this readme, is the use case itself.
A
So what we landed on is to validate banking data between an enterprise and a bank without exchanging the banking data itself. The purpose of this is twofold: it's to reduce the risk on the business, because, of course, zero knowledge and zero trust allow you to transact using sensitive PII without the risk of any of it getting leaked, and the second motivation an enterprise would have to implement this is to reduce cost.
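As a loose illustration of the idea (not the actual interop protocol, which uses zero-knowledge proofs rather than bare hashes): each party can commit to its copy of the banking record, and a verifier compares the commitments without ever seeing the account details themselves.

```typescript
import { createHash } from "node:crypto";

// Commit to a banking record with a salted SHA-256 digest. In the real BPI
// flow this role is played by a zero-knowledge proof, which additionally
// hides the data from the verifier even if the salt were known.
function commit(record: object, salt: string): string {
  return createHash("sha256")
    .update(salt + JSON.stringify(record))
    .digest("hex");
}

// Verifier-side check: the two parties' commitments match only if they hold
// identical records, but the verifier never sees the records themselves.
const recordsAgree = (a: string, b: string): boolean => a === b;
```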
A
We have our participants: the customer of the corp, the Acme Corp itself, the bank of that customer, and an auditor who will verify that everything has been correctly transacted. Getting into the more technical side of what it would actually look like for two Baseline Protocol implementations to work together on either side of the fence here, in order to make that happen, again under zero knowledge and within the zero trust architecture, which is the main appeal of why you would want to do this in the first place:
A
You have two work groups: work group one, which is in BPI A, and work group two, which is in BPI B, and that is the crux of interop, interoperability: the demonstration here that two completely separate BPIs, Baseline Protocol implementations, are able to work in tandem to complete this, because thus far all BPIs, like the SRI and BRI-3, all of these implementations, work self-contained within themselves and run tests on themselves.
D
If I might interrupt: it's really like an anti-phishing, anti-social-engineering type of integration, right? Because companies receive requests all the time, either by phone or through email, that say: hey, I'm Jim from accounts receivable at vendor XYZ, please change the banking data from this to that, and those attacks are getting more and more sophisticated, both internal and external, especially with AI. So how can you trust the data that you received?
D
You
know
ZK
and
the
zero
trust
principle
is
really
great,
because
if
you
had
any
doubts
whatsoever
of
the
authenticity
of
you
know,
however,
the
state
came
in.
You
could
verify
and
say.
Yes,
this
result
to
two,
this
exact
data,
this
exact
person
that
that
I'm
transacting
with.
D
Right,
like
things,
can
get
very
convincing
like
they
like
people
who
are
thinking
up
on
social
engineering
attempts
even
before
AI
right,
and
if
you
look,
if
you
work
at
a
large
company,
you
you're,
probably
very
familiar
with
you-
know
different
anti-fishing,
anti-social
engineering,
trainings
that
are
trying
to
teach
people
to
to
be
very
Vigilant.
But
what,
if
you
could
add,
like
extra
layer
of
resilience,
but
still
very
privacy?
Preserving
but
you're,
not
giving
away
like
hey
yeah.
The
the
bank
number
is
one
two,
three
four
five
right.
A
Another even more basic, surface-level example is the man who would send fake invoices to, I think, Google, and who realized that if an invoice was below some threshold, I think ten thousand dollars, they wouldn't even review it; they would just approve it, because it wasn't worth the finance department's time. So he ended up getting, you know, millions of dollars before anybody investigated and realized that they were totally bogus invoices that should never have been paid, so yeah.
D
And it's not even only external; it's even internal, in a way. Unfortunately, people can sometimes scam their own company, to kind of gamify the approval limits. I worked at a company where we caught one, one time, when someone had figured out how to bypass the whole PO approval system and just get different invoices approved all the time.
D
We had to shut it down. But certainly, yeah, you're getting the authenticity of a request down, because the reality is that you have to make this tech very adaptable to very everyday business. There's a lot of automation, a lot of cloud stuff, AI things, but people are still going to initiate business manually, through email or a phone call, and how do you handle that?
A
What's left here, really just quickly, is the workflow of how that'll go through, and this is kind of further into the weeds. I won't get too caught up here, I know we're short on time, but for anybody who wants to do their own homework:
A
This
outlines
how
the
two
different
work
groups
will
interact
using
the
different
work
steps
in
this
workflow
here
so
here
it
is
all
in
plain
text
and
then
here's
kind
of
a
nice
visual
representation
of
all
of
these
different
transactions
to
send
between
the
different
work
groups
and
to
verify
the
data
and
bring
back
proofs
and
validate
all
these
proofs
and
ensure
that
kind
of
the
the
pitches
sound
and
I
think
that's
where
I'll
leave
it
off
thanks
to
time
so,
I'm
handing
this
over
to
so
or
no
sorry
Andreas
for
standards
work.
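The proof exchange described in that workflow can be sketched roughly like this. This is a hypothetical illustration, not the actual Baseline Protocol API: the function names are invented, and a plain hash commitment stands in for a real zero-knowledge proof.

```python
import hashlib

# Hypothetical sketch: two workgroups run the same workstep over shared
# data, exchange the resulting "proof" (here just a hash commitment as a
# stand-in for a zero-knowledge proof), and each side verifies it
# independently against its own copy of the data.

def execute_workstep(workstep_id: str, payload: str) -> str:
    """Run a workstep and return a commitment over its inputs."""
    return hashlib.sha256(f"{workstep_id}:{payload}".encode()).hexdigest()

def verify_workstep(workstep_id: str, payload: str, proof: str) -> bool:
    """Counterparty re-derives the commitment and checks that it matches."""
    return execute_workstep(workstep_id, payload) == proof

# Workgroup A executes the workstep and sends the proof to workgroup B.
proof = execute_workstep("invoice-approval", "invoice #42, amount 9500")

# Workgroup B verifies against its own copy of the data.
assert verify_workstep("invoice-approval", "invoice #42, amount 9500", proof)

# Any divergence in the underlying data makes verification fail.
assert not verify_workstep("invoice-approval", "invoice #42, amount 9900", proof)
```

The point of the real protocol is the same as in this toy: both parties can confirm they agree on the workstep's data without one party simply trusting the other's record.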
F
Last, but certainly not least: we have eight open PRs that the editors still need to review, which, for various reasons, have been stacking up for the last four, five, six months. We'll need to have a meeting to get through that backlog, because more PRs are coming as we're adding more testability statements.
F
There are so many PRs because testability statements are being added for individual subsections rather than a whole section at a time, since a whole section could, as in one case, mean a hundred testability statements in one PR, which is hard to review.
F
So that's ongoing. Section 5.5 is still outstanding, as well as sections three and four. Section three is currently being worked on and there's a PR coming up, so hopefully, if we get all the PRs merged, we should be ready before the end of the year to go to the PGB and request an official draft release. And that's why the interop work is super important: we need to have these...
F
You know, we need to have two reference implementations showing that they can meet all the requirements, and interop is one of the requirements. So you need at least two implementations to become a full standard.
F
We need the reference implementations, so it becomes a bit of a circular argument, but, most importantly, the interop work is absolutely critical in order to eventually move the standard from draft to final, which we really want to try to get done next year as quickly as possible, so that we can put a pin in it.
F
It's a significant body of work, we have been at it for three years, and we need to come to a conclusion. So that's it.
B
Awesome, thank you. I'm going to breeze through just a couple of final updates from the TSC side. Our Gitcoin project that was approved about a month ago had its funding round close 22 days ago. We raised 3,077 from 2,084 contributors, which is very exciting, because the Gitcoin grants program uses quadratic funding, so the more contributors, the better our multiplier to get higher allocations from the fund that Gitcoin allocates to projects in the ecosystem, giving back to projects that promote open public goods and have a community following. So, super exciting; we'll see what comes from that soon.
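Quadratic funding is why contributor count matters more than contribution size: a project's match is proportional to the square of the sum of the square roots of its individual contributions, so many small donations beat one large donation of the same total. A minimal sketch, with made-up numbers (not the actual round's figures or Gitcoin's exact implementation):

```python
import math

def qf_match(contributions, matching_pool, all_projects):
    """Toy quadratic-funding match: each project's share of the matching
    pool is proportional to (sum of sqrt(each contribution))^2."""
    def weight(cs):
        return sum(math.sqrt(c) for c in cs) ** 2

    total_weight = sum(weight(p) for p in all_projects)
    return matching_pool * weight(contributions) / total_weight

# Two projects raise the same total (100), from very different crowds:
many_small = [1.0] * 100   # 100 donors giving 1 each
one_large = [100.0]        # a single donor giving 100
pool = 1000.0
projects = [many_small, one_large]

print(qf_match(many_small, pool, projects))  # takes ~99% of the pool
print(qf_match(one_large, pool, projects))   # takes ~1%
```

Here weight(many_small) = (100 · √1)² = 10,000 versus weight(one_large) = (√100)² = 100, so the broad-based project captures almost all of the match despite identical totals.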
The technical steering committee is also working on another grant application for funding for the interop work.
B
We'll give an update on that in the future. Very soon, we will also start sharing information for the upcoming TSC elections for the 2023-24 technical steering committee. The call for nominations will open soon, so if you yourself are interested in being on the committee, or have a person in mind who you think has been, or would be, a great contributor, please start thinking that through, and we will share the info on how to submit. And in the meantime, if you haven't submitted a PR in the last six months...
B
Please do so, because that will give you voter eligibility for the TSC election. All the details about the upcoming elections will be made very clear in the Baseline Slack and across our socials over the next few weeks, so there will be no room for any missed information there. We're going to start that process on Monday, so keep an eye out; we need as much participation as we can get for a very impactful following year. I think that is all we have for the general assembly, just because of time.
B
I know there was lots to cover, and also, of course, conversations along the way, because it's been a few weeks since we've had a Baseline show. So we will see everyone next week, and we can pick up where we left off and have room for open floors as well. We'd love to hear what you, Ryan, and others in the community are up to.