From YouTube: The Baseline Protocol: May 2023 General Assembly
Description
The Baseline Protocol May 2023 General Assembly will take place on Wednesday, 5/24, at 12 PM EST, 6 PM CET, and 10:30 PM IST!
The Baseline Protocol community members will cover updates from the Core Devs, Outreach Team, Technical Steering Committee, and other work groups!
We invite you to tune in, join the live audience, and share your own updates.
B: So, as usual, we'll go through our agenda, where all of our work groups give an update on what's going on. We'll leave some room for discussion in between, and then at the end we'll have an open floor where anyone can feel free to share what they're up to, recent deep dives they've been going on, or things like that. So with that, I'll start with the Outreach updates. Mark Cattle, our Outreach chair, is not on stream yet, but I will take it over.
B: Our Outreach team has mainly been focusing on the Outreach roadmap items and writing the RFPs behind them. The main work that needs to get done in the Outreach world is to support the technical work, making sure it's understandable to technical and non-technical people, as well as having nicely packaged content around the capabilities and demonstrations of a Baseline Protocol implementation.
B: That will all be the Outreach team's focus this year. We have an RFP that explains in detail the objectives and requirements for hosting and maintaining that work, and there is also an opportunity for any individual or group to join the effort this year for some grant funds. As I mentioned, we'll get into that a little further in the stream when we talk about the roadmap and the RFPs themselves.
B: Since our last call, the Consensus by CoinDesk event took place at the end of April. We had a few of our teammates there in person, including some who are not on stream, but Anant, Mark, Keith, Yoav, and myself were all there, I think. That's it from who's on the call right now. Any feedback on how Consensus by CoinDesk was, the talks, the side events, or anything like that?
A: It was my first Ethereum, or more broadly crypto/blockchain, focused event of that style, with a lot of booths. It was a more traditional-style event, with each sponsoring company giving their pitch and information about what they're up to, as opposed to the more Ethereum-centered events, which seem to be a lot more technical.
A: At those, the central focus is the talks, as opposed to learning more about the different companies. But I kind of appreciated this format, because the ecosystem grows so fast and there are so many companies building very useful tooling for developers, and it can be hard to scope out all of these different things on your own, just trying to find articles, Googling, and researching. So it was nice to have one place.
A: You can just go booth to booth, person to person, and see what they're up to and how it could potentially help you in your own work back home. It's different; it's not that one is better than the other, but it's nice to be able to do that and see what everyone's up to.
B: Yeah, and I think, after going to a few conferences of that style, Consensus itself was a little more organized and quieter, so you could actually have conversations with the people you're talking to. It was pretty DeFi-heavy, I noticed: lots of exchanges, accounting, and other types of projects going on, as well as a really big enterprisey presence, which is of course one of our lanes for the Baseline community. KPMG hosted a happy hour, which was one of their first big events at a Web3 conference, and it kind of matched the tech scene and enterprise scene of Austin itself.
B: So it was a cool conference. I think all of the talks are recorded and posted online: a lot of stuff about regulation and things like that, and of course Baseline-adjacent topics like ZK and enterprise use as well. So feel free to sift through those, and let us know if you learned anything from those videos. Awesome. I don't know what the next large conference is that will have Baseliners attending. Is anyone here going to a conference in the near future?
B: Maybe not; maybe it's just the busy summer time starting. But as always, our Outreach team will be keeping up with our events tracker and targeting the places where Baseline members should have a presence. All right, and with that I will hand over to you, Mark Rimsa, for one of the Outreach sub-topics, the research group, if you want to give us an update on what's going on there.
C: Sure. So one sub-topic of the research work group is the fact that the BLIP-1 and International Supply Chain efforts have sunsetted, or maybe a better word is paused for now. Recently I submitted a PR to the Baseline repository to add all the resources that have come out of this working group, so that they're easier to access for the broader Baseline community. Anybody who's accessing the repo and looking through the examples can take a look at any of the resources the group has produced and maybe get a better understanding of the Baseline Protocol.
C: So one thing I can do right now is walk through some of those resources, just to remind everybody of what the work group has done, and if they want to check it out later, they can, once the PR gets through or once we find the right place to put it out there for the community.
C: All right, I'll go through the resources in the order that I've listed them in the PR, in terms of the history of the working group. The first thing to go through is the supply chain choreography.
C: All the work done in these two working groups stems from the lens of an international supply chain, which is a multi-party workflow, a multi-party use case. That's a good way to represent how the Baseline Protocol works, why it's effective, and how it can bring benefits to businesses, to any enterprise that wants a trustless flow of automated business processes. This supply chain choreography gives a good overview of how different documents are proposed and agreed upon by multiple parties throughout an entire workflow.
C: For example, you start with an MSA: it's proposed by the supplier and agreed to by the buyer, like a state proposal and then an acceptance. From there, that would be signaled to all the parties and you could move on: the buyer could create an order, for instance. So that's a very simple explanation.
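The propose-then-accept step just described can be sketched as a tiny state machine. This is an illustrative model only, not the work group's actual schema; the names `StateProposal`, `isAccepted`, and `canAdvance` are assumptions made for the example.

```typescript
// Illustrative sketch of a two-party propose/accept step, as in the MSA
// example: the supplier proposes a state, the buyer accepts it, and only
// then can the workflow advance (e.g. the buyer creates an order).
type Party = "supplier" | "buyer";

interface StateProposal {
  document: string;            // e.g. "MSA", "Order"
  proposedBy: Party;
  acceptedBy: Party[];         // parties that have signed off so far
  requiredAcceptors: Party[];  // parties whose acceptance is required
}

// A step is finalized once every required party has accepted it.
function isAccepted(p: StateProposal): boolean {
  return p.requiredAcceptors.every((party) => p.acceptedBy.includes(party));
}

// The next document may only be proposed on top of an accepted state.
function canAdvance(previous: StateProposal): boolean {
  return isAccepted(previous);
}

const msa: StateProposal = {
  document: "MSA",
  proposedBy: "supplier",
  acceptedBy: ["buyer"],
  requiredAcceptors: ["buyer"],
};
```

In the real choreography the acceptance would be a signed, verifiable state change rather than a boolean; the sketch only captures the gating logic.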
C
Here,
but
this
is
just
a
good
bird's
eye
view
of
what
a
multi-party
workflow
looks
like
and
a
good
way
to
think
about
how
you
could
break
down
Baseline
workflows,
if,
if
you
or
a
different
company,
are
thinking
about
baselining
your
own
process.
C: Next up is the sequence diagram, which Maron in the work group created. This gives an idea of all the different parties involved in this international supply chain: who's creating these documents, who's signing them, and whether they need to be anchored to a CCSM.
C: Some of the materials in this work group reference decisions on when to use a CCSM or not. Later, as we get through some of the materials, everything uses a CCSM as a verifiable timestamp any time a state change is made, but the decisions about when to use the CCSM or not, or when to use a zero-knowledge proof, are all included in a lot of the materials produced here. Next up is the Story Journey, where we have a general user story for the entire flow of an international supply chain.
C: We have the personas listed, and then all the documents included in this international supply chain, with comments and considerations about each document: who it's initiated by, who the approvals are made by, what some of the properties of these documents are, whether it needs ZKP validation, and whether it needs anchoring. Also listed here, which is important to note, are references to other documents using nested verifiable credentials as proof of previous acceptances, approved state changes from other documents that feed into the new documents that are created.
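The nested verifiable-credential references just mentioned, where a new document carries proofs of previously approved state changes, could be modeled roughly like this. The field names here are hypothetical, invented for illustration; they are not the work group's actual document schema or the W3C VC data model.

```typescript
// Hypothetical shape of a document record whose credential embeds references
// to previously accepted documents, so a verifier can walk the chain of
// approvals back through the workflow.
interface CredentialRef {
  document: string;     // e.g. "MSA"
  credentialId: string; // id of the previously issued credential
}

interface DocumentRecord {
  document: string;     // e.g. "PurchaseOrder"
  initiatedBy: string;
  approvedBy: string[];
  needsZkpValidation: boolean;
  needsAnchoring: boolean;
  priorApprovals: CredentialRef[]; // nested VCs proving earlier state changes
}

// A new document is only well-formed if every prior approval it references
// resolves to a credential that was actually issued.
function prerequisitesSatisfied(
  doc: DocumentRecord,
  issuedCredentialIds: Set<string>
): boolean {
  return doc.priorApprovals.every((ref) => issuedCredentialIds.has(ref.credentialId));
}
```

A real implementation would verify the signatures and proofs inside each referenced credential rather than just checking its id, but the chaining idea is the same.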
C: Building on that Story Journey are the Story Journey advancement predicates, which I think give a decent idea of how people can conceptualize the flow of a Baseline process, where you have a bunch of parties that come together: you start with an initial state, you propose a state, and then you accept that state change. All of these give more granular steps for how you could create a purchase order.
C: What goes into that decision and how it moves, with more technical details. From there we created supply chain demo user stories that take all of the rules for advancement of each step of that supply chain process and break them out into epics and user stories, and even example verifiable credentials that contain all the properties from the Story Journey: sections for proofs, sections for signatures.
C: Rules about why you would need a signature, why you need to verify a zero-knowledge proof; user story descriptions like, "as a buyer, I need to verify the signature so that I know the catalog has been approved, so I can create a purchase order." What it does is take a lot of the technical discussions that happened, and a lot of the discussions about the business requirements, and merge them together into user stories that developers, if they were to come across them, could pick up.
C: Maybe they're not fully complete, but they give a much better idea of how you could create a trustless supply chain, or some sort of trustless application that takes these documents or state objects, synchronizes them, verifies them with zero-knowledge proofs, anchors them to a CCSM, and globally signals these state changes for everybody, like a common bulletin board, I guess you could say. So that's a full overview.
C: A high-level overview of the work group. I just submitted a PR to the repo, so stay tuned for that. If you're a core dev, please review it and let me know whether that's the best place to put it or not. Are there any questions on this work from anyone?
B: No, I think it looks good, and I'm glad to hear it's going to be accessible within the repo, since that's the place anyone who's just finding the project can access, and from there they'll have pointers to where the docs actually live. So thanks for doing that. It also seems that you have some recent blogs on the mesh blog page related to baselining.
C: Yeah, sure. Our team, the mesh team, has released some blogs, and I can't take all the credit: these are largely from the mind of Andreas Freund.
C: These three blogs right here, if you can see my mouse, are the most recent blogs our team has released. The first one is about zero-knowledge IoT: how the Baseline Protocol can secure the edge of networks. This is a great blog that talks about IoT devices and their specific limitations: how they lack a trusted, verifiable identity, how they can't verifiably prove the source of the data they produce, and how they can't prove that any transformations of their data are done correctly.
C: All of these properties make IoT devices a prime candidate for verifiable attestations in the form of zero-knowledge proofs. Take verifiable attestations of identity, location, and membership, all things our team has worked on: Keith, you created a trusted location service that we did with a Mobility Consortium.
C: When you take these things, it allows IoT devices to be equal agents to humans in specific processes, and you can create Baseline processes and secure the edge of networks, because everything is done in zero trust under zero knowledge. Overall, this is a really great report that provides a lot of insight into the reasoning and logic behind the Baseline Protocol, how it can secure your system, and how you can collaborate with others trustlessly, by not trusting but always verifying.
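The "don't trust, verify" pattern for devices described above can be sketched as a gate on attested claims. This is a conceptual stand-in, not code from the blog or the Baseline Protocol: the `proofValid` flag abstracts away the actual zero-knowledge proof verification.

```typescript
// Illustrative zero-trust admission check for an IoT device: each claim the
// device makes (identity, location, membership) carries an attestation whose
// proof must verify before the device's data is used in a process.
type ClaimKind = "identity" | "location" | "membership";

interface Attestation {
  kind: ClaimKind;
  deviceId: string;
  proofValid: boolean; // stand-in for verifying a real ZK proof
}

// Zero trust: a device is only admitted to a process if every required claim
// kind is present AND its proof verifies; nothing is taken on faith.
function admitDevice(attestations: Attestation[], required: ClaimKind[]): boolean {
  return required.every((kind) =>
    attestations.some((a) => a.kind === kind && a.proofValid)
  );
}
```

The interesting part in practice is what replaces `proofValid`: verifying a succinct proof that the device produced, transformed, or located its data correctly, without revealing the underlying data.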
C: So this is a great blog; definitely read it. Another one we released is about the Baseline Protocol across industries, specifically the telecommunications industry. In this article we talk about some of the general things we usually touch on in each blog, about why multi-party zero trust under zero knowledge is needed given the regulatory and cybersecurity pressure right now, and then later in the article we talk about why telecommunications in particular is an industry that needs multi-party zero trust under zero knowledge.
C: It's because they coordinate with so many parties to deliver end-to-end services, and we provide a simple example: providing end-to-end connectivity through telecom. If you want to connect a Singapore office to a New York office, you need to go through four different companies to provide that connectivity service between the two offices, which means there are a lot of trust assumptions involved.
C: There are cybersecurity vulnerabilities, and the best way to shore all of this up, ensure that you're regulatory-compliant, and avoid opening yourself up to undue cybersecurity risk, is to use the Baseline Protocol and secure everything that way. Another cool thing about this article is that it talks about how telecom is thinking about zero trust and zero knowledge.
C: They've created some of their own standards. Another thing highlighted here is the mesh showcase, a quarterly membership meeting that focuses on these multi-party zero-trust-under-zero-knowledge use cases, and it's something our ConsenSys Mesh team is working on.
C: We're working on using the zkEVM as the private, verifiably correct execution framework for trusted business automation between multiple parties, and then using DAOs as the governance and business automation framework that these processes can run in. So definitely check this article out to get an idea of what telecom is doing in the realm of zero trust and zero knowledge, and for more insight into what our team is doing to further advance the Baseline Protocol.
C: It's again worth noting that a lot of the pilots mentioned in the article are all within the family of Baseline, using the Baseline pattern to achieve this trust and synchronicity between multiple parties, and heightened security. And then, finally, back to the topic of Consensus.
C: I gave a talk at the Hyperledger booth about decentralized business automation as a service. This goes hand in hand with that telecommunications article, because we're working on building decentralized business automation, and we're starting with telecom first. This article gives a little summary of the presentation I gave, and I could blow up this little diagram here.
C: If it works... I guess it isn't really working, but we're working on using zkEVMs and bridges to connect all these Web2 systems and do trusted, decentralized business automation. So take a look at this article, or just watch my presentation; we have a link here.
C: That's all I have to say on these articles, but Andreas, you contributed to them; if there's anything else you want to share about them, please do.
A: So, core dev update, starting with BRI-3, the SRI, the simple reference implementation: it is currently on Milestone 4 of its grant request and is making pretty good progress on that milestone. The team is currently wrapping up the use case doc that we've been working on. We spoke about it in the last GA as well, but this doc outlines the architecture that would be required to implement a basic use case.
A: The doc is there to get everybody on the same page, architecturally speaking, about what developing that would entail, so that we can then come to a consensus on what the final use case would be and then go about implementing it.
A: So that's where the use case doc is. Milestone 4 itself requires, as part of its completion, a complete use case, so that doc is a really big first step in getting there. Once that doc's in place, it'll be a pretty quick ride to having that use case put together.
A: Milestone 4 also outlines ZKP components, requiring that all proofs are generated and stored accordingly and can be verified by participants, as well as a fully functioning happy-path test case, which would include those ZKP components, with everything stored correctly and generated in zero knowledge. As for the current state of those things: there is a happy-path end-to-end test case that creates two BPI subjects and a work group, and adds the created BPI subjects to that work group.
A: I can show that PR real quick, just to give an idea of what that looks like. A disclaimer on what I just said: this test case is the beginnings of satisfying that Milestone 4 requirement. It does not include all of the ZKP components necessary to satisfy the milestone, but it does, like I said, create those work groups and work steps and add them to the work group.
A: So it is the beginnings of finishing that requirement. To satisfy the rest of this milestone, the spec and all of the architecture surrounding making this work is very much in line with the architecture of BRI-3.
A: The main focal point of how everything is being designed is to be as simple as possible, to be a true MVP, in the sense that it can be the floor plan, the baseline of an implementation that anybody can pick up, quickly understand how it's been put together, and build on top of, without a lot of extra or unnecessary fluff built into it.
A: So, not to go line by line in code, but this is that test case, and it does what you would expect: it creates all the different participants, it utilizes Prisma to seed some keys, and then it uses those keys to spin up all of the necessary components, storing everything it needs to. It creates the work group, checks that the proper response gets returned from the server, and then verifies that everything is as expected.
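The happy-path flow just described (create two BPI subjects, create a work group, add the subjects, check the result) can be sketched as a skeleton like the one below. This is not the BRI-3 test case, which lives in the Baseline repo and runs against the real NestJS server with Prisma-seeded keys; it is a dependency-free, in-memory stand-in whose class and method names are invented for illustration.

```typescript
// In-memory stand-in mirroring the shape of the BRI-3 happy-path test:
// create BPI subjects, create a work group, add the subjects to it.
interface BpiSubject { id: string; publicKey: string }
interface Workgroup { id: string; participantIds: string[] }

class InMemoryBpi {
  private subjects = new Map<string, BpiSubject>();
  private workgroups = new Map<string, Workgroup>();

  createSubject(id: string, publicKey: string): BpiSubject {
    const s: BpiSubject = { id, publicKey };
    this.subjects.set(id, s);
    return s;
  }

  createWorkgroup(id: string): Workgroup {
    const w: Workgroup = { id, participantIds: [] };
    this.workgroups.set(id, w);
    return w;
  }

  // Returns false (a "bad response") if either the workgroup or the
  // subject does not exist, mirroring a server-side validation failure.
  addSubjectToWorkgroup(workgroupId: string, subjectId: string): boolean {
    const w = this.workgroups.get(workgroupId);
    if (!w || !this.subjects.has(subjectId)) return false;
    w.participantIds.push(subjectId);
    return true;
  }

  getWorkgroup(id: string): Workgroup | undefined {
    return this.workgroups.get(id);
  }
}
```

The real test additionally asserts on the HTTP responses and, per Milestone 4, will verify the generated ZK proofs; this sketch only captures the create-and-add choreography.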
A: So, from that sense, that's where Milestone 4 is at the moment; it's coming along.
A: Everything it requires, that end-to-end case with the ZKP components, is very close to the stage where the truly necessary implementation can be worked on, and in that sense Milestone 4 would be wrapping up once those things are fully complete. So that's BRI-3's current status. The core dev group is also working on interoperability for the Interop Fest that we are planning.
A: There is a separate interoperability work group, many of whose members overlap with the BRI-3 work group. This work group is going through all of the interop-related specs in the standard and deciding exactly how those specs would have to be implemented within the two BRIs involved, BRI-1 and BRI-3.
A: In order to achieve that interoperability for the purposes of the Interop Fest demo, we're going through every single requirement and seeing what it would take to implement, or, if not, what is already implemented and already interoperable between the two, and whether that component is already satisfied by the current state of both of them. As of the last time we met, the team is also outlining an API spec for interoperability.
A: These are things we are posing to the group, things that are being thought about. Alongside all of these specifications, there's also a suggested use case being worked out.
A: The proposed use case we've been thinking about thus far involves synchronizing banking data. It's not yet the final use case; we're still discussing, maybe tweaking that one or suggesting new ones, but that's where we're at. I can show a little bit of the notes, just to show what all of that looks like in practice. This is from the interop work group notes document, and all of this is in the Baseline open-source public domain in Google Drive.
A: As you can see, a requirement involving interoperability is pasted and then discussed below: what we would need, or what we already have, to satisfy that requirement. For anyone who's looked at the spec, it's long, and there's a lot of interop in it, so this is kind of a long effort. It's going to take some time, but it's pretty necessary in order to demonstrate the true power of Baseline.
A: Baseline really comes out of its shell with interoperability, with utilizing more than one party, multi-party of course. So this is a necessary evil to go through and make sure we can take care of it. And here we have the API spec that we worked on in the last few work group sessions, along with those to-do questions I mentioned earlier. So that's the state of those two work groups. Andreas, if you have anything to add on interop, or Keith, if you have anything to add for the BRI, feel free. But that's my update.
D: Apologies for not turning my camera on; I have hardware issues I've been having for weeks and can't figure out, so no smiley face for me, except my picture. So, where we're at: we finally came to an agreement in the PGB, well, between Charles, Dan, and me effectively, on what testability actually means with regard to a requirement, which ends up being a logical test of the requirement.
D: So you have to write a logical test for the requirement, with preconditions, test steps, and done criteria, success criteria for that test, such that implementers can basically follow along and relatively easily write a test. Well, I don't know about easily, but they can write a test.
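A testability statement of the kind just described, with a precondition, test steps, and success criteria, might translate into executable form roughly like this. The requirement text and all names below are invented for illustration; they are not taken from the Baseline standard.

```typescript
// Hypothetical rendering of a testability statement as an executable check:
// precondition -> test steps -> success (done) criteria.
interface TestabilityStatement {
  requirement: string;
  precondition: () => boolean;    // what must hold before the test runs
  steps: Array<() => void>;       // the actions the implementer performs
  successCriteria: () => boolean; // the done/success condition
}

function runStatement(t: TestabilityStatement): boolean {
  if (!t.precondition()) return false; // cannot run against a bad starting state
  for (const step of t.steps) step();
  return t.successCriteria();
}

// Minimal in-memory storage to exercise the statement against.
// Invented example requirement: "a BPI MUST persist every state object it
// accepts and return it unchanged on retrieval."
const store = new Map<string, string>();
const statement: TestabilityStatement = {
  requirement: "Persist accepted state objects and return them unchanged",
  precondition: () => store.size === 0,
  steps: [() => store.set("state-1", "hash-abc")],
  successCriteria: () => store.get("state-1") === "hash-abc",
};
```

The point of the structure is that an implementer can swap in their own storage or BPI behind the same precondition/steps/criteria, which is exactly what makes the requirement testable.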
D: That's the good news. The bad news is that writing such testability statements is a tad time-consuming. Well, thanks to ChatGPT, it's less time-consuming than it would be otherwise.
D: You can write those prompts, insert the requirement, and give any definitions that have been written around that requirement, and it actually gives you a good place to start. Currently, Section 7 of the spec, which is around storage, has been used for this.
D: That's the guinea pig. It has been updated and merged; the Section 8 PR is out, and I'm currently working on Section 6.
D: Six is slightly expansive, as is five, so I hope to have six done in the next couple of weeks and create a PR. I think there are something on the order of 80 to 100 requirements in that section six. I think I'm at 248 or 250, something like that, and I still need to get up to like 279.
D: Plus, there are some optional and SHOULD requirements in there too. So we're getting there, slowly but surely. That's the journey of the spec.
D: Once we have all the testability statements added, we can move it to full draft status, hopefully sometime during the summer. Then we just have to wait for the interop demo to be completed, and for the documentation of that demo and of BRI-1 and BRI-3 to be added to the PR, such that we can move the standard to final specification status, hopefully sometime early next year, and then we can say that it is done.
D: So the key, I think, is that what we need to do is drive more outreach and get people involved, because a standard is only as good as the people who are actually using and implementing it, and the companies and enterprises that use it.
D: That's an important piece. So yeah, that's the update from the standards working group and the editors.
D: TSC update: yes, so we already have the interop RFP out and published, waiting for participants to submit. We have a little over a month left, I think until 6/30 or something like that; we might have to extend that deadline. The TSC also agreed on the wording of the Outreach Roadshow RFP, which has been published.
D: What we need to do now, and it's the TSC's job, is make a concerted effort before the summer months kick in, when basically nothing is going to happen, to start generating some donations. We have commitments for matching: for the interop RFP, we have a commitment for a matching fund, and we need at least two participants, who would receive 25k each.
D: So we need to raise 25k to get a 25k match, and again for the Outreach roadmap RFP, I think we have a number of 25 or 30; I forget exactly what it is. So we de facto need to net raise around 50k for those two RFPs to get going. Mark already talked about the blogs the TSC has published, and we will have another blog coming out next week.
D: It's going to be sort of a surprise. It's like a box of chocolates: you never know what you're gonna get until it's out. So, a little bit of suspense, and then we already have another one for June on the docket.
D: Otherwise, the TSC has made a proposal to the PGB to suspend, for this year's 2023 RFPs, the 25% grant fee that is split between the EEA and OASIS, because potential grantees have voiced significant concern about that 25%, which would be on top of the eight percent that the Open Collective platform collects from a donation.
D: So we'll see how that goes. I'm hopeful we can get this waived, at least for this year, such that we can get more sponsors, new sponsors, and get in grantees that we can convert to sponsors in 2024. That's the goal, that's the mission, and that's what the TSC is accountable for. That's it as far as the TSC is concerned.
E: Hey, so I just wanted to tell the guys that I really appreciate the effort they're taking to catalog their work, and the systematic way they're building their user stories. I think that really differentiates Baseline from most of the other projects we see coming out of the Web3 space. So, a great shout; really appreciate it.
F: All right, yeah. So we right now are kind of tumbling down the rabbit hole of building a backend for Noir using Halo 2. I think something contextual to Baseline would be: it seems like the lingua franca of Baseline right now is Circom, right?
F: So I'm kind of wondering, I guess, if there's been any exploration around some of the other proving systems. I know Andreas has talked a lot about Plonky2 before. Is that something, including interoperability as well, where there's interest in looking at the different advancements that have come out even in just the past year or so, and seeing what kind of proving enhancements there could be?
G: I think I can jump in here. So, I work for RISC Zero. We have our own zkVM, primarily programmed today in Rust, and we are looking at ways we can support Baseline.
F: And that's a super compelling one, because RISC Zero, RISC-V... is that actually how that works? Are you able to essentially take the RISC-V instruction set, let's say you've got something running on it: can you essentially verify those sorts of machines in RISC Zero, or is it really a standalone thing?
G: I don't want to turn this into a RISC Zero zkVM call, but yeah, that's the technical approach. From inside the proof code, it just looks like a small RISC-V computer: 32-bit, with a couple hundred megs of RAM. You can do whatever work you need to do, and I think the places where that might work well with Baseline are places where you want to plumb things like existing message formats and serialization formats all the way through into the proof code.
G: Maybe you're more familiar with Solidity, so it's easier to write the business logic there, but when you have the extra baggage of wanting to unpack a whole JWT or something like that, rather than rebuild all of that from scratch, you could lean on existing library code and do it through the zkVM, or through Bonsai.
F: It certainly seems like a compelling application, especially looking at microcontrollers and IoT: having that entire stack fully verified, or at least the controller itself entirely running verified code, seems very interesting.
D: Generally speaking, especially for interop, you just need to agree on the prover system, right? And then, depending on the prover system, you need to decide which parts of the setup you want to use or not, and therefore also which to trust.
D: So we're probably going to go for the interop demo with something that is simple. But I think it really doesn't matter whether you're using Noir or some other DSL, you know, Circom or whatever, for your circuits.
In the end, what matters is what comes out at the end: what the proving system is, what the verification keys are, what the witness is, and so forth, so that you can verify the proofs; and it's important to understand what those proofs mean.
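The point being made here, that interop hinges on agreeing on what comes out of the prover (the proving system, the verification key, the public inputs/witness, and what statement the proof actually attests to), can be summarized as a shared artifact shape. The field names below are illustrative assumptions, not from the Baseline spec.

```typescript
// Illustrative contract for what two BPIs would have to exchange in order to
// verify each other's proofs. Without agreement on every one of these fields,
// a proof is just opaque bytes.
interface ProofArtifact {
  provingSystem: string;   // e.g. "groth16", "plonk", "stark"
  verificationKey: string; // serialized verification key for that system
  publicInputs: string[];  // the public witness values the verifier sees
  proof: string;           // the proof itself
  predicate: string;       // the statement the proof attests to
}

// Interop precheck: both sides must agree on the proving system and on the
// predicate the proof expresses before the proof bytes are worth verifying.
function compatible(
  artifact: ProofArtifact,
  expectation: { provingSystem: string; predicate: string }
): boolean {
  return (
    artifact.provingSystem === expectation.provingSystem &&
    artifact.predicate === expectation.predicate
  );
}
```

The actual cryptographic verification step is system-specific and deliberately omitted; the sketch only captures the agreement that has to happen before it.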
D: Therefore, the predicate that the proofs express is important to understand, because otherwise the proofs themselves are meaningless, right? You have to know what you're proving. I'm having a hell of a fun time.
D: Going through the prover systems of the zkEVM, where you're using STARKs and then generating the SNARK, there are so many subtleties and nuances and optimizations that can still be done. At this point in time it's still very complex, and it has to be, because you're simulating a CPU, right?
D: That's anything that is ZK, whether there's an "E" in front of the VM or not, whether it's a zkJVM or whatever: you're simulating the CPU, so it is by design very complex. Now, the question for Baseline is: how do you plug that in? That's where, you know, RISC Zero comes in: where and how do you plug in these execution frameworks most efficiently and effectively? And how much effort do you want to spend?
D
These provers typically aren't cheap, right, unless you use proper optimizations. But, you know, if you're running things that are over 64 bits on a 64-bit machine, you are running into trouble, so you're incurring significant overhead. Hence it's really important that we're looking at CPU-optimized prover systems that are attuned to current CPU architectures, or GPU architectures.
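For context on the 64-bit remark: the field elements in common SNARK systems are roughly 256 bits wide, so on a 64-bit CPU each element spans several machine words, and every field multiplication turns into many native multiplies plus a modular reduction. A rough sketch, assuming the widely used BN254 scalar-field modulus:

```python
# A ~254-bit SNARK field element does not fit a 64-bit machine word, so
# provers represent it as four 64-bit limbs and pay a multi-word cost
# (plus a reduction) on every multiplication. The modulus below is the
# BN254 scalar field used by many SNARK toolchains.
R_MODULUS = 21888242871839275222246405745257275088548364400416034343698204186575808495617

def to_limbs(x: int) -> list[int]:
    """Split a field element into four 64-bit limbs, least significant first."""
    mask = (1 << 64) - 1
    limbs = []
    for _ in range(4):
        limbs.append(x & mask)
        x >>= 64
    return limbs

def field_mul(a: int, b: int) -> int:
    """Schoolbook field multiply. Real provers use Montgomery form to avoid
    the division hidden inside the % reduction."""
    return (a * b) % R_MODULUS
```

Every one of the millions of such multiplications in a proof pays this multi-word cost, which is where the CPU and GPU tuning being discussed comes in.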
D
Hence the work that RISC Zero and others are doing is really important.
D
I think we're at a point where we kind of see where it's going. But, you know, where I'm still getting tripped up is: I need to run, like, a 96-core or 128-core machine with, like, 512 gigs of RAM in order to get somewhat appreciable performance. That is not really a long-term sustainable model, except for very, very high-value use cases.
D
You know, like half a million to a million a board, and then the appropriate operating cost, and, you know, buying a few hundred of those at a time, right, but only for that one use case. Basically, for everything else, it becomes really important that we think about the advancements of how to...
G
Yeah, I mean, I think everybody working on proving systems is really aware of and concerned about performance. For sort of business logic, speaking for our zkVM, which is the one I know best, for kind of typical business-logic-scale things you don't need that big of a machine, and we're working on getting stuff to typically fit within a couple gigs of RAM and, you know, hopefully a few seconds or tens of seconds on a normal CPU, and then, of course, much faster on a GPU.
G
So the situation is getting better pretty rapidly across the board, including the stuff that we're working on. So we think we will converge on something completely practical in a time frame that fits with what Baseline is doing. It's kind of one of our focuses.
D
I mean, that's awesome, and that's really, really important, because you'll end up having to run this on, like, you know, retail hardware.
G
Yeah. Well, there are actually two areas where people are really pushing on performance and memory footprint. One is, there's a set of people who want to be able to generate proofs in-browser, in Wasm, and 32-bit Wasm has some pretty severe limitations on how much memory you can use. So, yeah, that's a big one. And then the other is on mobile, where it's not so much memory, because modern phones let you use a lot of RAM, but energy consumption is a big deal and thermals are a big deal.
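On the 32-bit Wasm limitation mentioned here, the numbers are easy to check: Wasm linear memory grows in 64 KiB pages, and wasm32's 32-bit addressing caps it at 65,536 pages, i.e. 4 GiB of address space, with many engines allowing less in practice. A quick back-of-the-envelope sketch:

```python
# Wasm linear memory is sized in 64 KiB pages; wasm32 uses 32-bit addresses,
# so at most 2^32 bytes (65,536 pages = 4 GiB) are addressable. A prover
# that wants hundreds of GiB of RAM simply cannot run in that environment.
PAGE_BYTES = 64 * 1024
WASM32_MAX_PAGES = 2 ** 32 // PAGE_BYTES

def pages_needed(n_bytes: int) -> int:
    """Round a memory requirement up to whole Wasm pages."""
    return -(-n_bytes // PAGE_BYTES)

def fits_in_wasm32(n_bytes: int) -> bool:
    return pages_needed(n_bytes) <= WASM32_MAX_PAGES
```

This is why the memory-footprint work described in this exchange matters so much for in-browser proving.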
G
So we've got pressure to go much smaller than I think even a server application would be.
D
You know, yes, but until you have the server side figured out, it's really hard to go beyond that, because, I mean, doing stuff in Wasm in the browser is extremely painful, and then the trade-offs become severe.
D
With regard also to security, I think there is a path where you will have these types of prover systems run in their own dedicated VMs on the OS.
D
Like, you know, you can run with a hypervisor, there are ways to do that, so you can actually access the CPUs directly. And I'm pretty sure this will come fairly quickly, where you'll be able to access these types of external resources from the browser, right? Because the browser was never designed to be a high-performance compute environment, and I think right now it's being incredibly overloaded, because the supporting infrastructure around the browser on the OS side is just not there. And similarly on the mobile side. So I really see the big advancements coming on the operating system and chipmaker side.
G
Yeah, chip makers are slow. I know a little bit about that. But yeah, no, I mean, it's going to be interesting to see how this all unfolds. I mean, we do have proof generation in the browser working today, and I think there are a few groups that, yeah, went that way, right?
G
Let me get back to you on exactly what it was that our team did. But I think, in general, even though ZK is kind of a new area, and Web3 as a slice of overall software is sort of a new and relatively small area, gaming is massive. And one of the things that we all benefit from is that the gaming people are putting a huge amount of pressure on every platform to get more and more and more performance, right?
G
So, you know, I mean, things like WebGPU and 64-bit Wasm support and SIMD in Wasm, those are all things that, probably, you know, a few of us by ourselves couldn't motivate, say, Google or Apple to put into their browsers. But we benefit from gaming and creative apps as well all kind of wanting the same stuff.
G
Yeah, and the deployment story for, like, Chrome extensions, once you want to go outside of something that was embeddable... anyway, there's a whole technical rabbit hole about why we're trying to glue it together with exactly the technologies shipping in browsers today, even though we know that it will eventually get better. But yeah, that's all stuff that we pay super close attention to.
D
Yeah, so, but I think that with a driver like gaming, or generative AI, we will be able to piggyback on the improvements that are coming down that highway.
G
Yeah, and some of that stuff manifests on the server side as well. I mean, I've been around long enough that I remember the original, you know, SIMD in commodity x86 chips was really driven by gaming. It was, you know, people wanted Quake to run fast, and, you know, here we are, we have AVX-512 in our data centers.
G
So I think that stuff is all good for us. And I mean, I keep pretty close track of developments in the CPU world and in semiconductors, and it'll benefit even business applications, because of things like the zkVM, or some of the crypto acceleration you need for other blockchain infrastructure.
D
Yeah, I think that on the mobile side, the next-generation Snapdragon will have a lot of the features that you'll need. That we'll need.
B
Awesome, thanks for engaging in the discussion. We always are able to fill up the entire open floor because there's so much to talk about. All right, so that will conclude our May general assembly for the Baseline Protocol. We will have our next one in June, but next week we will be hosting Polygon ID on the Baseline Show. So please join the live studio again, or tune in live, or let us know if you need links to either. But again, thank you all, and we will see you next week.