From YouTube: The Baseline Protocol: April 2023 General Assembly
Description
The Baseline Protocol April 2023 General Assembly will take place on Wednesday, 4/19, at 12 PM EST, 6 PM CET and 10:30 PM IST!
This is an excellent opportunity for the larger Baseline community to stay updated on the Baseline Protocol's progress across various working groups and engage with the core team.
We invite you to tune in, join the live audience, and share your own updates.
C
Well, yeah. So we'll go through our usual monthly General Assembly updates, where all of our different teams will give updates on the work going on and what's to come. We'll get updates from Outreach, research, core devs, and the TSC; we'll talk about the roadmap items, the interop events, and the use case; and then we'll have an open floor to see if there are any other updates or conversation points from anyone on the call. So to kick us off, Mark Haddle, can you tell us what's going on in the Outreach team?
D
Yeah, right now we have already had our third blog of the year, which was released early last week. This was the one where my turn was up, so I wound up writing this one.
D
The biggest thing the blog really worked on was all the different approaches to multi-party data sharing, because there are so many manual processes and ad hoc solutions that are cobbled together in various different ways. There isn't really a unified solution that can deal with the problems. Pardon me for a second while I get my cat down.
D
So in the blog post I delineated the different approaches that have been tried, starting with EDI, which has been around for 40, almost 50 years and pretty much remains the accepted de facto way of sharing data objects right now. Within EDI you also have value-added networks, which are essentially clusters of EDI participants.
D
The thing is, when you start getting into a multi-party environment, EDI is a little bit costly to set up and maintain, and again, it's old technology, but it works.
D
But you still have a lot of limitations: what happens if you have participants whose data sets aren't constantly updated and the data isn't put through the EDI? That's where blockchain really emerged, and the two flavors of blockchain I talk about in the article are public blockchains and private permissioned blockchains.
D
We talk about the advantages and, of course, the trade-offs and limitations of each one. There isn't really one emerging out front right now; depending on the use case there are many different approaches, and we're still trying to figure out how the technology is going to mature.
D
We also looked at a couple of other solutions, managed services where you are cloning and moving your data to a common repository that has encryption and permissioning and is entirely cloud-based. Generally this involves forcing your data into a shared architecture and giving up custody and control, and there are governance questions about these new approaches: who owns the network, and what happens with dispute resolution?
D
Looking at all of those shortcomings, we talked about how Baseline does not suffer from the same design problems that a lot of these other solutions have. You're not cloning and moving data or giving up custody and control; your data stays in each system of record. It's a lot easier to secure data at rest than data in motion, and this is what we wanted to have.
D
You're coordinating this internally, but also across your corporate borders. So that's what we really showcase within this month's blog post. Coming up next, we've got another one; I believe Yoav has this one, and it's all about being a core dev within the Baseline Protocol. It's really exciting; I didn't know a lot of this stuff, but you guys actually did, and sometimes I pay attention.
C
Well, yeah, thanks for that, and I'm looking forward to that blog; I think we'll plan to publish it today or tomorrow. And that was a great point: like Mark said, you have to be in the community to know how things work, just because there are a lot of moving parts, but it isn't too hard. So if we have resources that make onboarding a little easier, that will help us continuously grow.
C
Also, we have a few events coming up. We have events in Austin taking place next week with ETH Austin and Consensus by CoinDesk.
C
We don't have a Baseline-focused event, and none of our major involved organizations are hosting anything; we did some things at ETHDenver, but I think we will be on the ground. Our Mesh R&D team will be there, and I think some other community members will be around as well, as will the EEA.
C
So we will meet in person and try to get some things done where we can, and then our Mesh R&D team is going to be presenting at the Hyperledger booth next Wednesday. We have the last slot, where we'll be talking about the ZKVM, and there will definitely be mentions of Baseline, so that will be a unique opportunity for us to share with those in the Linux Foundation ecosystem, as well as those interested in learning about Baseline.
C
Also from the Outreach team: we have our Outreach RFPs at the final stage. We just need to wrap them up and get them posted; I think that's on the list for the next day or two. Mark, our Outreach chair, mapped out the roadmap items that are relevant to Outreach, so what the large events for the year will look like and what types of resources are needed to support them. We'll share more on that next time. All right, so, Mark.
F
Yeah, sure, thank you. Right now in the research work group we're about halfway through all the user stories that we're creating for this zero-trust, zero-knowledge international supply chain, and that's really all there is to say for the update.
F
We hope that everybody who's paying attention to the materials we're creating is enjoying the content so far, and if there's anybody interested in joining the work group sessions, please do. I just want to add that these sessions are great for anybody who's interested in understanding a little bit more about how the Baseline Protocol works and how you can create ZT ZK chains, and for anybody with relevant business experience who can write user stories and requirements that are helpful for developers who want to build new use cases using the Baseline Protocol.
B
All right, yes, thank you. The core dev updates are mainly going to encompass the SRI and BRI-3 work group and where that has progressed since the last update. Currently the SRI is on Milestone 4. The team has gone through and discussed all of the necessary components that Milestone 4 would encompass and has opened relevant issues in the GitHub repo covering those different components.
B
So there are several open issues, as I just mentioned, that are relevant to Milestone 4: things like the Merkle proof in the ZK circuit, state object CRUDs, happy-case end-to-end testing, and mocking the Prisma Merkle tree CRUD; things that make up the overall objective of Milestone 4.
B
There are some PRs that have been opened, I believe three or four, all related to the open issues.
B
They've got a couple of comments and some back and forth on their implementation, but they should be updated and reviewed fairly soon. Those PRs accomplish the ZKP component that connects to the blockchain itself, the signature addition to the ZKP component, the Merkle tree CRUD, and the mock Prisma client, which will remove the need for mocking the transaction storage agent, something that involves a lot of overhead and redundant code on the developer's side.
B
So, alongside the actual development time being spent, a lot of time has gone into discussing what kind of use case we want to go forward with: whether it should be something real-world that's actually going to be implementable immediately, or something more in line with reference material, built on mock-style data. There is current documentation that has been prepared, and everybody is looking at the proposal itself.
B
It has been decided that it should be something incredibly simple and straightforward, without going into too much complexity, which matches the overall design architecture of the SRI: to be as simple and easy to understand as possible, and to become the starting point, the base layer of an implementation, so that you can take it and then add a new custom implementation, something more novel, on top of it.
B
The currently proposed use case is to have two counterparties acting as individuals in a buyer-seller style relationship, something that's ubiquitous and easy to understand. Things like this are often brought up when example use cases are needed.
B
And so there is a document, which maybe we can share, I'm not sure, that goes through the details of how this use case would be implemented.
B
It utilizes the current framework of where the SRI is now. It's still quite early stages; the team is providing a lot of feedback back and forth, discussing it and trying to decide whether it's exactly what we want to go with or whether it needs to be tweaked.
B
But this is discussed in our weekly calls, every Thursday at 10 a.m. Eastern time, for anybody who's interested in joining in and either listening in or providing feedback on what they think is more appropriate.
A
The implemented use case will actually be within the BPI, but it's more a way for the team to align on the concepts and, largely, on the path that would be traveled by whatever use case is finally decided on as the counterparties synchronize their states, and so on and so forth. So it's good to keep that in mind if you look at that document: it's aimed at bringing an understanding of the underlying functionality that's going to be implemented, not necessarily the exact use case.
G
Absolutely. So we were in an interesting situation, because the community project requirements for a standard listed a MUST requirement about including testability statements, without defining testability. Note that the W3C has spent the better part of three decades trying to define testability in a way that everybody can accept. Needless to say, their efforts to define this unanimously have been utterly unsuccessful.
G
Just as a side note, however, we are a little bit more pragmatic, so we tried things out and iterated towards a more practical definition of testability. After much work, iterations, complaints, and facetiousness going back and forth between all interested parties (primarily from me, because I'm a facetious person), we came to the conclusion that these testability statements, in order to be useful for implementers, should be a human-readable test.
G
A logical test. I can share an example, but basically: if a requirement states that, for example, a workstep must have an input, process steps, and an output...
G
...then you can test whether it has an input or not, whether there's a change from the input into an output, and whether there is an output, irrespective of what the details of the output are. That's the type of thinking this is based on.
G
We are currently at the stage of finalizing a general structure for how we should write these. I think in the end this will also be really helpful for implementers, because they can basically put some meat on the logical-test bones by adding their own very specific tests. Again, those are non-normative statements, so it really doesn't require that the test be exactly like this.
G
This is just a guide. I think we're close to that, and now that we have this clarity we will be able, not in short order but in a reasonable time frame, to complete this task. We'll have to rewrite some of the testability statements that we already have, but it's not going to be that much work. Thank you, ChatGPT.
G
It's a very helpful tool for writing the first version, the eighty-percent version, and then you tweak it to make it work. I got it down to about five-odd minutes per requirement using ChatGPT, or any other equivalent large language model tool.
G
So the editors are working on that now with renewed effort, going through all those requirements and adding testability statements, which, unfortunately, will add a decent volume to the actual length of the documents, but I think it will help overall.
G
Even though there's some bickering about whether these are all really required, if the requirements are well written, then yes: if you can write a logical test for it, it should be good enough.
G
Yes, right, TSC updates. Mark already talked about the blog that was published. If you don't know, every TSC member has the responsibility of producing a blog during their one-year tenure, so we're filling up those slots with different points of view, because we have a decent variety of backgrounds on the TSC.
G
We are actually getting very interesting views on Baseline and how it's useful for different types of roles, entities, and people. I think that was Autumn and Mark; great job. Yoav is up next, talking about being a core dev, or becoming one, which I think is great. So many thanks to Mark, and a high five to Yoav for taking on this work.
G
We have published a request for proposals for the interoperability demonstration.
G
We obviously then also need grant funding. So if any of you are eager and keen either to contribute in kind, with your time and brain cells or any software you want to showcase, or to open up your purse strings and support any particular RFP with a grant, please do so. All the instructions are in the RFP, and we can help you.
G
Obviously you can go on Slack and reach out to the TSC if you're not sure how to do that, and we will gladly help you figure out how to sponsor the RFP with your money, or cryptocurrency; we're not discriminatory against that either.
G
Speaking of the interop: we have started and launched an interoperability working group, which meets Friday at 7 a.m. Pacific, 10 a.m. Eastern, 4 p.m. Central European Time. The goal there is for everybody interested in the interoperability subject, and potentially wanting to reply to the RFP, to work together to define the actual use case, and then also to write things down.
G
There are a whole bunch of these annoying interoperability requirements in the Baseline standard, and the question is: how the heck are we going to implement those? Which is actually a great test to figure out whether they are really implementable and whether they make sense or not. So, with great discussions going on there, we have an open Slack group on the Baseline Slack called the interop working group. Please join if you're interested.
G
The more the merrier; we typically have five or six people on those calls working together. There's also a document pinned to the group.
G
It has all the notes and where things currently stand, which brings me to the last point: the use case that the group has settled on. It's something very common that happens across all industries, namely the synchronization of banking data between a company and a bank, or a company's customer, without ever exchanging that banking information, because you're not allowed to.
G
We have interest from various sponsors to provide different types of systems on each side, actual enterprise systems, to make this demonstration as real as possible. So yeah, that is both the update and, obviously, the highlights: the focus for the TSC this year is the roadshow and interop.
C
All right, thank you. That is the end of our formal agenda of General Assembly updates. I just want to open the floor and see: are there any other updates anyone wants to give, or anything you've been working on recently, are learning about, or are stuck on?
E
Yeah, lots going on; it's been a busy past few months. We recently got back from ZK Summit, and I think we've mentioned for a while now that one of our ambitions was building ZK state channels. We've been talking about it for the past year, I think.
E
All it ended up taking was two sleepless nights in Portugal, and we got them built with Plonky2, which was super exciting to get done. But we realized when we were there that not a lot of people are using Plonky2 right now, and there's a lot of Halo2, which we kind of dove into before, but there's a lot to digest there. So we're taking a look at that again, more so at accumulation and the KZG implementation, which we hadn't really dived into.
E
We actually just got off a call with Aztec this morning, and we're in talks right now to bring Halo2 into Noir, which would be super exciting.
E
I think we mentioned before on a call that the way Noir works is that they have an abstract representation called ACIR, so you can take different proving systems and plug them into the back end. I think most people here are familiar with gnark, however you pronounce it. So it sounds like that might be a go, but we're still talking to them right now. That's kind of a short summary of what's been going on.
H
It's an exciting time. Oh, I've got to adjust a little bit here; my camera keeps getting knocked around by this cord over here. But there are tons of exciting things going on around Provide. Folks may be familiar with Eco; that's been catching on really well. Sustainability is a really great way to engage with companies.
H
It's a way to engage them about their business transformation, because what I always encourage folks to think about, relative to Baseline and to blockchain in general, is these enterprise customers and what their business transformation goals are. They are getting bombarded with not only blockchain and Baseline things but AI things, all while just running their core business.
H
So when you think about how to communicate with them, think about what their priorities are; sustainability is actually at the top. There are lots of exciting things still to announce or share in the future, but we're active with the interop discussion here as well. The bank master data synchronization is something that we've prototyped successfully with Shuttle and submitted to hackathons and other events, so we're excited to see that advance through interop, and to have more discussions as well.
H
How does that fit into more on-chain finance? Because once you have these cryptographic assurances around the correct payment address to send to a customer or vendor, you've really taken a step towards true on-chain settlements, which I think should interest everyone here on this call and in this space.
H
How do we move from EDI-centric payment rails to more digital ones? And speaking of EDI, we're harping on this message of ZK EDI: it's not necessarily a rip-and-replace of EDI or IDocs, but rather, how do you get that assurance outside the black box? So once you send the EDI document, you're doing more than just confirmations; you're getting those assurances as well.
H
There's really tremendous potential there overall, so more news to come. We're going to do another Provide webinar here in May, and we'll also be doing an event in partnership with Xenerjex, a partner of ours based in the Australia-New Zealand region. They're very excited about Baseline as well, and about getting it into the hands of more SAP users across the ANZ region. So that's all from me; thanks for letting me have an update here.
G
Oh, they've been doing that for years already, and it's widely used. Watson was the first one, and then it turned out that did not work so well, but for certain things it is significantly more advanced. In fact, there are a couple of research papers out showing that large language models can actually detect certain types of cancer significantly earlier, based on blood tests, than doctors can.
G
That's simply because they can compare a much larger data set much more quickly. So it's not super magic; it's just that a human being has a limited amount of data that it can process and retain at any point in time, and that includes patterns. So it's not surprising. However, every diagnosis produced by these types of large language models (not ChatGPT in particular; any large language model) needs to be verified.
G
Verified, right. It should be used as a diagnostic assistant; it's not at the point where you can delegate full responsibility for a diagnosis to those models.
G
This type of pattern recognition is something these models are much more suitable for than human beings, at least when you need a lot of data in order to make accurate predictions. It's a great tool especially for younger healthcare providers who haven't had decades of experience to build up those pattern-matching capabilities.
D
And I think they miss a lot of the tiny details and such, because there are so many different sources of data that are relied upon within the clinical setting. Forget the research for a second; even just submitting a claim involves whoever is within the revenue cycle of the provider's office staff.
G
Oh yeah, that's a given when it comes to claims and such; I was merely talking about diagnostics. But speaking of claims: at most health insurance providers, most claims are never seen by a human being.
G
Claims are reviewed by machine learning. Every single claim is reviewed by machine learning algorithms, and I think over 80% are fully automatically adjudicated, including a lot of very questionable ones where they're basically saying:
G
we're just denying, unless people scream and scream and then come with a lawyer; only then will we do anything about it. It's basically the same reasoning the IRS had when they audited primarily low-income families.
G
Same thing: low income means you don't have the knowledge and the resources to properly defend yourself.
G
It takes a thousand times more resources to do an audit on the people who should be audited compared to low-income filers. Audits of low-income filers are simple; one person can do multiple ones. If you want to audit a billionaire, you need between 50 and 100 people.
G
Which would be a much more effective deterrent against employing these types of schemes. Anyway, I'm just saying that AI models are also utilized in fraud detection for taxes. You can easily tell, for example, based on the structure of the numbers, quite literally, whether books are cooked, based on what the individual digits are.
G
You don't even need that much sophistication: you can just tell if numbers are too round, and so on; there's a book that has been written about that. So that is relatively easy to spot based on forensic accounting practices anyway. Irrespective of that, the problematic thing is that there is a lack of a regulatory framework for applying AI models.
D
And a lot of people can't really get that there's a difference between artificial intelligence and machine learning. Everybody asks what the difference is, and I say: well, if it's written in Python, it's machine learning; if it's written in PowerPoint, it's AI.
G
Yeah. Machine learning is typically one algorithm; large language models employ on the order of a dozen different algorithms. With machine learning you'll often do just one, like a random forest or something like that, but with large language models you in fact have about a dozen models layered on top of each other.
D
Well, and the big thing is that the output you're going to get from AI or ML, either one, is only as good as the input that you have. And the big problem that I see everywhere is that you've got data science teams, even within a large company, saying: we don't have the entire data set; we don't have all the pieces of the puzzle.
D
We have missing elements, and we can't run our engine until we get those. And we didn't realize three months ago that we needed that to advance.
D
So now we've got to spend another month and a half trying to find that additional missing data that they need. And it's kind of interesting; you're seeing some exchanges, or at least some ideas coming out, saying: hey, let's match these people together, those that are missing certain parts of the data set, because it's very uncommon that somebody has everything they need to get that high-confidence output from AI/ML.
G
Right, and you can't share the data, so that's where ZKML comes in again. It's a multi-party coordination, which means trust under zero knowledge becomes necessary. So now we're talking ZKML and ZKVM; that's an active area of research and development.
G
It is crucial, especially as you push things out toward the edge, and the more these models are being delegated to in terms of making decisions.
G
Then the sourcing of the data, and debiasing it, becomes just as important.
H
And the whole hook of the show was that at the beginning of every episode, all they got was a name. They didn't get any other context about this person; it was just: okay, you've got to figure out what's going on with this person. It didn't say anything else about them, but the machine, this AI, was nudging them towards: okay, you've got to figure out what's going on with this person at a given time.
H
I think that show was weirdly prophetic, because it tied into all these crazy things that could be guardrails for AI, or things AI could do for itself; towards the end of the show it's moving its own data center. That was crazy stuff. Fun show, highly recommended by everyone on Baseline.
G
You will be interacting with agents, so the key question is: as you are starting to interact with different AI agents, what do the AI agents have to prove about themselves? Because they're going to interact.
G
So that becomes really important as AI matures.
H
It has to have a way of parsing out what's just garbage data, or training-model data that's meant to throw it off, right? Because, as is often discussed, if you give ChatGPT a bunch of garbage, fake news, it's going to spew that right back at you. So it's not that AI is always correct; it's only as good as the training data that's there.
H
So if you can have verified, axiomatic-level data, and reporting that goes with the information, that's cryptographically signed by certain parties, then you're on to something better. It's not going to be just a garbage-in, garbage-out type of scenario with the information they give you.
G
You need provenance for your data supply chain, as with pretty much everything, and that comes back to multi-party automation under zero trust and under zero knowledge. Again: you can't know, but you need to prove, and there is no trust. Trust needs to be re-established every single time, because you could have been compromised since the last time we interacted. Wait, that was just 100 milliseconds ago? I don't know!
G
This starts to have literal legal implications, significant legal implications, because the standard in a lot of court systems is that you get off if there's reasonable doubt. If you can, through fake information and fake proofs, generate reasonable doubt, and no one can really verify anything, then your entire legal system is de facto imploding, because there's no way you can establish anything.
G
There are building blocks that are simply missing in order to handle this, and we're building those.
H
Well, a really cool thing that I read this week: there was a stablecoin proposal that went recently into Congress for financial services, and interop is actually one of their requirements. It says, basically: hey, we know there are going to be all of these different networks in some manner, but we need some level of interoperability, to have assurances across each one if you're going to use different things all over the place. So, you know, I think about interop.
H
We think about interop in the context of just BPI instances. But what is a BPI instance? It could be two runtimes of API frameworks, but it could also be different chains. We think of BPI interop as between, say, BRI-1 and BRI-3, but in reality it's not only that; it's anywhere zero knowledge is applied. We discussed that in depth.
H
You know, a zero-knowledge proof should always be verifiable in any context where you have the same workgroup, the same information, the same, like, credentials involved, right? So I may have butchered some of that on the standards there, but I think the idea stays the same: it's like, hey, look, interop is actually an important thing across systems.
G
I
Right, not only do, you know, the validators that are facilitating that exchange need a money transmitter license, right, but you need to apply, for example, the travel rule, right? You need to apply OFAC lists, right? It's like, you just can't just... and this must be done in zero knowledge, because you are operating on public networks; even if they're permissioned, they're going to be public, right.
I
You know, on each network, because the trust assumptions again vary between those networks. So every time you interact, you need to again establish trust. So again, there is trust under zero knowledge, right, again between multiple parties, because there are multiple parties: even if you send yourself money, there's going to be the regulator and...
H
Well, I mean, on the other side too, at a practical business operations level, like my SAP-level view, I think of, you know, clearing payments. You know, you want to know the actual status of your payment, or ones that you're receiving. You know, typically you work that through EDI, you work that through Swift. There's a really compelling way to apply Baseline, you know, in the context of Swift and invoicing and payments, B2B-wise.
H
So if we could ever find one of those guys... I think Chainlink was trying to do something with Swift a while ago.
H
Yeah, CCIP, but I think there's a missing piece of the puzzle there, of like, you actually have to do some other assurances with payments regarding, you know, what purchase order or invoice is related to that. That's not captured by CCIP or something else, so that's what we need. They... call me, call me, Sergey: I have ideas.
G
How to prove that two data sets, you know, in different systems are the same — provably the same, right, verifiably the same — and that's actually without exchanging any data, right? So that is actually quite fascinating, because, you know, as you pointed out, Ryan, it's like, part of payment runs.
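The idea above — checking that two data sets in different systems match without exchanging the records themselves — can be sketched with commitments: each system canonicalizes its records and publishes only a digest, and the parties compare digests. This is a deliberately simplified standard-library sketch; a real zero-knowledge equality proof gives much stronger guarantees (it hides the data even from a brute-force guesser and can prove partial relations), and the record fields here are invented for illustration.

```python
import hashlib
import json

def fingerprint(dataset: list) -> str:
    """Digest of a dataset, independent of record order and key order."""
    # Canonicalize each record, then sort, so both systems hash the
    # same bytes regardless of how their databases store the rows.
    canonical = json.dumps(
        sorted(json.dumps(r, sort_keys=True) for r in dataset)
    ).encode()
    return hashlib.sha256(canonical).hexdigest()

system_a = [{"po": "PO-7", "amount": 100}, {"po": "PO-8", "amount": 250}]
system_b = [{"amount": 250, "po": "PO-8"}, {"po": "PO-7", "amount": 100}]

# The parties exchange 32-byte digests, never the records themselves.
assert fingerprint(system_a) == fingerprint(system_b)

system_b[0]["amount"] = 999  # one record drifts out of sync
assert fingerprint(system_a) != fingerprint(system_b)
```

The design point: consistency across counterparties is established by comparing small commitments, which is exactly the gap in a payment run where two ERPs each hold their own copy of the same invoice.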
H
Yeah, like, that's the deep dark secret behind B2B payments, because it's not as nice as Stripe or Cash App. You know what happens is, you know, businesses produce a file on a daily basis of all the vendors they pay, and they put it into one file that has the business name,
H
their bank account numbers, what amounts they're being paid. And there's a lot of legacy access controls on that, right, to make sure that never falls into the wrong hands, but it's still there, and it's subject to being manipulated. Because, like, when we sell this business case for the interop working group around bank master data, we're talking about the scenario where accounts payable departments get phone calls or emails saying, hey, you should change this bank account number from 123XYZ to this other thing.
H
Well, that's a well-known tactic by, yeah, by social engineers and hackers to actually steal payments, right? Like, every accounts payable department has a story, or a nightmare story, of a payment that was stolen that way, or was attempted to be stolen that way, right.
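The bank-master-data fraud scenario above suggests a simple defense: accounts payable only accepts a bank-detail change if it carries a valid tag under a key enrolled with the vendor out of band — a phone call or spoofed email carries no such tag. This is a hypothetical sketch: it uses a shared-secret MAC from the standard library where a production system would use public-key signatures and verifiable credentials, and the vendor IDs, account strings, and function names are all invented for the example.

```python
import hmac
import hashlib

# Assumption: keys registered with each vendor at onboarding,
# standing in for real enrolled credentials.
VENDOR_KEYS = {"vendor-42": b"key-registered-at-onboarding"}

def request_tag(vendor_id: str, new_account: str) -> str:
    """Tag a bank-detail change request under the vendor's enrolled key."""
    key = VENDOR_KEYS[vendor_id]
    msg = f"{vendor_id}:{new_account}".encode()
    return hmac.new(key, msg, hashlib.sha256).hexdigest()

def accept_change(vendor_id: str, new_account: str, tag: str) -> bool:
    """Accounts payable applies the change only if the tag verifies."""
    return hmac.compare_digest(request_tag(vendor_id, new_account), tag)

legit_tag = request_tag("vendor-42", "DE89-3704-0044")
assert accept_change("vendor-42", "DE89-3704-0044", legit_tag)

# A social engineer's email names a different account but cannot
# produce a tag under the enrolled key:
assert not accept_change("vendor-42", "ATTACKER-ACCT", legit_tag)
```

The tag binds the vendor identity to the specific new account number, so replaying a legitimate request against a different account also fails.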
H
H
I
C
We are, but thank you for the very great conversation. This was a good General Assembly. We will see everyone next month, but we will also see you next week, where we will host zkPass, which focuses on two topics that are relevant to Baseline: ZK and identity. So do a little reading up on them, and they'll walk us through what they do, and we'll identify any relevance for the Baseline protocol next week. Thank you all.