A: Okay, so we're going to dive into Alice now. If you have questions or comments along the way, let's keep this stream focused on Alice, and then we can sync on the rest of the stuff in the weekly next week. Where we're at right now: within operations peer DID, we created a new directory, similar to what the model tutorial shows. Then we said: well, we're thinking of making some more lightweight packaging interfaces, so that maybe we could deploy just a file, build the package off that file automatically, and maybe commit that to git, while still supporting loading from git directories.
B: Just one thing: I'm looking at the stream right now and it kind of looks a bit off, especially the video layout.
A: I made them transparent so that you could see through them, and I made it black and white because it would be more contrasting, so you could actually see better over the...
A: All right, there we go, and we did end up with a new video. Then let's see, did we or did we not... okay, so we're live now, so we did lose the last part of that. Okay, damn, that's funky, OBS. Okay, so where are we? So we grabbed from the test vectors of the peer DID, and we're trying to figure out how we're going to...
A: We want to confirm... so we started with basically this hypothesis: that the peer DID spec, and the working groups around it, including the distributed web node stuff, could offer us an opportunity to use it as a data encapsulation, to help us communicate across different protocols. So we're trying to figure out right now: if we were to parse this thing, if we were the shim and we see this blob come in, how are we going to know that it's a DID? So first off, we're going to validate the schema. So let's just write down what the shim does.
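The shim's first check, "how are we going to know that it's a DID", could be sketched like this. This is a minimal, stdlib-only illustration, not the spec's full ABNF for peer DIDs; the pattern is a loose approximation.

```python
import re

# Rough shape of a peer DID: "did:peer:" followed by a numalgo digit
# (0, 1, or 2) and an encoded body. Simplified illustration only; the
# real spec defines a stricter grammar for each numalgo.
PEER_DID_PATTERN = re.compile(r"^did:peer:[012]z?[a-zA-Z0-9.\-_:]+$")

def shim_is_peer_did(blob: str) -> bool:
    """Return True if the incoming blob looks like a peer DID."""
    return bool(PEER_DID_PATTERN.match(blob))

print(shim_is_peer_did("did:peer:0z6MkqRYqQiSgvZQdnBytw86Qbs2ZWUkGv22od935YF4s8M7V"))
print(shim_is_peer_did('{"some": "json"}'))
```

A blob that fails this check would fall through to whatever other format detection the shim supports.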
A: And part of the reason why we're recording all this is so that, as we go through and do it and explain, we can have Alice come back through it later and help us make sure that we captured everything.
A
Okay,
got
that
one
yeah
and
then
just
comment
in
the
thread
with
anything
like
that.
If
you
think
of
you
know
miscellaneous
things,
but
also
yes,
please
please
tell
me
as
well,
so
I
can
be
aware,
because
I
don't
see
it
so
that's
a
good,
that's
a
good
idea.
Okay,
so
so
yeah,
so
the
the
shim
layer
is
linked
in
in
from
the
manifest
discussion
or
the
manifest
definition.
A
So
this
is
the
adr
for
what
a
manifest
is
in
the
manifest
schema,
and
so
essentially
as
a
recap
here,
so
the
manifest
is
a
document
right.
So
just
just
just
like
just
like
this,
and
it
has
you
know,
we've
basically
said
that
that
it
should
have
some
properties
right.
It
should
have
the
ability
to
identify
the
format
name
and
the
format
version
and
a
place
where
we
can
go,
get
the
schema
for
that
format,
name
and
version
right.
A
So
we
can
it's
an
extensible
little
piece
of
of
code
that
can
be
vendored
into
different
places,
at
least
that's
the
current
implementation
and
here's.
What
a
json
schema
looks
like,
which,
which
validates
this
example
manifest,
and
here's
some
example
code
to
to
to
do
that
verification.
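The verification described above could look roughly like this. A sketch only: the field name (`$schema` carrying format name, version, and schema location in one URL) and the example URL are assumptions for illustration; a real implementation would fetch the schema and validate with the jsonschema package.

```python
import json

# Illustrative manifest document; the $schema URL here is made up.
manifest = json.loads("""
{
    "$schema": "https://example.com/my.format.0.0.0.schema.json",
    "include": []
}
""")

def validate_manifest(doc: dict) -> str:
    """Check the property the manifest ADR asks for and return the
    schema URL, from which format name and version can be derived."""
    if "$schema" not in doc:
        raise ValueError("manifest missing $schema identifying format/version")
    return doc["$schema"]

print(validate_manifest(manifest))
```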
A: Can't see the screen? Are you on this stream? Okay, yeah, okay, so I'm going to have to change this real quick. So let me go into the OBS profile and change it. Your stream is still running; we just paused the preview. Okay, so I just need to move these properties.
A: All right, so yeah, the manifest. Okay, so basically, the suggested process, so make sure... okay. So basically, this is just talking about writing manifests, and so basically, what you can think of is, like, this is your...
A: You know, it's almost like an operation; it's like the documentation for an operation. But it's a high-level way of describing that, so you could use it to describe anything. It's just a generic way of describing some data changing hands, and how we could apply certain properties to different data formats, in terms of having a schema and stuff, to enable discovery and next-phase parsing.
A: And next-phase parsing is what we're about to show here. So what we're going to do is have something that says: hey Alice, when you see a DID, when your top-level system context is a DID... Let's write this in the...
A: Okay, so in here there's a little thing about, you know, I think it's turtles all the way down: it's basically the same thing, all the way down. So what we're basically going to say is: there's this sort of calling convention for data flows, and the calling convention for data flows is sort of like a class instantiation with a context...
A: ...entry on the class, where the context entry on the class begins this background thread that runs, and the method calls correspond to the adding of specific inputs and then querying for specific outputs, resulting in those inputs being passed through the operations in the data flow. So this is going to allow us to write the data flow as class stuff.
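One way to read the calling convention just described, as a toy sketch. All names here are hypothetical, not the project's actual API: instantiation plays the role of the context entry and starts a background thread; a method call adds an input; querying an output drains the flow.

```python
import queue
import threading

class DataFlow:
    """Toy sketch: instantiation-as-context-entry starts a background
    thread; method calls add inputs; results are queried as outputs."""

    def __init__(self):
        self.inputs: "queue.Queue" = queue.Queue()
        self.outputs = {}
        self.done = threading.Event()
        # The "context entry" begins this background thread.
        self.worker = threading.Thread(target=self._run, daemon=True)
        self.worker.start()

    def _run(self):
        # Stand-in for running the flow's operations over each input.
        while not self.done.is_set() or not self.inputs.empty():
            try:
                name, value = self.inputs.get(timeout=0.01)
            except queue.Empty:
                continue
            self.outputs[name] = value * 2  # placeholder operation

    def add_input(self, name, value):
        self.inputs.put((name, value))

    def query_output(self, name):
        # Querying an output waits for the background thread to finish.
        self.done.set()
        self.worker.join()
        return self.outputs[name]

flow = DataFlow()
flow.add_input("x", 21)
print(flow.query_output("x"))
```

The placeholder `value * 2` operation stands in for whatever operations the flow actually links together.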
A
This
is
going
to
be
the
foundations
upon
which
we
built
that,
and
it's
also
going
to
be
just
like
a
generic
paradigm
that
we're
going
to
follow.
That's
going
to
allow
us
to
sort
of
like
create
this
very
seamless
experience
between
what
is
a
function
as
in
like
what
is
an
operation
and
then
what
is
like
an
interface
and
and
sort
of
mix
and
match
functions
to
define
interfaces
based
on
the
data
model
of
the
data
that
we're
working
with
right.
A
So,
basically
like
you're,
creating
classes
which
are,
you
know
inherently
connected
to
the
intent
behind
the
usage
of
the
data
within
the
data
model
right
of
your
application,
and
so
that
is
where
you
know
we're
gonna
get
into
a
lot
of
overlays
and
the
overlay
as
as
a
concept
when
you're
like
with
these
different
developer
branches,
as
as,
if
like
the
overlay
is,
is,
is
the
code
that
happened
and
commits
in
a
fork.
A
And
so
you
know,
then
then
we're
basically
gonna
apply
the
overlays
dependent
on
the
context,
and
you
know,
then
you
know
form,
because
we
have
this.
This
thing
where
we're
we're
running
the
calling
api
is
such
that
you
start
of
this
background
thread.
Basically,
there's
this.
A
These
background
says
threats
that
sit
around
and
they
think
and
they
think
of
new
system
context
or
new,
or
basically
new
data
flows
to
execute
right
or
new
inputs
to
add
or
new
context
to
execute,
and
so
those
what
whatever
and
those
those
those
are
are
called
in.
A
The
discussions
thread
strategic
plans,
and
so
you
apply
these
overlays
these
these
strategic
plans
as
overlays,
which
are
effectively
just
output
operations
right
and
which
are
effectively
just
your
methods
in
your
data
flow
as
class
stuff,
which
will
allow
you
to
say
basically
hey.
You
know,
I'm
looking
at
a
did
when
I'm
looking
at
a
did,
and
I'm
in
the
context
like
the
parent-parent
parent
context.
Maybe
is
the
getter
chat,
then
you
know
apply
these
restrictions
right.
You
know
make
sure
to
check
where
for
every
action
that
we
do
right.
A: So it should allow us to really create these little... you know, basically, hopefully everything will just be single functions, or just functions in files. So maybe you just have a bunch of regular Python functions, just standard Python functions, in a file, and you can use them however you want, mix and match with other functions across other files, and the docs are just built into the docstring. So that's our goal here.
A: So: when your top-level system context is looking at a DID and is asked to run within it, it should have an overlaid data flow which understands the DID format and what it's looking at. And so, basically, does this mean that we should have strategic plans in place which take any input matching specific definitions and attempt to convert it to a plugin instance?
A: So this does mean, this means: we should have strategic plans available; a strategic plan in place which calls to the shim operation (make it an operation) and takes whatever can be converted...
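A minimal sketch of the strategic plan just described: watch input values, match the ones that look like peer DIDs, and convert each match into a plugin instance. The class and function names here are hypothetical placeholders, not the project's actual API.

```python
class PeerDIDPlugin:
    """Hypothetical plugin instance created from a matched peer DID input."""

    def __init__(self, did: str):
        self.did = did

def strategic_plan_peer_did(inputs):
    """Yield a plugin instance for every input value that matches the
    peer DID definition (here: a string with the did:peer: prefix)."""
    for value in inputs:
        if isinstance(value, str) and value.startswith("did:peer:"):
            yield PeerDIDPlugin(value)

plugins = list(strategic_plan_peer_did(["not-a-did", "did:peer:2abc"]))
print(len(plugins), plugins[0].did)
```

In the real system the match would presumably call down into the shim operation rather than a bare prefix check.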
A: ...traverse the... yeah, let's just say this: to a plugin instance. Okay, so this is basically shared config. Do you remember the shared config discussion? Were you around for that? Okay.
A
Yeah,
so
I
think,
and
the
reason
why
I
kind
of
like
stop
ship
on
everything
else,
is
that
I
think
that
if
we
figure
this
some,
if
we
really
put
the
pedal
to
the
metal
on
figuring
some
of
this
stuff
out,
it
simplifies
a
lot
of
the
gsoc
project
work
and
it
simplifies
a
lot
of
just
all
of
the
development
work
in
general
because,
hopefully
everything
becomes
really.
You
know
just
data
flows
and
functions
and
and
data
flows
you
know,
hopefully
being
defined
via
some
ergonomic
api
that
we
figure
out.
A
I
think
you
know,
maybe
just
even
as
I
think
we
can.
I
think
your
class-based
flow
definition
thing
is
going
to
be
probably
very
close
to,
if
not
exactly
what
we
do.
So
you
know,
whenever
you
get
a
chance,
I'm
I'm
really
eager
to
see
that
okay,
so.
B: The way I think of this one is: it is a good idea, everything's there, but one thing I see is that we need to do a lot of polishing for it to be useful, and that is going to take a lot of time. I don't think we'd be able to do it in the GSoC timeline.
A: The Alice project, yes, is a year-long project. So basically, what we're going to do is identify the areas where the GSoC projects support this effort, and then just acknowledge that and include it in the tutorials where appropriate, because the tutorials are basically going to be "how we wrote this thing". And ideally, what we can do is get the base layer down within this first month, May: get the base layer down to where we can start writing.
A: Everything is just operations, right? And then, if we can do that, our tutorial series really just becomes the series of functions that we write to enable different use cases in our bot. And so then we'll just map in the work that is being done, and several of those things will require, you know, different ML models; so for the projects that were proposed around that, we'll just say: okay, well, what does our timeline look like?
A: Okay, well, here's how those GSoC projects fill the activities that we have to do in the timelines. So therefore, you know, we're going to cover them in the tutorials that run from this date to this date, which is when GSoC is running. So we're just going to do some planning and then map that out, because everything that we're doing relates to this in some way. And so it'll just be a matter of saying: okay, well, what did we end up going with, and where does that fit into what gets written and when it gets written, just to create cohesive tutorials. And we may source things written at different times, whenever we finish up different pieces of what we need. Cool.
A: So, data flow from peer DID... so this service stuff. So I want to understand this distributed web node spec here. So let me show this to YouTube, or to Twitter.
A
Oh
yeah,
okay,
let's
see
so,
can
you
paste
it
in
in
the
channel.
A: Great, this way we have the thread for Twitter.
A
Maybe
do
I
retweet
this
to
bring
it
up.
I
think
that'll
bring
it
up.
Okay,
so,
let's
see
so
we
need
to
and
and
we're
you
know
we're
inspecting
this
pdid
spec
stuff.
A
Another
thing
that
I
want
to
have
alice
do
is
comb
through
the
video
recordings
that
we
do
and
grab
every
single
url
and
then
add
in
what
urls
and
what
files
were
open
when
we
were
discussing
what
concepts
right
so
that,
as
you're
going
through
the
report
later
we'll
be
able
to
tie
in
the
git
history
to
the
stuff
we
did
in
the
meetings
right.
So
we
have
complete
visibility
into
any
context
around
any
discussion
with
anything
it'll.
A
Be
oh
it'll
be
great,
and
that's
why
there
was
some
comments
in
there
about
like
this
fully
connected
development
model
right
like
where
we're
capturing
all
your
activities,
to
just
like
make
things
as
easy
as
possible
and
just
working
on
solving
the
problems
rather
than
doing
all
the
the
overhead.
So
we
need
to
understand
so.
Here's
the
pdid
spec.
A
So
bob
enforces
ounces
up
so
key
management
using
verifiable
credentials
using
the
user
verifiable
with
the
connection
place.
One
credit
request
from
the
other
friday
verified
post
creation
of
third
party
claims.
Okay
provides
the
ids.
These
are
communicating
with
people
or
other
things,
conflict
resolution.
A
So
multiplex
authenticated
encryption,
single
signature,
multicast
ease
of
reference,
okay,
spot
flat.
I.
A
We
do
this
is
just
like
all
the
stuff
that
we
spend
so
much
time
doing
that
just
slows
us
down,
and
I
think
if
we
just
connected
a
few
dots
we'd
probably
be
I
mean
we
would
save
so
much
time
right
so
yeah
throw
all
your
ideas
about.
You
know
the
different
time
sinks.
You
know
the
different
little
tiny
things
that
just
like
don't
fit
with
exactly
in
your
workflow
right
and
we'll
just
throw
them
in
the
thread
and
we'll
we'll
put
them
on
the
list
of
things
that
we'll
write
in
the
tutorials
right.
A
So
and
hopefully
this
can
help
us
across
projects
right,
because
these
are
these
overlays
should
be
configurable
enough,
using
the
machine
learning
models
right
by
training
models
and
having
these
overlays,
which
allow
us
to
map.
You
know,
different
stats
in
different
projects
to
shared
metrics
across
the
models
across
projects
should
allow
us
to
have
these
little
assistants
that
help
us
as
we
go
from
project
to
project
right.
A
So
the
work
that
we
do
here
is
not
just
for
dfml,
but
it's
you
can
take
it
with
you
and
it's
your
own
personal
little
bot
right
that
you
train
to
your
liking,
because
it
has
all
this.
You
know
history
to
it,
and
so
that's
why
I
was
just
like
you
know.
A
I
feel
like
I
wanted
to
skip
this
part
about
the
did
stuff,
because
it's
a
lot
of
diving
through
specs
and
things,
but
I
was
like
you
know
if
we
just
put
it
on
top
of
this
now
we
will,
like
you
know,
we'll,
be
kicking
ourselves
later.
If
we're
trying
to
migrate
everybody's
different
things
out
of
all
their
different
data
sources
into
something
shared
to
communicate,
we'll
just
start
here
right
and
then
we
can
really
easily.
Hopefully,
if
this
is
true,
you
know
you
communicate
our
data
and
our
our
execution
flows.
A: You can think of it like TCP plus GPG, like a cross between TCP and GPG: something that we can use to encapsulate our data and build these linked references. So now what I'm trying to understand is, you know, how do we effectively communicate?
A
You
know
that
that
we
should
you
know
that,
like
how
do
we
send
and
receive
right
and
how
do
we
send
and
receive
how
what
do
we?
What
information
do
we
add
right
or
yeah?
Basically,
how
do
we
send
and
receive
right,
and
then,
where
does
our
information
go
and
our
information
is?
Essentially,
you
know
our
system
context
right,
and
so
what
I'm
seeing
right
now
is
that
you
know
my
guess
is
like
okay.
So
let
me
just
go
read
it.
I
don't
wanna.
This
is
the
thing
I
kept
guessing
yesterday.
A
Okay,
so
I'm
guessing
that
what
we
could
do
is:
okay!
No,
that's
right!
We're
going
to
read:
okay,
pudid
python!
Okay!
So
if
we
go
to
this
docs
okay,
they
do
have
a
demo
demo.
A: The previous one only had... okay. So the other thing that we're wondering is how we actually build a key dynamically. And so we were looking at this JWK library, this JSON Web Key, because it looks like that might be part of it, because right now all of these examples use static keys. And we want to confirm that we can actually use this DID stuff before we start doing anything else: confirm it by successfully encoding something that allows us to execute a data flow and retrieve some information based off of it.
A: Then we will determine that we're going to use this, and we'll go forward with that as an assumption. If not, we're going to raise it as, like, a pretty big risk: that we really need to figure out what we're doing for communication in our distributed setting, and we'll just go forward...
A
Knowing
that
we're
going
to
run
into
a
lot
of
problems
for
the
first
part,
sharing
data-
probably
right
and
archiving
data
in
in
in
different
ways,
instead
of
in
a
uniform
way
and
the
yeah,
so
so
we
want
to
make
sure
that
we
can
generate
keys
and
actually
do
this
stuff
on
the
on
the
fly
and
use
it
to
execute
so
encryption
keys,
verified
material
agreement,
verified
material
agreement
key
okay.
So
this
is
a
specific
type
of
key
assigning
keys.
Encryption
keys
service
did
com
messaging,
okay,
so
did
pure
from
json
okay.
A: And I keep checking the discussions thread... oh, hey, look at that. It did something; it didn't blow up. That's great! So now we just need to figure out how to add keys, so real keys get generated. So that's fantastic. So there are some discussion threads on the Python discussions that happen every so often that go into, like, well...
A
Why
can't
we
just
like
have
something
that
allows
me
to
map
my
import
statement
to
what
package
that
the
package
is
in
pi
pie
right
and
it
looks
like
there's
several
people's
approaches,
but
nobody's
standardized
on
it
yet
so
it
would
be
interesting
if
it
could
happen
because
then
you
wouldn't
need
requirements
files
you
could
just
sort
of
do
them
in
line.
That
would
be
really
nice.
I
guess
that's
kind
of
what
we're
going
to
end
up
doing
here.
A: Yeah, but this is what we can use the CI for; this is what we can use this cross-validation for. That cross-validation of commits: you can cross-validate the different commits that you're applying, the different versions that you're trying to run the tests with. So that's how you sort of solve that one. Okay. So what is this signing keys value?
A: Okay, so: create peer DID numalgo 2, create peer DID numalgo 0 and numalgo 1. Let's check the open pull requests: configure renovate, compression, release docs.
A: Okay, so this is something else. Okay, so what is this peer DID? It's light on documentation, obviously; well, let's see, maybe I'm just not looking in the right place. Docs, testing... yeah, we're a little light on docs here; we're trying to figure out how to use it. So: example. This is the one that we're looking at, sample DID documents.
A: Here: layers of support. Different software needs to support different layers. So layer 3: dynamic (accept dynamic, give dynamic); static access. Layer 1: must understand, at a surface level, what a peer DID is and how it works. So you must be able to tell whether a string is a peer DID or not; so this is like part of that shim that we talked about. It must correctly compare peer DIDs when sorting or testing for equality. Okay, so that's beyond the shim, taking into account...
A
This
would
be
something
in
the
next
phase,
we're
getting
into
account
the
case,
sensitivity,
rules
and
associated
with
how
the
numeric
basis
is
encoded.
If
relevant,
implementation
should
display
hyphenate
or
abbreviate
purity.
Ids
correctly
see
recognizing
that
handling
pure
data
is,
might
be
appropriate.
Okay,
so
layer,
2a,
except
static,
pid
from
others,
flipper
implementation
offering
layer,
2a
support,
builds
up
on
layer,
1
support.
A: So that's what I recommend. I try to keep it one-to-one, one video per day, but sometimes it just, you know, lands wherever; putting it in there is better than not putting it in there. So: to perform a DID exchange and additional protocols, as long as other parties do not attempt to update their DID docs. This static, no-update consistency drastically simplifies the implementation, because support for backing storage, deltas, the sync connection protocol, or remote DID resolution protocols is not required for layer 2a.
A
The
implementation
must
recognize
pure
dids,
basically,
layer
1..
It
must
store
prdid
docs.
It
must
look
up
their
cid
docs
as
a
form
of
resolution.
Implementation
may
engage
in
did
calm
based
protocols,
and
so
this
one
is
working
off
of
did
com
v2.
If
so,
it
should
handle
an
abandoned,
connect,
announce
message,
gracefully
deleting
the
pdid
doc
from
its
cache.
The
implementation
supports
did
comment,
must
return
a
report
problem
with
code
equals
unsupported
protocol
for
remote
party
attempts
use
dynamic,
pdid
protocols.
A
The
implementation
should
report
that
it
supports
the
did
exchange
protocol
if
it
receives
and
supports
it
discovers
features.
Query
message:
this
might
be
the
beginning
level
of
support
for
software.
That's
already
supporting
other
did
methods
that
wants
meaningful
interoperability
as
quick
and
cheaply
as
possible.
However,
it's
not
recommended
as
a
permanent
goal,
because
it
places
limits
on
the
behaviors
of
other
period.
A
Did
users
upgrading
to
layer
3
supported
strongly
preferred
expected
effort
a
few
hours
of
coder
time
if
a
code
base
already
has
some
did
or
a
couple
days
if
effort
starting
from
scratch
layer
2b
give
static
pure
dids
to
ever
others.
An
implementation
of
layer,
2b
supports,
includes
layer,
1
support.
It
also
creates
pd
ideas
and
gives
them
to
other
parties
as
a
basis
for
interactions,
layer,
2a
and
layer.
2B
are
not
hierarchical.
Cocoa,
either
or
both
may
be
chosen,
and
the
effort
to
implement
is
somewhat
independent.
A
Resolving
prdids
against
a
cache
version
of
a
pdid
dock
is
also
now
required.
However,
layer
2b
compliant
implementations
must
be
capable
of
generating
a
genesis
version
of
their
own
did.
Dock
and
calculating
the
numeric
basics
then
did
value
that
goes
with
it.
If
an
implementation
should
supports
the
id
com,
it
should
receive
and
may
emit,
abandon,
connect,
announce
message,
gracefully.
It
should
also
return
a
problem
report
with
code
equals
equals
unsupported
protocol.
A
If
remote
party
attempts
to
use
dynamic
did
pure
protocols-
and
it
implementation
should
report
that
it
exports
the
gid
exchange
protocol,
if
it
receives
a
discover,
features,
message
query:
this
might
be
an
appropriate
level
of
support
for
software
that
wants
to
use
peer
deids,
but
doesn't
intend
to
ever
rotate
its
keys.
Okay,
so
note
key
rotation
is
an
important
security
feature.
It's
generally
a
bad
idea
to
provide
no
way
to
change.
However,
how
proof
of
control
is
provided
the
wisdom
is
of
supporting
neither
key
location
nor
a
way
to
abandon
a
connection
is
particularly
dubious.
A
However,
these
static,
only
levels
of
support
are
provided.
Anyways
becomes
because
some
connections
might
be
so
short-lived
that
security
risks
are
acceptable.
Use
good
judgment
to
do
do
we
need
to
support
expiration
of
a
con
okay
expected
effort,
a
few
hours
of
credit,
especially
if
player
2a,
okay,
so
okay,
so
this
is
interesting.
So
this
is
something
that
may
tell
us
that
you
know
these
these
layer
2b
it
sounds
like
might
be
something
that
could
be
good
for,
like
inputs
within
a
given
execution
right.
A
But
then,
as
we
look
into
you
know,
actually
communicating
out
to
you,
know
external
entities
for
storage
and
long-term
storage
right,
because
we're
basically
going
to
create
these
ad
hoc
block
chains
on
the
fly,
probably
like
in
json
files
right
that
we're
going
to
send
around
to
each
other's
computers.
A
And
so
you
know,
via
you
know,
probably
via
you
know,
maybe
we
could
even
yeah.
We
could
do
it.
You
know
any
number
of
ways
right,
but
basically
that
will
allow
us
to
communicate
the
cache
state
of
data
flows
and
then
run
data
analysis
on
each
other's.
You
know
data
sets
to
try
our
models
out
or
to
try
even
just
have
daemons
running
in
the
background
constantly
trying
different
models
right.
A
So
we
can
write
models
to
try
different
models
right
and
we
can
just
sort
of
you
know,
try
to
really
figure
out
what
are
all
the
most
interesting
ways
you
can
use
the
data,
and
so.
A
Yeah
yeah
yeah
definitely
and
the
last
couple
videos
I
think
we
started.
We
went
through
it
as
well,
but
you
know
there's
a
lot
of
those
there's
also.
I
I
have
also
been
trying
to
explain
everything.
That's
been
in
the
thread
and
there's
a
lot
in
the
thread
to
explain
so
there's
several
hours
of
explanation.
So
layer
3a
accept
dynamic,
pure
dids
from
others,
so
software
with
layers,
3a
support,
includes
layer,
2a
support
and
it's
maximally
interoperable
with
others
who
wants
to
use
pure
dids
in
rich
ways.
A
It
may
or
may
not
use
any
pure
dids
of
its
own.
Such
an
implementation
must
provide
backing
storage
to
persist,
others,
pdid
docs
and
the
deltas
for
them.
It
must
support
the
sync
connection
and
did
resolution
protocols,
but
if
layer
3b
is
supported,
it
can
choose
those
protocols
only
for
the
dynamic
data
of
others,
meaning
the
sole
role
it
has
to
support.
In
query
connection
state
is
responder.
A
The
implementations
should
report
that
it
supports
these
protocols
if
it
receives
a
relevant
discover,
features,
query
message
and
we
had
also
referenced
the
dhcp
rfc
when
we
were
doing
some
of
the
background
context,
videos
and
and
there's
the
dhcp
rfc.
I
find
to
be
very
clear
and
readable,
and
so
interpreting
specs
like
this
for
anybody
who
you
know,
goes
and
wants
to
interpret
specs
and
stuff
like
this.
A
You
know
I
I
recommend
sort
of,
maybe
maybe
looking
at
that
dhcp
rfc
and
kind
of
understanding
how
they
map
out
because
you'll
see
you
know
they
talk
about
the
different
pieces
of
data
within
the
data
model
or
within
the
data
structure
and
how
that
maps
to
different
encodings
and
which
is
very
similar
to
these.
You
know
discovers
message
things
so,
and
these
are
adrs
right
and
so
these
adrs
are
perfect
because
they
are
also
manifest,
and
they
also
all
already
say
alice
in
them.
A
So
that's
great,
so
yeah
so
basically
yeah
see
because
these
people,
they
they
they're
thinking,
we're
all
thinking
in
the
same
direction
right
so
this
this
adr,
you
know
describes
it.
Has
you
know
the
message?
Format
right
and
then
you
know
kind
of
some
response,
and
I
added
that
to
the
today
to
the
comment
on
the
manifest
stuff
that
we
probably
need
to
to
look
at
documenting
responses,
as
their
own
manifests
saying
that
they
could
be
created
either
a
side
effect
or
like
an
error
or
something
else
so
abandon
connection.
A: This is stuff that we could use for our shim layer, where we're basically going to grab some of that code from the shim layer and implement it as an operation, or use it as an operation. And then that operation will be used, probably, within strategic plans: different strategic plans, each trying to guess on just the different data that they've been assigned to watch as input values, and then create instances of plugins based off that. And this is how we can do this when we instantiate our operations: we want to create operations where the config is actually dynamically discovered by the output of...
A
The
running
of
the
of
another
operation
so
on
startup,
so
sort
of
you
know,
maybe
grab
your
secrets
for
your
ci
environment
from
different
different
variables,
depending
on
which
ci
environment
you're
running
under
right,
because
they
have
different
environment
variable
names
right,
so
you
could
then,
you
know
basically
say
filter
filter
through
this
thing.
That
will
then
you
know,
you
know,
go
do
that
specific
overlay
and
then
suggest
you
know
or
add
inputs
to
the
network
or
add
context
that
go
through
those
operations
based
off
of
that
right.
A
So
basically
like,
if
you
see
something
in
here
that
says:
service
endpoint,
github.com,
endpoint
one
and
then
you
would
know
and
and
you
knew
that
you
were
operating
in
a
parent
context-
that's
treating
this
as
a
top-level
ci
job.
You
would
say:
okay,
I'm
gonna
grab
from
the
you
know
the
github
specific
environment
variables
at
least.
I
believe,
that's
kind
of
going
to
be
some
of
what
happens
here.
A: Message type name... okay, so this is just good stuff. Let's also include this link. And we can't see... I can't see your video stream. It's like...
A: Okay, here it is. Okay.
A
Okay,
so
okay,
so
we
got
our
keys
generating.
So
we
want
to
understand
what
is
this
right
and
how
do
we
add
our
own
keys
so
create
peer.
Did
no
I'll
go
to
okay
so,
and
we
were
still
going
through
this
document
to
understand
the
layers
to
understand.
You
know
what
is
numalgo
zero?
What
is
num
I'll
go
to
just
maybe
it'll
shed
some
light.
A
If
we
go
through
and
understand
what
does
this
thing
support
and
are
we
looking
at
the
right
thing
for
our
use
case
here
and
maybe
there's
another
module
that
already
is
out
there,
that
that
is,
you
know
different
than
this
package,
pure
id
pure,
did
and
might
be
layer,
3a
and
layer
3b
compatible.
Then
we
will
just
switch
to
that
right.
A
We've
got
here
by
finding
did
com
off
of
the
did
spec,
I
think,
and
so
we're
sort
of
slowly
traversing
these
specs
to
find
the
state
of
the
art
in
this
area,
so
key
management.
So
let's
try
to
find
let's
try
to
find
a
layer
three.
So
let's
understand
layer,
three
so
layer
includes
layer,
2a
so
provides
back
in
storage
persist.
Others
did
docs
and
the
deltas
for
them
must
support.
A
Sync
connection
did
resolution,
but
if
layer
3p
is
supported,
it
can
support
those
protocols
only
for
the
dynamic
data
of
others,
meaning
that
the
only
role
it
has
supporting
query
text
state
is
responder.
The
implementation
should
report
that
it
supports
these
protocols
if
irrelevant.
So.
Basically,
if
you
know
that
options
is
set,
so
this
level
of
support
for
accepting
pdids
is
recommended
for
software.
That
wants
to
offer
rich
pdid
support
to
others,
regardless
of
the
level
of
pure
did
usage,
it
intends
for
itself.
A
So
that
would
be
maybe
something
that
we
should
really
focus
on
as
a
as
a
sort
of
a
a
a
library
right
since
we're
more
of
a
library,
we
should
probably
target
you
know
usage
of
a
library
itself
that
supports
this.
So
we
can
support
this
more
wide
range
right,
transparently,
so
so
anything
less
than
this
level
of
support,
insofar
as
accepting
dids
from
others
is
concerned
in
software
that
expects
to
interact
richly
in
a
dad
landscape
will
hamper
pervasive
interoperability.
A
So
we
really
need
to
find
a
3a
or
3b
implementation,
because
we
want
to
be
a
part
of
this
ecosystem
software
with
layer.
3B
support
includes
layer,
2b
support.
It
must
also
provide
backing
storage
to
persist
its
own
pdid
docs
and
the
deltas
for
them.
It
must
support
the
same
protocols
as
to
be
a
couple
days.
The
code
base
for
work-
okay,
so,
but
with
its
own
data
management,
target,
okay
cool.
A: No worries. All right, well, it was good to talk to you. You know, please post any comments in the thread, any thoughts; I'm just trying to collect all thoughts related to this.