From YouTube: CNCF Notary Project 2020-08-31
B: Okay, cool, all right. What I have here is a basic registry setup and a basic client setup, just to demo a couple of TUF features.
B: First of all, the registry has some TUF metadata set up already, with a lot of different sample repositories just for demo purposes. Most of them are pretty empty, but they give us some stuff to work with. Then there's the top-level metadata of root, snapshot, timestamp, and targets, which we'll get more into in a minute, and then a few example targets within all those example repositories.
B
Registries,
just
just
sorry
exactly
a
few
example
artifact
files
within
all
those
example
repositories.
Just
so
we
can
show
kind
of
how
those
move
around
and
stuff
it's
pretty
bare
bones
for
the
demo
and
then
over
here
on
the
client
side
in
order
to
demonstrate
how
it
would
look
for
a
client
that
starts
out
as
an
ephemeral
client.
B: Then we make some requests over there, and then over here, as you can see, file one got moved over into the targets. One thing I want to show really fast: it downloaded all the metadata needed to fetch this, and one of the new features I wanted to show is the snapshot file and what that looks like.
B: The difference between this snapshot and the classic TUF snapshot that you may be familiar with is that classically, the TUF snapshot lists every single targets metadata file on the registry or repo, along with its version number, so that you can't roll back to different versions, because it keeps track of all those version numbers.
B
But
there
were
a
lot
of
concerns
about
things
like
private
private
repositories
on
registry
and
being
able
to
see
all
of
that
information
as
well
within
the
snapshot
file,
as
well
as
scalability
for
really
large
registries.
That
may
have
a
lot
of
different
repositories
and
a
lot
of
different
things
listed,
which
can
make
this
really
big
file.
That
would
have
to
be
downloaded
every
time
so
to
solve
that.
What
this
does
instead
is.
B: It uses a Merkle tree to provide the same guarantees of consistency across the files on the registry, so that you can't replay old files or mix and match different files, but instead of listing everything, it lists a fixed amount of information, and the length of the tree path is proportional to the log of the number of repositories.
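To make the log-length claim concrete, here is a minimal sketch (not the actual Notary v2 or TUF code; all names are illustrative) of a Merkle tree built over per-repository targets metadata, where an inclusion proof carries roughly log2(n) sibling hashes instead of the full registry listing:

```python
import hashlib
import math

def _h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root_and_proof(leaves, index):
    """Return the Merkle root over `leaves` plus the inclusion proof
    (the sibling hash at each level) for the leaf at `index`."""
    level = [_h(leaf) for leaf in leaves]
    proof = []
    while len(level) > 1:
        if len(level) % 2:                 # duplicate the last node on odd levels
            level.append(level[-1])
        proof.append(level[index ^ 1])     # sibling of the tracked node
        level = [_h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return level[0], proof

def verify(root: bytes, leaf: bytes, index: int, proof) -> bool:
    """Recompute the root from a leaf and its proof; True iff it matches."""
    node = _h(leaf)
    for sibling in proof:
        node = _h(node + sibling) if index % 2 == 0 else _h(sibling + node)
        index //= 2
    return node == root

# Eight sample per-repo targets metadata blobs: the proof is only 3 hashes long.
leaves = [f"targets/repo-{i}.json".encode() for i in range(8)]
root, proof = merkle_root_and_proof(leaves, 5)
```

With 8 repositories the proof holds log2(8) = 3 hashes, so a client checks one path rather than downloading a listing of every repository.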
B: So the only information that's leaked is the number of targets metadata files on the registry. What's not leaked is any information about what's in those files. These are all secure hashes, so you can't get any information about how they were made or what other artifacts are available on the repository.
B
Does
that
all
make
sense
so
with
how
that
how
that
kind
of
works
and
then,
as
I'll
demonstrate,
this
still
provides
the
same
protection
against
rollback
attacks.
B: So if, for example, someone were to watch this on the network and save this old Wabbit Networks metadata file, I'm just copying it into this other place. Then say we find a security patch for this Wabbit Networks image and we want to make sure it gets pushed to all our new users.
B: So over here we will update the registry.
B: This is just an example file; in real life it would be an actual executable file, the actual artifact. Just for the demo, this is a bunch of text, but we update that file so that it changes, and the hash will no longer match because it's a new file. Then we'll do this over here.
B: Okay, so we'll sign the Wabbit Networks metadata. This is saying: okay, we have this file, and this is the new one, and we verify that it's correct. This script is actually doing two steps in one, just for simplicity. First, it has the Wabbit Networks repository sign this, attesting that they tested it.
B
This
image
is
what
they
want
to
be
sharing,
and
then
it's
also
having
the
targets
metadata
on
the
registry
say
that
this
is
correct
and
I'll
show
you
in
a
minute
how
we
can
how
the
repository
can
be
independent
of
the
registry
as
well,
but
for
now
they're
they're
the
same
so
so
that's
all
that's
signed,
and
so
now
we
have
like
a
local
copy
of
this
signed
metadata
file.
That
includes
the
new
version
of
the
image.
B: The next step is to publish that, so you upload it to the registry, and as it's uploaded, it's signed by the snapshot and timestamp keys on the registry. That's an automated process on a push; it's just separated out here for the demo. So now we have this new metadata file published, and we could download the metadata. But let's say there's an attacker who wants to replay the old version.
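That push flow can be sketched roughly like this (hypothetical names, a digest standing in for a real signature; this is not the actual registry API):

```python
import hashlib
import json
import time

def fake_sign(payload: dict, key_name: str) -> dict:
    """Stand-in for a real signature: records the key name and a digest."""
    digest = hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()
    return {"signed": payload, "sig": {"key": key_name, "digest": digest}}

def push_targets(registry: dict, repo: str, targets_metadata: dict) -> None:
    """On push, store the repo's signed targets metadata, then automatically
    re-sign the snapshot and timestamp so they cover the new version."""
    registry["targets"][repo] = fake_sign(targets_metadata, f"{repo}-targets-key")
    snapshot = {name: meta["signed"]["version"]
                for name, meta in registry["targets"].items()}
    registry["snapshot"] = fake_sign(snapshot, "registry-snapshot-key")
    registry["timestamp"] = fake_sign(
        {"snapshot_digest": registry["snapshot"]["sig"]["digest"],
         "signed_at": time.time()},
        "registry-timestamp-key")

registry = {"targets": {}, "snapshot": None, "timestamp": None}
push_targets(registry, "wabbit-networks", {"version": 2, "files": ["file1.txt"]})
```

The point is the ordering: the repository signs its own targets metadata, and the snapshot and timestamp re-signing happens on the registry side as part of the push.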
B
So
so
we
copy
this
old
rabbit's
network
json
file
back
into
the
registry
as
pretending
it's
the
dl1
and
then
over
here.
B
If
the
client
tries
to
download
that
wiper's
network
file
just
like
before
it
gets
a
bad
version
number
error,
because
the
version
number
doesn't
match
the
version
number
that's
listed
in
that
target's
metadata
file
so
and
then
I'll
just
go
over
here
and
resign.
This
to
make
sure
that
the
registry
is
back
in
a
good
state.
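The check the client is making can be sketched like this (simplified; a real TUF client also verifies signatures, not just version numbers):

```python
class BadVersionNumberError(Exception):
    pass

def check_version(snapshot_versions: dict, repo: str, downloaded: dict) -> None:
    """Reject a targets metadata file whose version differs from the one the
    signed snapshot lists as current; a replayed old file fails here."""
    expected = snapshot_versions[repo]
    if downloaded["version"] != expected:
        raise BadVersionNumberError(
            f"{repo}: got version {downloaded['version']}, expected {expected}")

snapshot_versions = {"wabbit-networks": 2}
check_version(snapshot_versions, "wabbit-networks", {"version": 2})  # current: accepted

replay_accepted = True
try:
    check_version(snapshot_versions, "wabbit-networks", {"version": 1})  # replayed old file
except BadVersionNumberError:
    replay_accepted = False
```

The replayed version-1 file is rejected because the snapshot still attests that version 2 is current.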
B
Well
and
type:
it
correctly
work,
okay
and
then
okay,
so
so
so
that
that's
kind
of
the
idea
of
the
replay
attack,
and
so,
if
you
replay
the
old
target's
metadata,
then
the
image
can't
be
downloaded
by
the
client.
Okay.
So
next
I'll
show
you
this
other
new
feature
that
we
that
we
kind
of
looked
at,
which
I
think
would
be
especially
useful.
If
you
have
some
private
registry
reprimand
repositories
on
the
registry,
sorry,
I
always
get
that
confused,
so
the
so.
B
The
idea
here
is
that,
if
say,
you
are
the
runner
of
the
weapon
network
repository
and
you
have
your
your
clients
and
you
want
them
to
trust
throughout
anything
signed
by
you,
rabbits
networks,
but
you
don't
really
know
anyone
else
on
this
registry
and
what
they're
signing.
So
you
don't
necessarily
want
everything
on
the
registry
to
be
trusted
by
this
person.
B
You
just
want
the
stuff
that
you,
your
test
is,
is
good
and
valid
or
secure
or
whatever
your
your
parameters
are
so
the
way
we
support
that
is
using
this
idea
of
a
target's
map
file.
So
this
is
it's
basically
just
on
the
client
side.
What
it
does
is,
it
says:
okay,
I
trust
anything
signed
by
rabbit
networks
with
this
key
and
yeah,
so
anything
that's
assigned
by.
B
Why
does
that
work
with
this
key
is
what
I'll
trust,
and
so
it's
basically
it
kind
of
overrides
to,
in
a
certain
extent,
the
top
level
metadata
on
the
registry
itself,
because
in
some
cases
you
wanted,
you
want
to
get
all
of
your
key
information
from
the
registry
like
up
here.
What
we
were
doing
was
we
were
using
this
key,
but
we
were
getting
it
from
the
registry,
and
so
the
registries
root
metadata
was
telling
us.
This
is
the
key
that
you
should
use
to
sign
to.
B: Now they try to download something else that's available over here, and it won't be found, because the map file is blocking it. And actually, without the map file, just to prove my demo is accurate, let's see... without the map file... oh.
B
Well,
that's
weird,
but
without
the
map
file
we
would
be
able
to
download
the
the
image
one.
This
is
probably
a
problem
with
my
script,
not
with
anything
else,
because
it
might
my
demo
set
up
but
sorry,
but
they
are
able
to
download
something
from
webcams
rabbit.
B
That
downloads,
just
fine,
because
it's
available
in
this,
this
target's
map
file
I'll
look
into
that
later
there,
so
that
so
that's
kind
of
how
this
map
file
works,
so
it
it.
I
think
we
think
it
addresses
some
of
that.
The
issues
with
that
private
repository
situation
and
another
thing
I
want
to
just
demonstrate
really
fast-
is
key
rotation
and
how
that
works
within
within
this
system.
So
any
key
within
the
system
can
be
rotated
and
kind
of
transparently
to
the
client.
B: What I'm doing here is switching out the key that signs the timestamp metadata. In case there are multiple keys, the rotation has to specify which key is no longer trusted and which key is going to be trusted in its place.
B
So
we're
I'm
just
going
to
use
the
snapshot
key
just
to
to
show
so
you
can
see
that
they
match
that
it's
being
used
so
so
rotate
the
key.
That's
used
there.
B
And
then
that
will
what
I'm
doing
here
is
I'm
resigning
the
root
metadata
using
this
new
key
and
the
root
method
has
two
different
signatures,
so
both
of
those
signatures
are
now
used
to
say,
okay,
so,
instead
of
just
trusting
this
previous
time
stamp
key,
we'll
trust
this
new
key
to
sign
the
timestamp
metadata
and
then
when
we
do
really
fast,
is
update
the
script
to
use
the
the
new
key
rotation,
because
timestamp
is
an
automated
process.
So
you
have
to
tell
the
script
which
key
to
use
to
do
that
on
mini
process.
B
So
I
just
have
to
do
that
really
quickly
and
then
and
then
see
we
publish
this
new
metadata.
B: What you'll see is that it's actually using this new key: the timestamp key now matches the snapshot key, because it's using the same key for both of these, and this is done transparently to the client. In real life you probably wouldn't switch it to the same key; you'd switch it to something new that is more secure or fresher, but this process can be used to do that.
B: Finally, the last thing I want to show is moving an image from one repository on the registry to another repository on the registry. It would be very similar to move it to a new registry altogether; I just didn't set up a second demo registry, but I can do that in the future if there's interest. So here's what I'll do.
B
I
will
move
our
target
from
let's
see
from
repository
zero,
which
is
my
very
creative
named
example,
repository
over
into
this
web
networks,
one
so
currently
there's
the
file1.txt
in
rabbit
networks
and
now
we're
moving
also
this
filezero
file
into
there
and
then
all
we
have
to
do
next
is
so
the
rabbit
networks
need
to
attest
that
they
actually
trust
this
file
zero
and
want
to
want
to
sign
it.
So
all
they
do
to
do.
That
is.
B: Publish, and then it should be available over here. What is it called? File zero.
B: Yeah, it is a Merkle tree, which is basically a specialized binary tree, so it does require some serialization. The idea is that it doesn't have to be updated every single time any image is updated.
B
It
can
be
updated
once
a
day
or
so
and
still
provide
a
day's
worth
of
rollback
protection,
for
example,
and
that
way
you
can
do
quick
updates
to
things
without
having
to
regenerate
that
tree
every
single
time,
and
then
it
would
just
categorize
updates
and
update
the
snapshot
and
timestamp
information
on
that
cycle.
B
Kind
of
whatever
time
period
makes
the
most
sense,
and
I
think
that
you
all
would
probably
know
what
time
period
better
than
I
would
something
on
the
order
of
a
day
every
day,
or
so,
I
think,
would
make
some
sense.
It
shouldn't
be
too
computationally
expensive.
I
think
even
every
hour
should
work,
so
it
would
be
done
on
that
on
that
cycle
and
then,
in
between
that
images
we
could
use
you
know,
clients
could
either
use
some
other
sort
of
protection
in
the
meantime
like
they
could.
B
Just
use
you
could
skip
that
check,
they
could
have
a
a
smaller
snapshot
file
that
just
includes
that
piece.
That's
used
in
the
meantime.
I
think
there's
a
little
bit
of
room
for
growth
there,
but
definitely
it
can
be.
It
does
have
to
be
regenerated,
but
it
can
be.
What's
it
called
grouping
stages.
A: I mean, I think the question we keep circling around is not the what or the how, because I know the Merkle tree is one way to do some more optimization around lots of repos in the same registry, and different registry operators categorize them somewhat differently. So there are lots of questions around: how do we do that in a performant, reliable, and scalable way?
B: Yeah, I think the big thing is that it just improves your security, because if you download something from any of these different repositories on the registry, and then you download something from any of the other ones, you get the same kind of rollback protection. And actually, one of the benefits of this Merkle tree solution is that even an ephemeral...
B
Client
can
get
this
rollback
protection
because
they
can
check
the
the
hash
of
the
smackdown,
even
if
they
don't
have
an
existing
one
on
disk,
which
is
pretty
powerful
and
so
yeah.
So
I
think
yeah.
The
big
thing
is
to
to
provide
that
that
protection.
Sorry,
I
lost
track
of
something
else.
I
was
gonna
say
anyway,.
A: So it's not clear there's a benefit even for the long-standing clients, and if we get to the ephemeral clients, where they're pulling one or two, maybe three artifacts from a much smaller scope of repos, we're just struggling with why we should try to figure out a perf-scale security solution to something that isn't a problem.
B
Yeah-
and
I
think
that
that's
definitely
something
we
should
continue
to
talk
about.
I
think
that
the
issue
that
that
you
know
that
we
see
and
that
we
worry
about
is
this
issue
of
of
replaying
metadata
and
if,
if,
if
you,
even
if
you
have
an
ephemeral,
client
or
whatever,
if
someone
is
able
to
take
that
just
like
a
plain
signature
file
or
a
plain
targets
file
in
the
the
tough
terminology
and
save
that
and
then
wait,
you
know
six
months
a
year,
another
female
client
comes
up.
B
They
replay
that
you
know,
then
you
have
an
image,
that's
six
months
behind
which,
in
the
security
standpoint
can
be
really
bad,
because
there
can
be
any
number
of
security
patches
over
that
period
of
time.
B
So
I
think
you
need
kind
of
something
to
ensure
that
the
timeliness
of
of
the
images
and
it's
easier
to
do
it
across
the
registry
instead
of
repository
by
repository,
because
it
just
gives
you
a
bigger,
a
bigger
scale,
and
so
it
gives
you
more
protection
because
it,
I
think,
that's
the
question
and
you
only
have
to
do
it
once
again.
A: Yeah, I mean, we get the rollback protection; it's certainly something we want to be able to add. The question is the scope, and then whether it's one month or two months or three: the longer the window, the more risk it carries, because there are vulnerabilities every day, so we totally get that. But the concept that it's easier to do one across all isn't necessarily true, because you're trying to maintain those at the scale of the number of updates that get pushed.
A
Also
the
assumption
of
what
you
have
access
to,
because
even
as
a
registry
operator,
while
we
technically
have
access
to
everything,
that's
in
the
registry,
we
of
course
attest
that
we're
not
accessing
things
that
we
shouldn't
be
doing.
So,
while
I
appreciate
that
you're,
you
know
making
sure
there's
no
no
everything's,
anonymized
and
so
forth,
we
are.
We
promise
our
customers
that
we
won't
do
anything
around
data
across
customers.
A
In
fact,
we
even
have
new
rules
that
are
forming
where
I
they
don't
want
their
data
even
to
be
used
in
anonymized
portions
there's,
some
higher
level
security
stuff.
That
they're
asking
so
again,
it's
not
that
it's!
You
know
it's
code
right.
We
can
write
anything
we
want.
B: Yeah, I think the downside there would be for those ephemeral clients, because if they only download once from a repository, say, then they basically don't get that rollback protection unless they do something to check against some external source, and what this Merkle tree provides is basically kind of an external check. If the Merkle tree hashes match up, then you know that is a valid metadata file.
B: Okay, yeah, I think that's definitely something we should talk more about. I'm trying to figure out how to kind of...
A
I
think
you're
trying
to
secure
that
hey
at
any
one
time.
This
a
female
client
could
ask
for
anything
from
the
registry
and,
let's
give
them
some.
You
know
good
protection
that
they
can
just
doesn't
matter
which
one
they
pull
from
as
as
long
as
these
time
stamps
match
you're
good,
but
that
assumes
that
a
registry
can
have
access
to
all
that
information
which
we're
saying
we
can't
like.
We
literally
can't
do
that.
It's
not
like
from
a
not
technical
from
a
security
liability
commitment
to
our
customers.
A
We
are
saying
we
cannot
do
that,
so
that's
problem,
one
so,
and
all
performance,
reliability
issues
are
irrelevant.
B: There's no information about any other repository on the registry within the file.
A: Nothing. On one of these larger calls, when the actual customers get on, I've been known to say Coke and Pepsi, because those are obviously competitors, and it came out, so I might as well be public when I say it. For those actually working at Coke and Pepsi: I'm trying to make sure you're staying secure, so please don't get upset that I'm using you as examples. We can never...
A
We
can
never
do
anything
that
their
data
has
any
knowledge
or
intermixing
or
anything
of
their
others
in
any
anonymizer
anyway.
So,
even
if
we
wanted
to,
we
have
a
challenge
that
we
shouldn't
be
doing,
but
I
the
the
problem
that
I
keep
on
coming
back
to
is
it's
I'm
not
I'm
still
struggling
on.
Why
we
need
to
do
that
across
multiple
clients,
because
it's
it's
not
even
that
there's
a
single
registry
like
customers
work
with
multiple
registries
over
time.
So
it's
not
like
hey.
A
If
we
can
just
get
this
for
this
one,
massive
docker
hub
and
it's
the
only
one-
that's
really
out
there
great
there's
docker
hubs,
there's
pie,
pies,
there's
npms,
as
you
know,
all
of
those,
but
we're
seeing
more
and
more
that
the
public
registries,
one
people
want
private
copies
of
them,
there's
multiple
public
ones
that
somebody
has
to
deal
with
so
there's
in
addition
to
everything
else.
At
best,
what
you're
offering
is
on
a
single
registry,
you
can
do
all
repos
and
that's
not
really
the
bigger
problem.
Much
less
do
we.
I
guess
I.
B
Guess
another
thing
that
you
could
do
is,
I
don't
have
to
think
more
about.
This
is
basically
you
have
the
root,
and
maybe
like
top
level,
whatever
metadata
for
the
registry,
just
basically
that
just
simplifies,
I
think
the
key
management
a
lot
to
have
kind
of
one
top
level
thing.
That
then,
is
downloaded
and
used,
except
for
obviously,
in
the
case
of
this,
this
map
file
right
or
anything
where
the
client
already
knows
the
keys
interest.
But
then
you
could
also
just
you
could
do
just
the
snapshot
per
repository.
B: Yeah, because the snapshot is mostly more of a server-management piece, whereas all of the targets keys, which are used to sign the actual information about files, are independent, and they would be very specific to that one repository.
A
I
like
the
if,
if
I
was
trying
to
take
something
like
docker
hub
and
I
want
to
say
all
the
docker
hub
content,
that
they
it's
official
content,
that
docker
is
certified,
regardless
of
where
they
got
it
from
and
docker
wanted
to
put
its
certified
key
on
it.
Okay,
that's
for
the
docker
certified
content,
but
for
all
the
other
content,
that's
not
certified
by
docker,
for
instance,
this
wabbit
networks,
one
that
we
refer
to
it's
built
by
externally.
A
It's
sent
to
dockerhub
in
fact
at
first,
it's
not
actually
certified
by
dockerhub,
because
we're
trying
to
show
examples
of
things.
So
I
wouldn't
expect.
I
mean
I
guess
you
could
say,
there's
time
stamp
data
on
that,
but
as
a
customer,
even
in
a
public
registry,
I'm
not
sure
that
makes
a
ton
of
sense,
no
exactly
makes
sense
in
a
private
registry
and.
B
So
like,
but
the
timestamp
actually
doesn't
have
much
to
do
with
like
rabbit
networks
themselves.
They
just
say
we
sign
this
one
thing
and
we
put
it
on
this
on
this
repository
or
this
registry,
and
then
the
registry
says
okay,
we
we
trust
you
just
to
trust
this
thing,
and
so
we'll
we'll
assign
this
to
you
and
we'll
say
this
is
when
we
uploaded
it.
This
is
like
when
we
received
it
and
then
anyone
else
who's
downloading.
B
It
can
check
that
the
time
they're
downloading
it
is
within
a
range
of
the
time
that
it
was
trusted
and
the
time
that
it
was
uploaded
and
if
there's
a
newer
one,
then
they're
not
trusting
the
other
one
they're
trusting
the
currently
uploaded
thing
from
weber
networks.
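That freshness check can be sketched as a simple window test (simplified; the window length and field name are illustrative, and a real client would also verify the signature):

```python
import time

def fresh_enough(timestamp_meta: dict, now: float, max_age: float = 86400.0) -> bool:
    """Accept signed timestamp metadata only if it was produced within
    `max_age` seconds of `now`; a long-saved replay fails this check."""
    return 0 <= now - timestamp_meta["signed_at"] <= max_age

now = time.time()
```

An hour-old timestamp passes; a six-month-old saved copy, the replay scenario discussed above, is rejected.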
A
Yeah,
I
I
get
it
I
just
I'm,
not
I'm
not
seeing
that
as
the
priority
scenario,
with
all
the
complications
that
we
keep
on
discussing,
because
they,
I
don't
know
if
it's
as
much
of
a
problem
for
that
one
registry,
where
I
get
something
as
opposed
to
I'm
actually
taking
it
from
docker.
I've
been
putting
it
in
the
acme
rockets
registry.
B: Yeah, exactly, and I think that's something ACME Rockets just needs to be in charge of doing, because that's theirs; they can make it automated, they can do whatever. But if they're trusting it, then they need to continually attest that they trust it, and that is something that they sign and that they want people to be using.
C: Yeah, I think the long-term theme is that there is a non-zero security improvement to covering the full registry, but it's very hard to articulate in a way that users could rely on.
C
But
another
point
is,
if
the
I
must
admit,
I've
forgotten
the
details
of
the
snapshot
screen,
but
if
the
overall
idea
is
that
the
snapshot
is
signed
and
updated
about
once
a
day
and
we
get
protection
on
the
granularity
of
days
and
not
for
production
of
rollbacks
during
the
day.
C
Well,
we
don't
really
need
a
cryptographic
structure
for
that.
We
just
need
to
sign
a
timestamp
every
day
with
the
current
state
and
that
scales
trivially,
because
we
can
just
do
it
for
every
single
ripple
and
we
don't.
We
don't
actually
need
even
the
timestamp.
We
just
can
rely
on
tls
as
long
as
it's
not
governed
by
by
some
syrian,
which
I'm
afraid
it
probably
is
yeah.
B: Yeah, I agree, and actually the traditional TUF method is to just list all of the current-state information, which is basically version numbers and the time, in the snapshot. The reason for this structure was to keep the separation between the repositories on the registry while still being able to provide coverage of the entire registry. So yeah, if it is separated down, then that's no longer needed.
C
Now,
what
I'm
saying
is
in
the
very
traditional
system,
the
snapshots
were
managed
by
the
individual,
individual
authors
and
they
were
sequential
and
the
the
snapshot
key
was
managed
by
the
authors.
So,
of
course,
they
had
to
be
consistent,
but
if
we
are
having
the
registry
sign
the
snapshots
and
snapshot
versions
every
day,
we
can
just
sign
the
snapshot
version
along
with
the
time
timestamp
and
that
that
gives
us
the
same
rollback
protection.
Doesn't
it
assuming
all
the
clients
have
synchronized
clocks,
which
is.
B: It gives rollback protection in much the same way. The main reason for this additional structure was two big things. One is scalability: if you have a lot of different repositories on a registry with a lot of different images, even just listing the version information can make a really long file, and this shortens it. The other is that it allows for separation between the repositories.
B
That
would
be
that
would
be
going
back
to
kind
of
the
the
notary
v1
model,
and
I
think
that
the
main
I
think,
concern
with
that
model
was
kind
of
the
trust
on
first
use
issue
where,
if
every
single
repository
have
it
has
its
own
base
structure,
you
have
to
figure
out
trust
for
every
single
repository
individually.
B
Oh,
that's
an
interesting
idea.
Yeah
I'd
have
to
look
into
that.
I
think
you
know
think
about
it
and
stuff,
but
I
think
that
might
might
work.
C: Well, the timestamp private key would somehow still have to be used for all the repositories; it would not allow you to serve parts of the registry absolutely separately. But then you already have the HTTP router at the front for the host name, so there is something that is shared anyway, I guess.
C: It would be one-directional: you would get the private key into the customer's separate container, but nothing from the customer's container would go outside.
A
Has
anything
to
do
with
what
pepsi's
data
is
there
there's
some
other
data
that
happens
to
be
used
across
both
of
them,
but
they
individually
have
their
own
security
content,
and
that
also
keeps
the
isolation
that
from
in
addition
to
the
security
boundary
we
keep
talking
about,
but
from
a
performance.
B: Okay, yeah, I'll definitely think about that more and see if there's anything I come up with, but at first glance it seems to make sense as a way to break that up.
A
Cool
well,
I
appreciate
you
guys
keep
on
iterating
on
different
parts
of
it.
So
that's
definitely
helpful
anybody
else
have
so
just
for
the
sake
of
time.
We've
got.
You
know,
15
minutes
left
approximately.
I
just
wanted
to
leave
time.
I
didn't
have
a
new,
updated
agenda.
We
have
a
couple
things
that
are
in
flight,
but
nothing
that
specifically
ready
to
kind
of
iterate
through.
A
I
figured
just
give
a
chance
for
folks
to
catch
up
with
what
we've
gotten
done
already
and
if
there's
any
questions,
it's
the
recording
is
good
for
people
listening,
but
it
doesn't
work
for
asking
new
questions.
So
I
don't
know
if
anybody
else
has
been
tracking.
What's
been
going
on
as
questions
that
we
should.
A: Okay, that's cool too. One of the things we always try to make sure of is that we're addressing everybody's style of communication, so we have the Slack channel that we've been tracking for the Notary v2 work, of course questions here, and any comments on the PRs; we'll address it in every channel we can. On the TUF stuff, back to some of what Marina was presenting:
A
We,
we
have
been
trying
to
make
progress
on
some
of
the
tough
work
as
well
so
several
weeks
ago,
we
realized
that
there's
a
lot
of
complications
here,
we're
trying
to
work
through
such
as
you
know
these
conversations
here
today
and
we
wanted
to
split
this
out
from
a
phase
one
and
phase
two,
because
there's
larger
end-end
workflow
pieces,
which
the
tough
metadata
will
be
a
part
of,
but
we
wanted
to
make
sure
we
kind
of
covered
other
unknowns
that
we
don't
necessarily
know,
and
if
you
remember,
we
were
kind
of
doing
the
sagrada
familiar
kind
of
model
where
we
don't
really
know
the
whole
piece.
A
So
until
we
until
we
get
sketching
it,
we
won't
really
be
able
to
communicate
and
everybody
can
look
at
like.
Oh,
I
haven't
thought
about
that.
If,
for
anybody
that
watched
the
kubecon
talk,
we
talked
about
a
sketch
of
a
bathroom,
and
so
I
had
done
the
sketch
of
a
bathroom
and
but
before
we
went
and
built
it,
I
showed
the
sketch
the
model
to
justin
and
justin's
like
where's
the
bidet.
I
didn't
thought
think
about
that.
A
You
know
we
don't
think
about
those
in
the
u.s,
not
until
covey,
but
we
run
out
of
toilet
paper.
So
we
wanted
to
be
able
to
get
that
whole
model
end
to
end,
because
we
know
there's
some
stuff
around
the
tough
metadata
we
have
to
think
about.
We
don't
know
about
the
other
pieces,
so
the
the
latest
we've
been
getting
there
is
we
have
the
signature
object
that
we've
been
kind
of
working
with
it's?
We
have
updated
with
the
jwt
token
format
or
civilization.
A
The
next
pieces
that
we're
working
on
is
how
do
we
get
that
information
in
and
out
of
our
registry
and
there's
been
a
bunch
of
conversations
around.
You
know
what
is
the
persistence
format?
What
do
we
leverage
that's
already
there
and
how
do
we
make
additions?
And
there
was
some
questions
specifically
today
on
ram
was
asking
some
questions
in
the
pr
around
the
impact
we
have
to
make
to
registry
so
and
this
one
actually
is
coupled
between
the
notary
working
group.
A
That
is
focused
on
a
signing
solution
that
could
work
within
and
across
registries,
but
we
also
need
to
make
changes
to
a
registry,
so
that
also
involves
the
oci
working
group
as
well.
So
we've
been
kind
of
having
one
foot
in
each
sandbox
there
to
make
sure
that
the
scenarios
we're
trying
to
cover
for
a
notary
are
covered
by
the
distribution
apis,
because
unless
this
works
across
all
registries,
we
didn't
really
meet
the
goal.
So.
A: Pretty good. We had a good call last week around that, where we got support to go down the index path. That means registries that already implement index, and already have to track the relationship between two things, in this case the signature and the artifact it's signing, can tie the hard parts of reference counting and garbage collection into the infrastructure they're already building. And then there's the idea that an index will be able to declare that it's not just a multi-arch container image: it is a signature object, it is a CNAB.
A: So that's the path we're currently working down. For those that might be watching, if you look at the Notary project on GitHub, you're starting to see some other repos get added there, because we need to make changes to the nv2 client that we've been discussing, which is just our prototype. But then, to make the end-to-end work, we need the distribution spec. We thought we needed the image spec to update the manifest and index schemas, but we're going to move that into the artifacts repo, so you see the artifacts repo being added there, and then ORAS also, oh, and docker/distribution, which we already had. The idea is that we want to be able to prototype the end-to-end experience under the Notary project and make sure we're comfortable with all the moving parts.
A
So
we
don't
get
all
these
different
groups
randomized
and
as
we're
comfortable
with
that
end-to-end
changes
that
we
would
need,
then
we
can
make
the
appropriate
well,
one
will
make
the
spec,
but
then
we
can
also
make
the
appropriate
prs
back
to
the
as
a
group,
because
we'll
know
what
the
end
end
is
and
we'll
know
what
the
final
pr's
would
look
like
back
to
those
upstream
repos
the
stuff
we're
thinking
about.
With
this
phase,
two
of
the
tough
prototype.
A
We
believe
that,
at
least
from
a
registry
perspective
we've
captured,
like
I
don't
know
if
it's
80
or
90.
so
there'll-
be
some
more
additional
work
we
have
to
think
about,
but
we
believe
from
what
we
know
today.
It
will
accrue
up
so
we're
feeling
pretty
good
about
that
so
far
anyway.
So
that's
the
idea.
We
want
to
be
able
to
continue
to
iterate.
A
Do
that
model
see
if
we
like
it
everybody's
perspective,
can
look
at
it
and
understand
from
their
perspective,
what
this
thing
looks
like
and
if
we're
comfortable
and
as
we
find
new
things
we'll
keep
on
iterating
and
when
we
get
to
a
stable
state
across
all
of
those
affected,
repos
we'll
get
the
spec
put
together
and
do
the
upstream
changes.
So
that's
the
progress
we're
at
today.
D: The prototype that you mentioned at KubeCon, is that the one under the repository named prototype-1, or something like that?
A
Yeah,
that's
a
great
question.
So
last
week
I
think
was
last
week
we
were
trying
to
figure
out.
Where
did
we
commit
this?
Because
we
were
doing
prs
on
top
of
the
pr's
on
top
of
pr's
and
different
people's
repos?
It
was
getting
crazy
to
figure
out
what
exactly
we
would
look
at.
We
were
going
to
merge
it
into
the
master
and
of
nb2
or
main
to
be
fair.
We
could
also
change
things
to
me
now,
but
then
we
were
debating
well,
isn't
the
root
of
nb2
going
to
be
a
reference
implementation?
A
We
don't
want
to
stick
the
prototype
there,
because
we
know
the
prototype
will
change
where
we
don't
necessarily
want
to
just
evolve,
but
we'll
probably
toss
it
bring
the
irrelevant
things
over
so
rather
than
create
yet
another
repo
for
different
prototypes.
What
we're
going
to
do
is
maintain
branches
in
the
nv2
repo
and
keep
the
root
available
for
the
eventual
reference
implementation.
A
So
right
now
you
see
prototype
one,
because
we
were
being
very
creative,
the
name
and
we
just
decided
to
get
something
done,
and
if
we
need
to
go
down
a
different
prototype,
we
can
do
that
and
then,
wherever
we
wind
up
landing,
you'll
see
in
the
root
of
nv2.
So
I
did
push
an
update
to
nv2
readme,
the
main
readme
just
to
explain
what
we're
doing
so,
those
outside
of
our
group.
A
If
they
just
come
in
and
look,
they
should
be
able
to
find
the
path
and
that's
the
progress
we've
got
so
far.
Yeah
okay,
stuff
you've
been
seeing
is
in
the
prototype
one
brand.
A
Sound
good
and
just
for
those
tracking,
we
also
saw
aws
added
artifact
support
in
ecr,
which
is
awesome,
because
that
is
the
base
of
a
lot
of
this.
That
and
they've
been
working
on
it
for
a
while.
This
isn't
like
a
magic
thing.
It's
something
I
know
that's
been
working
for
a
while
and
omar
paul
and.
A: ...and really, you're starting to see these things roll out to support these end-to-end scenarios without boiling the ocean. We've also been trying to generalize: are there some general metadata APIs that we either need for this, or that would make this a little easier? If we had a metadata API on all registries...
A
How
would
that
impact
a
signing
solution,
because
at
least
though
definition
of
metadata
in
my
head
is
it's
not
something
you
would
necessarily
sign?
If
you
want
something
like
that,
you
can
put
it
on
the
manifest
itself,
but
it's
just
like
you
can
add
signatures.
We
would
add
additional
metadata
and
the
only
reason
I'm
even
bringing
it
up
here
is
it's
playing
into
some
of
the
distribution
api
conversations.
We're
trying
to
have
is
figuring
out
like
how
do
we
not
keep
on
throwing
stuff
on
here
and
by
the
time
we're
done?
A
A: Okay, well, we'll watch on Slack for the Notary v2 conversations and PRs, and I'm hoping for some more progress on the distribution stuff by next week. There's progress being made, but we've got some churn happening that we have to work through, so we don't randomize too many people in the conversations we're having. So that's about where we're at, and thank you all for...