From YouTube: OCI Weekly Discussion 2020-03-11
Description
https://hackmd.io/El8Dd2xrTlCaCG59ns5cwg?view#March-11-2020
Chat:
```
00:01:48 Vincent Batts: https://hackmd.io/El8Dd2xrTlCaCG59ns5cwg
00:03:16 Josh Dolitsky: Feel like I just talked about some of these topics - happy to go last
00:19:56 jschorr: https://docs.google.com/document/d/1V4vaF9mSTBFbIbUUDH0mk-4KizCSFlv7oqA_4ZkD9a8/edit#
00:22:35 Steve Lasker: no pressure @vincent
00:41:53 Josh Dolitsky: I like the JWK auth specification - maybe that can be extracted into its own capability? And its dependency of this?
```
A
A
A
A
C
And then, if you export certain settings, like an environment variable, test push, test pull, whatever the other two categories we want to call, it would then run the requests that are scoped to that category. And when you're submitting for certification, you could actually denote that your registry has that functionality that is defined in the spec, but it's not necessarily the case that you're non-conforming if you don't do all of them. So does that make sense?
C
No, I can work on that. I'll work on getting the requests grouped, and I'll send a note to the email chain that's going on about this. You and Derek had put something in the mailing list that I think was a pretty good visual; it had a list of four categories that I thought made a lot of sense. But yeah, I'll group them out and write a proposal. I was just wondering if that sounded good. Yeah.
D
I think the goal would be: we don't want a registry that doesn't implement something that doesn't make sense for them to look like they're not conformant, and we don't want a registry that's doing something that's part of the spec to look like it's doing something super special. It's just a matter of: if I'm looking for a certain behavior from registries, this is what each of them supports. I don't know.
D
If, in the conformance thing, we want to list other features that are not in the spec: I think we don't want to turn this into an advertising thing. Maybe there is a link that goes to the registry's overview page, but I think it's just that, related to the spec, there are varying degrees of implementation. In fact, it'll be a great base for whenever Vincent gets his optional capabilities in; the things on that list are going to expand exponentially, so that'll be awesome.
C
D
C
I think what Joe is describing makes a lot of sense, but it should be written that way; right now it's just "here's all the endpoints". So yeah, let me think about that. I'll ping you, Joey, and Steve. And then this kind of leads into: so we have a dashboard that we put up.
C
I think the registries should have more ability to change how the tests run on their end. So, and somebody had suggested this, but does it make sense for that dashboard to go into Open Containers, or be hosted on an opencontainers.org URL? I mean, I don't really...
D
A
A
C
C
D
C
...for every single registry. So it's actually interesting, because, you know, I'd like to work with the registries so someone could configure it in such a way that it pulls from master from Quay, or pulls from master from something, and for the live ones it's running against the live site. So if ACR were to fix something, the results in this table would show up green the next hour.
C
No, not at all. This is just something that came out of me trying to experiment with the conformance testing against several places. But I think it makes sense for Open Containers to, for it to have theirs. You know, again, I don't think this is official conformance results, or we could talk about it with the OCI conformance or OCI certification repo and actually make it part of that process, and build the table out from people who have submitted conformance results. Yeah.
D
E
Just so you should know, just for the Project Quay side, we're planning to add it to our CI, so that if it doesn't pass green, it can't merge. So that's our plan. The only reason we haven't done that yet is because we don't support, like... so I've run the conformance test suite locally on Quay, and with, I think, one exception, where we're returning a 202 instead of a 204 or something, the rest of the issues seem to be primarily related to that.
E
D
C
I realize it's experimental, but I think the table should be drawn from the static results that are submitted. Like, if there's a table that's official and Quay is failing on master, that doesn't mean that the released version of Quay is a non-conforming version. So I think they should be responsible for submitting their results to Open Containers, and then the table maybe sources from that. But I do think there is value in continuously running, especially on hosted services.
C
D
Makes perfect sense; we should absolutely take the stuff you've got there and promote it. Just, like, once a day seems perfectly fine. But it's funny to say, we keep having these conversations where I keep arguing against ACR, but I would want to make sure that the non-hosted ones have just as much accuracy and visibility. And it doesn't mean that we should have to stand up Harbor every hour, or once a day, just to run the conformance testing. By the same token, I don't know how we hold them to it.
D
F
C
A
A
C
D
Just to your point, that's why I wanted to break this off as a separate conversation, because the bigger piece that I was asking, and it sounds like people are supportive, that's why I was asking it: should each of these online OCI teams submit, and what's the standard that others have done, and not just for being green? Is the plan just... no, I...
C
C
I'll send out an email that outlines exactly what I'm talking about. My only other topic, and then I'll pass the time, is: I want to publish... maybe we just don't do it, but I think that the conformance test should be published as an image. And I know Vince was talking about potentially an OCI org, but then it's like: are we choosing it? Does that become a political thing? Do we want to publish to multiple places? I don't care; I'm going to drop the topic. Someone give me credentials and I'll...
C
D
D
C
D
B
A
E
I also put it in the chat, so here's the link. Hi everyone, I'm Joey. I'm the tech lead of the Quay team; I'm also a former co-founder of it. I just want to give a little bit of background on the proposal. The proposal is fairly in-depth and detailed, and I'm not going to walk through the entire thing right now, because that would be a massive waste of time.
E
But I just wanted to give a little bit of background as to why I'm proposing this as an extension to the OCI, what problems it would solve, and a high-level description of how it works. So basically, the problem that I feel needs to be solved, and I think a bunch of other people tend to agree, is that today we have a lot of tooling that wants to be able to operate on changes that occur in registries.
E
This runs the gamut from things like security scanners all the way to things like Spinnaker, where, you know, a new tag gets pushed, and you want to scan that tag, or you want to take some action in response. Traditionally this has been solved through a couple of different ad hoc solutions. A bunch of the different registries provide webhooks, where you can register, like, a callback for when, say, a new tag is pushed, but they're obviously kind of ad hoc: each registry has its own format, its own support.
E
The kinds of events that can be used are different. We also have a general solution, which is to make use of the catalog and tags APIs, but those require polling, and in our experience on the Quay side, those APIs are incredibly heavy on our database. They are incredibly slow, and since they don't usually provide any information as to what changed, it results in a problem where, as the registry and repositories grow larger over time, the overall operation time for being able to determine updates slows down.
E
Plus, it requires a lot of state tracking on the part of the various clients that would be checking the catalog and tags endpoints; they have to themselves track what's changed since the last time they looked. So the goal of the proposal, as I stated, is to basically create the equivalent of a pub/sub solution for registries. This would be an optional extension, ideally supported via Vince Batts's optional-extensions proposal, but since I wrote my proposal kind of concurrently with Vince's one, it doesn't yet reflect that proposal; I have a comment in there.
E
So right now I have it homed under a path under /v2/ for events, but we'll probably change that; that's a minor detail. The basic gist is: it would allow what I hereafter declare as a client to register with the registry that it wishes to watch a portion, or the whole, of the events that occur in the registry. These events would be everything from tag updated or deleted, to repository created or deleted, and the client would be notified via webhook callback.
E
Whenever one of those events occurred within the watched subscription space, the clients would then be able to take whatever action they want: index it for security scanning, deploy it, provide new UI, send out their own notifications, kind of anything along the gamut. The key aspects of the proposal that I want to highlight briefly are: one, it is a callback approach. We have a slight variant internally of the proposal that made use of WebSockets instead, but it was deemed a little too complex.
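A rough sketch of the event payloads and scoping being described; every event kind, field name, and the prefix-based scope filter below are invented for illustration and are not taken from the actual proposal:

```python
import json

# Hypothetical event kinds mentioned in the discussion: tag updated/deleted,
# repository created/deleted. None of these names come from the real spec.
EVENT_KINDS = {"tag.updated", "tag.deleted", "repository.created", "repository.deleted"}

def make_event(kind, repository, tag=None, digest=None):
    """Build one webhook payload; raises on an unknown event kind."""
    if kind not in EVENT_KINDS:
        raise ValueError(f"unknown event kind: {kind}")
    event = {"kind": kind, "repository": repository}
    if tag is not None:
        event["tag"] = tag
    if digest is not None:
        event["digest"] = digest
    return event

def matches(event, subscription_prefix):
    """A subscription watches 'a portion or the whole' of the registry;
    model that here as a simple repository-name prefix filter."""
    return event["repository"].startswith(subscription_prefix)

if __name__ == "__main__":
    ev = make_event("tag.updated", "myorg/app", tag="v1.2", digest="sha256:abc123")
    print(json.dumps(ev, sort_keys=True))
    print(matches(ev, "myorg/"))   # inside the watched namespace
    print(matches(ev, "other/"))   # outside the subscription
```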
A
The big value difference in those two proposals he's talking about, webhook versus WebSocket approach, is that WebSockets can traverse, like, a NAT firewall, so you could have something hand over a WebSocket from behind a firewall, whereas a webhook would have to be publicly exposed to the registry. Good benefits and drawbacks to each, yes.
E
And then the second big benefit is that it uses essentially a self-subscription model, and I'll describe that briefly. It's a pseudo-OAuth model; it's loosely modeled on OAuth. The idea is that any client could make an API call to the registry and say: hey, I wish to subscribe to notifications for this set or subset of the registry. The registry can then respond with a URL that the tool or the client could redirect the user to, to approve that subscription, and then the subscription would be created.
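The self-subscription flow described above can be sketched as a toy in-memory model; the endpoint shape, field names, and approval mechanics are assumptions for illustration only:

```python
import uuid

# A toy registry modelling the OAuth-like self-subscription flow:
# request -> approval URL -> user approves -> subscription active.
class Registry:
    def __init__(self, base_url):
        self.base_url = base_url
        self.pending = {}   # subscription_id -> details awaiting approval
        self.active = {}    # subscription_id -> approved subscription

    def request_subscription(self, callback_url, scope):
        """Step 1: any client asks to subscribe; no pre-registered client
        ID or secret is needed. The registry answers with an approval URL."""
        sub_id = str(uuid.uuid4())
        self.pending[sub_id] = {"callback": callback_url, "scope": scope}
        return {"subscription": sub_id,
                "approval_url": f"{self.base_url}/approve/{sub_id}"}

    def approve(self, sub_id):
        """Step 2: the user visits the approval URL and confirms."""
        self.active[sub_id] = self.pending.pop(sub_id)
        return self.active[sub_id]

if __name__ == "__main__":
    reg = Registry("https://registry.example")
    resp = reg.request_subscription("https://scanner.example/hook", "myorg/*")
    reg.approve(resp["subscription"])
    print(len(reg.active))
```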
E
So this would allow arbitrary tools to connect to arbitrary registries without having to pre-register a client ID or client secret. This would all be done and verified using JWTs, so that every call made from the registry to clients, and clients to registries, would be validated using the other side's published JWKs. It doesn't have to be JWTs; I used them because that's what a lot of REST APIs are using.
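As a minimal stand-in for the JWT validation being described: the proposal envisions each side validating calls against the other side's published keys (JWKS), which needs asymmetric crypto, so the sketch below uses a shared HS256 secret instead to stay within the standard library. It does keep the important check raised later in the call: rejecting tokens that don't carry the expected algorithm.

```python
import base64, hashlib, hmac, json

def b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def b64url_decode(s: str) -> bytes:
    return base64.urlsafe_b64decode(s + "=" * (-len(s) % 4))

def sign_hs256(payload: dict, secret: bytes) -> str:
    # Header carries an explicit algorithm; verifiers must insist on it.
    header = b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    body = b64url(json.dumps(payload).encode())
    signing_input = f"{header}.{body}"
    sig = hmac.new(secret, signing_input.encode(), hashlib.sha256).digest()
    return f"{signing_input}.{b64url(sig)}"

def verify_hs256(token: str, secret: bytes) -> dict:
    signing_input, _, sig = token.rpartition(".")
    header_b64, _, body_b64 = signing_input.partition(".")
    header = json.loads(b64url_decode(header_b64))
    # The caution from the discussion: never accept algorithm-less JWTs.
    if header.get("alg") != "HS256":
        raise ValueError("unexpected or missing algorithm")
    expected = hmac.new(secret, signing_input.encode(), hashlib.sha256).digest()
    if not hmac.compare_digest(expected, b64url_decode(sig)):
        raise ValueError("signature mismatch")
    return json.loads(b64url_decode(body_b64))
```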
E
E
...on top of the sum total of tags and repositories and manifests in a registry, to be alerted about those changes without having to basically poll. And one final thing I should mention, before I kind of open the floor to what I imagine will be a withering level of criticism: there is a built-in component for something we call catch-up. The idea is: this is all well and good once you subscribe to events and events are starting to occur, but what about what already happened?
E
It's a simulation of all the events that have occurred up until I registered, or a subset thereof, and the registry could then basically publish those events as a way of having the client catch up to where it would have been had it registered from, you know, time index 0. And the recommendation I put into the proposal is that those catch-up events would be considered lower priority. This would be an optional thing, so that, you know, new events were coming immediately, and then if the registry found it had some...
E
...you know, space in the queue, for lack of a better term, it could issue these catch-up events. And also, likewise, if a client went offline for a period of time, when it came back online it could request catch-up from where it was to where it is now. Again, it's a way of saying: hey, I missed a bunch of events, can you get me caught back up to where I was?
E
This, in my opinion, would essentially negate the need for polling, and it would minimize the need to hit the catalog and tags endpoints. They still have value, but I think, in my opinion, they should be optional, because any tool that needed a fully global view of a registry, or even just a namespace within a registry, or even a repo, would have the ability to just register as a client of this solution, perform catch-up, and then no further calls would be necessary unless new changes came along.
E
So the proposal does allow for that. It would depend on whether we want to standardize what the catch-up hash or code is. We could decide to make it a time-based code and make that part of the formal proposal, in which case then, yes, we could. My current proposal doesn't define whether that's registry-specific or whether it is a datetime; I'm welcome to suggestions on that.
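If the catch-up code were standardized as a time-based cursor, as floated above, the registry side might look something like this sketch (the log structure and names are invented):

```python
import bisect

# A sketch of the time-based "catch-up code" option: the cursor is simply
# a timestamp, and catch-up replays every logged event at or after it.
class EventLog:
    def __init__(self):
        self._times = []   # sorted timestamps (seconds)
        self._events = []  # events parallel to _times

    def record(self, ts, event):
        i = bisect.bisect_right(self._times, ts)
        self._times.insert(i, ts)
        self._events.insert(i, event)

    def catch_up(self, since_ts):
        """Replay everything at or after the client's last-seen time code."""
        i = bisect.bisect_left(self._times, since_ts)
        return self._events[i:]

if __name__ == "__main__":
    log = EventLog()
    log.record(100, "push app:v1")
    log.record(200, "delete app:v1")
    log.record(300, "push app:v2")
    print(log.catch_up(150))  # ['delete app:v1', 'push app:v2']
```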
E
The only reason I didn't make it a datetime before was because it means the implementation is a little more complex for registries. Like, on the Quay side, we have time machine, so that's trivially easy for us to do, but for other registries it may be harder if they don't keep time information as to when things were pushed. So I don't know; this is where the community really needs to give me some feedback.
H
I would say that seems ambitious, just to be able to go back before a subscription existed. So we model it based off how Kubernetes watches work: you get a watch token, and you can reuse that token if you lose connection, which WebSockets often do, and then it will fast-forward you all the events since that watch token. But you can't just choose a time before you have a watch token, and those watch tokens tend to expire, I think, around 20 minutes.
D
E
As I mentioned, on Quay we have this built into our data model. I would be shocked if there isn't a registry implementation out there that doesn't keep track of the date. Well, not necessarily; things that are based off of, like, just a flat-file space will probably maybe be able to use the mtime on the manifest, but I'm not a hundred percent certain that they would have the time codes.
E
I didn't want to put that in necessarily, because I wasn't sure what the cognitive overhead of doing so was. But if we find that the vast majority of current implementations support some sort of time-based lookup, yeah, I mean, it makes perfect sense to be able to say: hey, I want to be able to replay not just from the beginning, but from this particular time code, because then it's pretty straightforward to do. So again, this is where community feedback is going to be super important also.
E
At a certain point, yeah. So one thing I should mention, on the deletion side, because it can be a little confusing, is, from my personal perspective: if a tag has been pushed and then deleted before you subscribe, then the catch-up would not have either of those events, because they are inverses; they've basically created a no-op. So for most clients, or, I'm sorry, for most registries that would implement...
E
...this proposal: if a tag was created in a repository, and then six months later that tag was deleted, and then a month after that the subscription came in and said "hey, catch me up", the registry would essentially just look at the current tag list and issue synthetic push events for each of those tags. It would not have to go back and, say, issue a push and then a delete after that. That's out of band, and I don't think it provides any real value to new clients. Agreed.
D
E
The idea is that, when a tag is added or a tag is deleted, all of the reporting is tracked per subscription, right? So if it was deleted an hour ago and I haven't yet reported it to this particular client, then it's the job of the registry to report those. So my guess is, the way registries will implement this is they'll keep track of the changes, and then they'll only remove a change from their tracking queue...
E
...once all subscriptions have been caught up. And if they haven't been, then it's up to the registry to either keep that information around indefinitely, or up to a point where they say: okay, we're not going to support catch-up after a certain point. Again, on the Quay side we have time machine, so we keep that full set of information for however long the time machine window is, and if a subscription comes in from after that point, we're just going to say: hey, you're too far out.
D
G
E
D
An example of this was, you know, we've had ACR for about four years or something, whatever it is, and it's not like the others weren't out; I think ECR was out, maybe GCR, sorry, I don't remember. The point is, when we first came on the scene and we asked the scanners "hey, support ACR", they were polling us to death, like Joe is talking about, on the catalog and the tags APIs. We said: hey, we have this webhooks implementation.
D
Can you use that? And it was different than somebody else's, and different than somebody else's, and basically they said to us: look, you guys aren't big enough yet. They didn't quite say it, but this is ultimately what they meant: we weren't big enough yet for them to care; they weren't going to write special code for us. So what this proposal does is it inverts that. I'll pick on Harbor, and I'm just picking on them because, in my mind, Harbor is new to the scene.
D
You know: should Aqua, Twistlock, and the others have to write code specific to Harbor, or does the burden just get put back on Harbor? If they just implement Joey's, you know, this new distribution-spec optional API, then they do the work, and then they get all of the scanner community, not for free, but they don't put the burden back on others. That's kind of the model here. And it also helps us registry operators that have to support these things that kill us.
D
H
D
I was curious if anybody else had some opinions. So we've had webhooks for a while, and it is a challenge, because it's kind of like FileSystemWatcher notifications: if you happen to miss just one, you never trust it again; there's no concept of voicemail. So what Joe and I were talking a little bit about is: is there a way to do a durable endpoint, as an extension, that can be cloud-specific? So for Azure it would be, you know, Azure Queue Storage, and, you know, AWS and others have similar things.
D
A
A
That's my first step. That's where, even when I was chatting with Joey about it, webhooks are simpler, and I think speak more to a lot of the current stuff; like, a lot of things are just webhooks, and people are familiar with that. But it would be simpler in a lot of ways for the client, and on the pub/sub implementation side, and then, along the long haul, it would ease up any kind of library, or, like, you're standing up...
A
...you know, somewhere to look at everybody's still-client-library side, where you actually make the query and then get the requests coming in afterwards. I almost think, if somebody's trying to curl something, we could make that happen, but I'm not truly opposed to either one. I'd just start with the simple one, have the conversation, and evolve it from there.
E
E
D
E
Well, one, I want to get some feedback on it and see if we're missing anything obvious, because I'm sure there's something in there; maybe some of this tombstoning stuff, where we're a little bit thin on certain things. And then, after that, we need to make a decision as to whether webhooks versus WebSockets is kind of the road forward. And then I was going to start writing a very simple prototype implementation, probably on the Quay side, just because I know that codebase the best, and see if it works. That was basically what my plan was.
D
E
Could... I mean, we could rework the proposal so that the actual notification and registration is done in somewhat of a generic fashion. The only concern that I have there is that we need to make sure that it's discoverable, and we probably still need to have a default solution, right? So, like, my hope... webhooks seem like the best default, and then maybe we have, and I know Vince Batts is about to give me an evil glare...
E
But maybe we have the extension to the extension, where we report that we support webhooks and WebSockets, or webhooks and this events-plus, or webhooks and CloudWatch, right? And so then it's extensible for everyone to employ whatever they want, but the standard thing is always: we always support webhooks, or something. And on Quay we'd probably do webhooks and WebSockets, because they're generic and they work. Josh had a question.
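A sketch of the "extension to the extension" idea just described, where a registry advertises which notification transports it supports, with webhooks as the always-available baseline; the advertisement shape and transport names are invented for illustration:

```python
import json

# Invented shape for a registry advertising its notification transports
# ("webhooks and WebSockets", "webhooks and CloudWatch", ...), with the
# webhook transport as the always-present baseline from the discussion.
def advertise(extra_transports=()):
    transports = ["webhook"] + [t for t in extra_transports if t != "webhook"]
    return {"extensions": {"events": {"transports": transports}}}

def negotiate(advertisement, client_preference):
    """Pick the first transport the client prefers that the registry
    offers; fall back to the baseline webhook transport."""
    offered = advertisement["extensions"]["events"]["transports"]
    for t in client_preference:
        if t in offered:
            return t
    return "webhook"

if __name__ == "__main__":
    ad = advertise(["websocket"])
    print(json.dumps(ad))
    print(negotiate(ad, ["cloudwatch", "websocket"]))  # websocket
```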
E
The JWK auth specification: as I said, I kind of just took that out of the ether, because we use it for other things. But I really like the idea of API calls going back and forth between tools always being validated by the other tool's reported key or keys, and JWKS and JWTs seem to meet that need, so long as you do it correctly, like: don't trust algorithm-less JWTs. So one thing we could do, Josh, is we could definitely define a... like, if you're writing an extension...
E
E
That's actually not a bad idea, and we could even pop that up to the extension level, right? And so, for lack of a better term, we could have two classes of extensions, and if one is a client talking to the registry, and vice versa, it could make use of a client-JWT or bi-directional-JWT thing. We could break that into its own proposal and then say: pub/sub uses it, and if we ever add something else, it uses it, like if we were to add a dedicated security-scanning API thingy.
A
And, sir, I'm glad to see you finally bubbled up to the list.
A
I
I
So, like you just said, Steven pinged me on this discussion. So I moved all of the content that we had in the image-spec PR, and I moved it to the artifacts repo. I think the initial discussions were around having this as either an additional MIME type, or a suffix addition to media types in general. So I put everything in the PR; here it is, basically most of the same information, rehashed to be more generic for artifacts.
A
So maybe this is fine, and this is where I'm curious to hear from Steven, from the other artifacts maintainers: is this going to break down into maybe even just, like, a machine-parsable document at some point, that has, like, key-value pairs of MIME type pointing to a document? Or how do we want to handle this, or is it just this kind of document?
D
A
So this had come up a couple of times in a couple of different contexts. Brandon and I chatted at the Container Security Summit, but the context of that discussion was about some of the TPM work that's happening and whatnot; that was secure enclaves and how that works with encrypted layers. And then, you know, I think part of the bigger story made more sense to me than it did before that moment.
A
We should get back to reviewing that conversation, because it kind of stalled out on the image-spec side. But when we were doing the review last week, it was like: oh, this is pretty much an artifacts discussion; this really ought to be over in artifacts. So Brandon has done the needful to effectively move the PR over to artifacts.
D
I was going to spend some time in artifacts this afternoon; I was going to do it this morning also. But what I'm trying to understand here is, this one seems... let me ask the question: is this how you do encrypted layers, specific to a runnable image, or is this how you do encrypted blobs, regardless of the media type?
I
Initially, I think the discussion of this was around encryption of layers, but then I think in the conversations back then we were saying that it should be more generic, so it could be applicable to any type of OCI blob. The current implementations today only handle layers; that's why it's kind of written within that context, but the mechanisms are pretty generic. It's just that, I think, the idea of how to generically handle the implementation of this has been kind of like the gatekeeper to writing this document to be generic for config or manifest or anything.
D
So, what I'm trying to wrap my head around... over to you. So your thinking is that any artifact type, whether it be a Helm chart or a runnable image, or, you know, Singularity (well, I'll take Singularity out of it, just for...), but other things that want to push to a registry, that they could encrypt their blobs. Yeah, that's what you're saying. Okay.
D
Or they encrypt their blobs. In images, we think of the blobs as having a layer meaning, because they overlay each other. But what you're kind of getting at is: if an artifact type wants to encrypt its layers, its blobs, here's a pattern for doing it. But wouldn't each tooling, for a runnable image or a Helm chart, need to know how to encrypt and unencrypt those blobs? Yeah.
I
I
I
D
I'm trying to think about the end-to-end experience. Like, so let's just take a runnable image for a second: I'm sitting on my laptop, I, you know, do a docker build. What do I do between the docker build, the CLI that I'm using, and pushing it to a registry, so that my layers, my blobs, get encrypted before it goes into the registry? And then, when I deploy it to a cluster, let's just say it's a containerd host, how...?
G
D
I
So we've done this exercise with a couple of tools. So on the building side, what we do is: in the build command... well, not in the build command, but in the push command, we have an additional flag that we introduced into the CLI, which specifies the encryption key, which is asymmetric, so it's a public key. And so the tool does encryption of the layers, and adds annotations to the manifest, before uploading it to the registry. So you...
D
I
So we have it configured; I'll put the link in the meeting notes. So containerd and CRI-O now have capabilities to specify a key path in which the private keys will be located, and if it actually bumps into a layer that has this particular media type, it's going to try and decrypt it using those private keys. Okay.
D
So, the way I kind of... that's why I said this is interesting, because it's a different pivot. What I think you're basically saying is: for any artifact type, if you want to encrypt your layers, here's the standard pattern to do it. But each client for that media type, that artifact, would have to support it; it's not really an automatic thing. It's "here's the pattern". Okay. And has the image or runtime spec adopted this? That queues up Vincent's next topic.
D
D
I
D
A
The part where it could be a little bit tricky is navigating what you were just saying, like what clients should and must do. The outlook is pretty much: if they pull down something they don't know what to do with, they should fail, and that should be expected. Is that what you're asking? Yeah.
D
I guess what I'm saying is: this is a slight pivot. This isn't a new artifact type; it's actually a new type of layer that specific artifacts could implement, and there is a well-known pattern for doing it, but it doesn't come for free, and the toolchain around that artifact type needs to know about it. What I like about this is, it is a different pivot on it; it is a way for us to... well.
A
To be fair, it's kind of bridging something that I thought was going to be a slow, slow migration with the artifacts repo. But this is effectively making one of the jumps that Aleksa and I have brainstormed for the last little while with, like, OCI v2, which has got all kinds of people excited: here's this thing that's, you know, a bunch of chunks to go fetch from different places, possibly the registry, and it's effectively something that clients should know what to do with if they want to use it.
A
J
Yeah, so one thing I was just kind of wondering, like, my problem with it is: if I have the encryption/decryption key, you know, I may as well just... there's no point. Or, you know, I have to share the encryption keys somehow, and obviously you're not going to tackle that in this spec; that tends to be a really difficult problem. And so I was kind of wondering what the use cases are for this, like how you get around the problem of key sharing. Yeah.
I
I
There's a way in which we distribute the keys, and we ensure that only particular nodes have access to the keys. So this is in the form of an attestation. This is, you know, where the attestation and TPM stuff comes in. So if we are able to have a log, or show evidence, basically, that a node had a particular attestation, that it was only given the key, and that it was delivered in a safe way, because there's, you know, the TPM or whatever, so...
I
J
Yeah, now that I'm thinking about it, I guess it makes sense, like, within a cluster. I see it being more problematic, you know, when you're sort of sharing with, like, clients or whatever. But it would make a lot of sense, I guess, you know: you've got a Kubernetes cluster, you want to store things in a remote registry, but it's sensitive information. I guess, okay, yeah.
A
Thanks, thank you, Brandon, so thanks for that as a PR. I've made a few nit comments on it, but please, everyone, review that; that's where we're at. So the last item, the last item that I'd added after the fact, you know, after the meeting had already gotten started, was the extension proposal. A couple of people alluded to it; it's very simple. I sat on it, thinking about it, for longer than there is any beef to the proposal itself. The PR is here in the notes; please take a look at that.
A
It's just a way to do these kinds of additive things. It'd be neat if this gets into distribution 1.0, but I'd understand if it doesn't; it should stay additive even if it doesn't make it into 1.0. But that's about all; that was just a comment, if we had time. So, at the top of the hour: thanks, everybody, for joining. Please add any notes that you see necessary to the doc, and I will see you all online.