From YouTube: Kubernetes Community Meeting 20190815
Description
The Kubernetes community meeting is intended to provide a holistic overview of community activities, critical release information, and governance updates. It also provides a forum for discussion of project-level concerns that might need a wider audience than a single special interest group (SIG).
See this page for more information! https://github.com/kubernetes/community/blob/master/events/community-meeting.md
B
C
Essentially,
with
Q
plus,
we
are
solving
the
discovery
and
binding
problem
of
custom
resources.
This
is
a
rough
outline
of
what
I'm
going
to
do
so.
I
have
a
few
slides
I'll
in
the
first
four
minutes,
walk
through
them
and
then
give
a
demo.
Essentially, if you look at as-code systems, like infrastructure as code with CloudFormation and Terraform, there is a set of APIs that those systems use to declaratively create and orchestrate technology stacks.
C
The set of APIs to declaratively create those resources is sort of pre-baked. With Kubernetes, though, that is not the case, because we can extend the set of APIs of a Kubernetes cluster by adding custom resource definitions, or operators. The challenge then becomes: how does a microservices developer or application developer actually discover, bind, and create technology stacks from these custom resources? That's the challenge which KubePlus is trying to solve.
C
So essentially, we have focused on multi-operator environments. You can think of it as a platform layer: a custom platform, where you have extended the Kubernetes cluster with whatever operators you want, and the set of custom resources they define essentially represents a platform definition as code. So if you look at the persona of a microservices developer who is going to need to use these custom resources, the kind of discovery, binding, and orchestration challenges they face are: what kinds of custom resources exist, what kinds of functions do they support, and how do I bind them together?
C
The binding may involve one custom resource utilizing another custom resource, either its spec properties or some sub-resource that the other custom resource creates. And ultimately, how do I represent a complete set of custom and built-in resources as a stack? So those are the discovery, binding, and orchestration challenges. To motivate the sample scenario of the demo, this is the example that I'm going to use. This example consists of a standard web application, Moodle, which is an e-learning application.
C
It uses a MySQL database as the backend, and one way to create a Moodle stack is to use two operators: one for MySQL and another for Moodle. The MySQL operator manages the MySQLCluster custom resource, and the Moodle operator manages the Moodle custom resource.
C
The key thing to note is that the binding between these two resources happens through the spec properties of the Moodle custom resource: the name of the underlying Service that the cluster1 MySQLCluster instance creates needs to be provided there in order to create the stack and have the Moodle custom resource use it.
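As a rough sketch of that binding (the API group and field names here are illustrative, not the operators' exact schemas), the hand-wired version of the Moodle custom resource looks something like:

```yaml
apiVersion: moodlecontroller.kubeplus/v1
kind: Moodle
metadata:
  name: moodle1
spec:
  # Name of the Service created by the cluster1 MySQLCluster
  # instance, copied in by hand; this is the binding the talk
  # wants to automate.
  mySQLServiceName: cluster1-mysql-master
```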
The question is: how does an application developer discover this, and can we do the binding in an automated manner? That's where KubePlus provides certain things which are helpful. So, for discovery:
C
What we provide is a set of new endpoints which help an application developer figure out additional information about custom resources, information that essentially goes beyond what is available through kubectl explain on the spec properties. For static information, there is a "man" kind of endpoint, which provides a way to get man-page-style information; and for dynamic information,
C
This
is
the
composition
endpoint,
so
he
is
a
output
of
using
the
composition,
endpoint
to
figure
out
the
dynamic
composition
tree
of
the
my
sequel,
the
cluster
one
instance
of
my
sequel
customer
resource,
and,
if
you
notice,
as
the
output
of
this,
we
can
see
the
specific
service
name.
Cluster
1,
my
sequel
master,
is
one
of
the
services
that
is
visible
as
part
of
the
composition
tree.
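A composition tree of that sort looks roughly like the following (the child resource names are illustrative of the demo, not an exact capture of the endpoint's output):

```yaml
# Dynamic composition of the cluster1 MysqlCluster instance
MysqlCluster/cluster1:
  StatefulSet/cluster1-mysql:
    Pod/cluster1-mysql-0: {}
  Service/cluster1-mysql-master: {}   # the Service the Moodle spec must reference
  Service/cluster1-mysql-nodes: {}
  ConfigMap/cluster1-mysql: {}
```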
The next task is: now that I know this information, somehow bind these things together in an automated manner, and for that we provide a couple of functional constructs.
C
So one is ImportValue, and another is a label function, and what they do is run-time resolution of custom resource properties and sub-resources. As an example of how it looks: this is the YAML spec of the Moodle custom resource. If you notice, in the MySQL service name field, instead of directly hard-coding the actual name of the underlying Service of the cluster1 resource, I am providing it using the ImportValue function, and the parameter to that is a fully qualified sub-resource name.
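Sketched in YAML (treat the Fn::ImportValue spelling, the argument format, and the field name as illustrative of KubePlus's construct rather than guaranteed exact syntax):

```yaml
apiVersion: moodlecontroller.kubeplus/v1
kind: Moodle
metadata:
  name: moodle1
spec:
  # Resolved at admission time by KubePlus instead of being
  # hard-coded; the argument is the fully qualified sub-resource
  # name: custom kind, namespace, instance, then sub-kind.
  mySQLServiceName: "Fn::ImportValue(MysqlCluster.default.cluster1.Service)"
```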
C
So the first part is the kind, the custom kind, which is MysqlCluster; that is followed by the namespace and the name of the MySQLCluster instance, followed by the sub-kind. And then, if there are multiple instances of the sub-kind, we have a filter predicate which allows you to filter the one that you need.
So finally, let's say I want to define all of these resources as a group, to say that this is a stack that I want to orchestrate.
C
For that, we have the PlatformStack CRD. The PlatformStack CRD does not actually deploy the resources; it just defines the resources and their dependencies.
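A PlatformStack object along those lines might look like this (the group/version and field names are hypothetical, chosen only to show the "declare but don't deploy" shape):

```yaml
apiVersion: platform-as-code/v1
kind: PlatformStack
metadata:
  name: moodle-stack
spec:
  # Declares the members of the stack and their ordering;
  # creating this object deploys nothing by itself.
  resources:
  - kind: MysqlCluster
    name: cluster1
  - kind: Moodle
    name: moodle1
    dependsOn:
    - MysqlCluster/cluster1
```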
So that's what is going on at a high level, and now let me switch to the demo. This demo is also available on YouTube, so let me see if I can give the live demo, in case my minikube fails for some reason... yeah, looks like it's not returning, so let me just run through the video. What this demo shows is: on my minikube, I have already installed
C
KubePlus, and I have installed the operators as well. So those are my KubePlus and operator pods. The first thing I can do is get the man page to find out information about the Moodle custom resource. I can do something similar with the MySQLCluster custom resource and get the man-page information, in which you can find out things like what kind of name needs to be provided, and so on.
C
After that, we are going to create the platform YAML, which provides me a way to define the different dependencies: my Moodle custom resource depends on the cluster1 custom resource. So now I'm going to create the platform YAML, and then, once I do that, the next thing is: if I try to create the moodle1 custom resource, it's going to say that the dependencies are not satisfied. So now let's go and actually create the dependencies. One thing it needs is a Secret.
C
Another thing it needs is the cluster1 MySQL instance. So now I'm going to create cluster1, and while cluster1 is getting provisioned, we should be able to see the composition of cluster1. So let's just see that cluster1 is getting created.
C
And so that's my composition output. This is the dynamic composition tree, which represents all the resources that were created, and as you see, one of the resources is the Service object, with the name that we want to specify in the Moodle custom resource. So in the Moodle spec, if you look, I have used the ImportValue function to define the binding. Then, once we create it... let's just make sure that cluster1 is actually up; it takes about 90 seconds or so to get created.
C
So the dependent resources are now created, and let's look at the describe output of moodle1, because that shows that the resolution has happened. So if I do a describe: yeah, there it is, cluster1-mysql-master. That resolution is what we were after; in the spec we had specified ImportValue, but then the resolution happened. So this is the main thing that
C
sorry, yeah, that KubePlus provides. If you look under the hood at what's going on: we define certain annotations which are to be added on the CRD definition. There is the composition annotation, which defines all the underlying resources that are created by an operator for managing a custom resource, and then a usage annotation, which is essentially a ConfigMap with whatever information you want to provide.
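Roughly, the annotated CRD looks like this (annotation keys follow KubePlus's platform-as-code convention, but treat the exact spellings and values as illustrative; apiextensions.k8s.io/v1beta1 was the CRD API of this era):

```yaml
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: mysqlclusters.mysql.example.com
  annotations:
    # Underlying resources the operator creates for each instance;
    # feeds the composition endpoint.
    platform-as-code/composition: "StatefulSet, Service, ConfigMap, Secret"
    # Name of a ConfigMap holding man-page style usage documentation.
    platform-as-code/usage: mysqlcluster-usage
spec:
  group: mysql.example.com
  names:
    kind: MysqlCluster
    plural: mysqlclusters
  scope: Namespaced
  version: v1
```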
So there are three components: there is an aggregated API server, which provides discovery; a mutating webhook, which provides binding and resolution; and then the PlatformStack CRD, which is just for dependency declaration.
C
So, as an operator developer, you include the annotations on your CRD definition and install KubePlus on your cluster, and then application developers should be able to use the endpoints and the binding primitives to define the binding. As far as comparison with other tools is concerned, the main thing is that we focus on runtime resolution in KubePlus. What I'm looking for is input on the discovery and binding primitives; if there are any suggestions on what additional information could be helpful, I would be happy to hear about that. And yeah, that's all I had. Thank you.
B
D
Thank you, Chris. Hello, Kubernetes community! My name is Lachie; I'm the 1.16 release lead. So, news out of the release team for this week: we cut the tag for 1.16.0-beta.0, so we've created the release branch in upstream Kubernetes, which is a great milestone for the release team, so thank you to all involved. Next week is week 8, which means we're coming down to the last four weeks of the release.
D
So burndown begins, where we start to meet more often and burn down all the enhancements that need to go in, and start getting all our ducks in a row to get 1.16 out the door in mid-September. Beta.1 is being cut next Tuesday, on August 20th. And then, for those who are part of the teams that are writing the enhancement code: code freeze is August 29th, so this is a reminder to get all your associated PRs in and merged before the end of this month. That would be very much appreciated.
D
As far as patch releases are concerned, there haven't been any cut this week, although we're targeting 8/19, which is next Monday, for 1.15.3, 1.14.6, and 1.13.10. The cherry-pick deadline is today, so if you have things you would like to go into those patch releases of the currently supported versions of Kubernetes, you need to make sure they merge today.
D
A
You can follow along with me, if you would like, by going through the meeting notes. I wanted to show you today a dashboard I started working on earlier this week, to try and give us some statistics about the release-blocking jobs and their health overall. Just walking through these graphs real quickly: this shows the daily failure rate of the release-blocking jobs.
A
This shows the release-blocking jobs that have been continuously failing for n days, and how many days they've been failing. This is the daily flake rate of the release-blocking jobs, for those release-blocking jobs that are flaking. These are the worst flaky jobs over the last week, and over the last week these are the flakes that have happened the most. So, for example, I could click on this and view it, and I could copy this text.
A
I could go over to go.k8s.io/triage and copy-paste this into the test field. The test field is a regular expression, so those brackets get interpreted specially; so I'll drop them, I'll hit enter, and now I can see everywhere that this test has failed. I can see that it used to be a lot worse last week, so it seems to be mildly better this week. I can scroll down, and I can see.
A
Continuing on, I can also see the 99th-percentile duration of the release-blocking jobs, the average interval between release-blocking job runs, and, out of curiosity, the average number of tests that get run during each of these jobs. So I can tell, for example, that somebody's been really awesome and has been adding integration tests, because that number has been going up. You'll notice I have these little threshold lines here, and these threshold lines represent release-blocking criteria.
A
So, if I filter by the max here, I notice that the Bazel test and Bazel build jobs are the ones that are peaking up here. It turns out that's because they're postsubmits, so they only run when code has been committed, instead of periodically, so they're blowing that threshold. I can also tell that the GCI-GCE serial job is blowing that threshold.
A
You know, it's running about every 7 hours, which doesn't seem like it's giving us a really great signal. If I remove that, the remainder of these jobs are running under the threshold to be considered release-blocking. So I have a PR out to address these, and I think next we should talk about how we can make the serial job something that can run a little bit more frequently.
A
There are similar thresholds for the release-blocking duration; again, I think serial and alpha and some of the other jobs are maybe things we want to talk about. And then, similarly, what I'd love to be able to do here is boil this down into: this is the probability that you're going to see a green release. It turns out we've never had the opportunity to see a green release; you have to go back to about June 7th to find a day when at least something wasn't
A
a hundred percent. So I'm hopeful that this provides us the opportunity to better identify our test health, and to see if we can do a little bit better about it.
You can also use this drop-down here to filter against specific things, like I was doing the other day to verify the relative health of the end-to-end job, the integration job, and the verify job, to get those statistics looking like that. This list is sadly hard-coded.
A
These are slightly different from the metrics that are available here; it's unclear to me how helpful some of these things have been for folks. This larger dashboard here looks at everything for all repos, and it also computes flakiness relative to the number of PRs that have seen flakes, whereas these new dashboards that I've linked in the doc are just about the jobs for kubernetes/kubernetes that either block merge or block the release.
A
I have more links in the doc that I won't drag you through, but if you're curious about how these metrics are being computed, or about the actual queries, I'll just show one (you can avert your eyes, children) to see how we're getting this data out of BigQuery. You're welcome to follow up with me. Again, this is something that anybody is capable of: constructing a query and putting up a dashboard on their own.
B
All right, thanks, everybody. Cool. I didn't avert my eyes, so I can't continue; sorry, bye, everybody... no, I'm kidding. Aaron, thank you very much for the update, that was very cool. SIG update time! Just a reminder, SIG leads: there's a SIG update schedule. It is linked in the notes; please make sure you're aware of it and of when your SIG is up next.
B
E
All right, I think you guys can see my screen. I think I picked the wrong one, but this is okay; this is fine. All right, so I wanted to give you all a quick update. I'm Mo Khan, one of the co-chairs for SIG Auth; I think Tim did the last one. Continuing with what we discussed in our last update: the overall theme of the SIG right now is that we are moving features towards GA where we can, and taking a more deliberate approach with the more contentious ones.
E
So, if anyone's familiar with the CSR API: it's been beta in kube since, I believe, 1.6. As you can imagine, that was a long time ago now, and it is effectively an API that, even at the beta level, an incredible number of production clusters rely on to function correctly. So we want to push it past the beta mark and get it to v1. To help us get it there, Mike Danese, one of the other co-chairs, retroactively wrote a KEP for it, so that we can kind of outline:
E
this is where the API is today, and these are the known issues with the current shape of that API. We will then use that KEP to effectively define those, and that is our path towards saying: OK, these are the things that we have to have in this API before it goes to v1. And probably the hard part of that is deciding what things we can drop and say those will go into, like, a v2alpha1, because we realize there are too many things that people possibly want to do with this.
E
So we want community feedback, especially around what pain points you have had using the CSR API. I think one of the most common ones is that, in order to use it, you're basically submitting an actual cert-signing request, instead of, for example... a lot of times you just want to have a workload that says: hey, I'm a workload and I have a service; can I have a cert for my service, please? A serving cert, I guess. Right, that's much more
E
like "kube style", you know: ask for something, not necessarily submit an entire... you know, go make your private key, submit your CSR request, all of that. That's much closer to what people actually care about. So that's probably one of the bigger ones, but in general: I've personally written a custom approver for this CSR API, and it's not particularly fun. It's not awful, but it's not the fun experience that I would hope it would be. So if anyone has feedback on that, we'd appreciate it.
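For reference, the beta API under discussion is certificates.k8s.io/v1beta1; using it directly means hand-building a PEM CSR (private key and all) and wrapping it like this (the object name, the truncated base64 payload, and the usages are just an example):

```yaml
apiVersion: certificates.k8s.io/v1beta1
kind: CertificateSigningRequest
metadata:
  name: my-svc.my-namespace
spec:
  # Base64-encoded PEM CSR that the caller must generate themselves;
  # this manual step is the pain point described above.
  request: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURSBSRVFVRVNULS0tLS0u...
  usages:
  - digital signature
  - key encipherment
  - server auth
```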
E
PSPs: I think we caused people great pain when we started discussing this, but we are considering the deprecation of this API. It has been at beta for a long time. There are linked slides where we have gone over a lot of the issues with trying to move it to GA, and the things that it sort of failed to accomplish for us. I don't want anyone to get too worried: we're not even remotely going to remove this without a very suitable replacement and a clear migration path.
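For context, the API being discussed is policy/v1beta1 PodSecurityPolicy; a minimal policy looks roughly like this sketch:

```yaml
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: restricted
spec:
  privileged: false          # disallow privileged containers
  runAsUser:
    rule: MustRunAsNonRoot   # force pods to run as non-root
  seLinux:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  volumes:                   # whitelist of allowed volume types
  - configMap
  - secret
  - emptyDir
  - persistentVolumeClaim
```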
E
So, some type of movement off of PSPs. I mean, if there was a strong desire from the community to try to GA it in some form, I don't think we're against that, but there would have to be a pretty strong push from the community for us to be able to find the people to do the work. There are two things we're looking at as possible replacements. Gatekeeper is a very early-stage project for defining policies that you can apply to Kubernetes. I
E
don't think, at this current point, that it's at the stage where it can replicate what PSPs do for people today, but they're actively investigating how they could make that a possibility. If we had wanted to go a different route: OpenShift has had the SecurityContextConstraints API at a stable version for the last three years.
E
SCCs have the same functionality as PSPs, but more. So that's also been discussed: if SCCs were a standalone CRD that you installed into your cluster, with some type of validating/mutating admission webhook, people could use that to replace PSPs. We could certainly write a controller that converted all your PSPs into SCCs for you; that is certainly an option as well. This one, I know, people have feedback on.
E
It was one of the most requested things that we cover in our deep dive, which we will do; but please feel free to reach out to us. Next, a little bit on dynamic audit. I'm not going to go into too much detail on that; the link there takes you to the various discussions we've been having. The primary thing here is that we're looking at defining all the use cases, based on the various actors who would care about dynamic audit, and making sure that the shape of the API can support them.
E
Like I said, I'm not going to go too much into it, but the docs that we have are extremely robust on this now, in the sense of: what does a cluster owner want, versus a cluster admin, versus a normal user, versus, like, an auditor, and so forth. So if you work in an IT-security-style field, and you manage Kubernetes clusters, and you want to know what's going on in them, this is probably of great interest to you.
E
So now, service accounts. This is probably the place we've been doing the most. If anyone's familiar with the TokenRequest API: Mike also made a retroactive KEP for it. This came out of the container identity working group a while back; this is how we effectively want Kubernetes service accounts to have identities outside of clusters. Traditionally, service accounts have had long-lived tokens, but those are extremely dangerous to start sending out of the cluster.
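The TokenRequest API surfaces in pod specs as a projected service-account-token volume, which yields short-lived, audience-scoped tokens instead of the legacy long-lived ones; roughly (the image and audience are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: client
spec:
  containers:
  - name: app
    image: example/app:latest   # illustrative image
    volumeMounts:
    - name: token
      mountPath: /var/run/secrets/tokens
  volumes:
  - name: token
    projected:
      sources:
      - serviceAccountToken:
          audience: my-external-service   # who may accept this token
          expirationSeconds: 3600         # short-lived, rotated by the kubelet
          path: token
```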
E
If something is trying to validate the service account JWT, it needs to know what public keys to use. So a path that has been proposed is to allow the Kubernetes API server to serve the OIDC discovery doc, which defines a scheme for a well-known endpoint, with a specific JSON structure for its response, that then tells you how to look up the current keys that are being used.
E
Validating against that API, you get into the problem of: all right, I'm some external entity, and I'm validating these tokens as they go by; how do I know when I should reload the keys? I mean, you can continue to keep polling and just hammer the kube-apiserver all the time, and that works, but it's not necessarily the nicest thing to do. So we have been considering adding key IDs to service account tokens. The idea there would be: if you see a token come by that has a key ID that you've never seen before,
E
well, that's probably a good indicator that you should go fetch the current discovery metadata and see whether you can validate this token or not. I raised the point on this that we have always said the service account tokens are opaque, and now we're saying we're going to put a non-opaque key ID inside the token; so maybe it's time to just say that they're not opaque. We've never actually broken the structure of these tokens; they have always been JWTs.
E
There has also been discussion around externalizing the actor that's actually signing the token. The main concern here is that this introduces a new, complex RPC extension point to the kube-apiserver, similar to the Kubernetes KMS API. So I think what we'll probably do right now is try to define what is possible with just dynamic reloading of files, which helps with signer rotation, and then kind of see from there.
E
B
F
Thank you. I have turned on my video, and I will share. I produced slides; I didn't know that producing a doc was an option, or I would have followed that route. I hope we can see them... yes? Thank you. All right, so: SIG Cluster Lifecycle. I am Justin Santa Barbara, a lead of SIG Cluster Lifecycle. SIG Cluster Lifecycle is, for those who don't know, effectively working on the creation, ongoing management, and eventual destruction of Kubernetes clusters, and everything that is involved in running the cluster itself; so, sort of, the components thereof.
F
That is a broad mission. It is one of the biggest SIGs in the community: we have hundreds of contributors, and there are 20-ish subprojects and multiple working groups. It acts as a home both for open source distros, as I would call them, and for open source building blocks for those distros. The building blocks are this idea that we are trying to create standard, reusable components that are modular and can be plugged together to produce a fully working tool to create Kubernetes clusters.
F
We will produce some of those working tools (I'm calling them distros), but of course there are commercial distros as well, and a lot of enterprises, for example, build their own distros. The hope is that we will provide great components that they can choose to assemble into a distro, and give themselves a great head start, so that everyone basically ends up with a better and more consistent Kubernetes experience, whatever tooling they are using.
F
So within our SIG we have, I would say, five primary building blocks. Starting from the bottom: we have image-builder, a new project; kubeadm, which I think everyone is hopefully familiar with; etcdadm, which is managing the etcd cluster; cluster-addons, which is going to manage the add-ons that run on the cluster; and Cluster API, which is going to put a Kubernetes API onto managing the clusters themselves. And then we have at least four, and possibly more, distros that are acting as subprojects: kops, kubespray, and minikube, which hopefully people are familiar with.
F
Historically, the Kubernetes project did not want to produce cloud images. Cloud images are AMIs on Amazon, or OVAs on vSphere; you know, the pre-baked disk images that you boot up and, effectively, Kubernetes appears. As a result of not standardizing, we ended up with at least a dozen ways to build Kubernetes images in open source alone. Image-builder is a project that brings sense to that plethora of ways and tries to identify the best pieces from the various approaches.
F
At least at the moment, image-builder is a tool that enables you to create images, rather than producing images as its output; so you would run the image-builder tooling in your account and get an image in your account, rather than there being an official Kubernetes AMI or an official Kubernetes OVA. This is the newest project, and it is very early days. Many approaches have been merged into kubernetes-sigs/image-builder, and the work right now is around deduplicating and identifying the best bits of everything.
F
So it's not an invisible black box. It's early days for this project as well; the real next steps are to actually get etcdadm integrated into kubeadm and into the distros, and also to work on a lot of the functionality that's missing, you know, self-driving, and also a bunch of functionality at the CLI level, so there's a great opportunity there as well. kubeadm, which bootstraps the actual Kubernetes control plane itself and the kubelets, is our most mature product of the building blocks.
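As a reminder of this building block's shape, kubeadm is driven either by flags or by a declarative config file; kubeadm.k8s.io/v1beta2 was the config version current around this release, and the version and subnet below are just an example:

```yaml
# Used as: kubeadm init --config=cluster.yaml
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.15.2
networking:
  podSubnet: 10.244.0.0/16   # example pod CIDR, handed to the CNI plugin
```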
F
So this is a great way to, in a scalable way, enable kubeadm to be a more reusable building block, so that every distro should be able to use kubeadm. And kubeadm is actually really clever, in that it can both act as a building block of a distro, and it can also, alternatively, run standalone to produce a very minimal cluster. I imagine a lot of people here have brought up a kubeadm cluster, and having both options is a wonderful bit of functionality.
F
kubeadm has done a great job of having a very well-groomed and active backlog for easy contributions, so there are plenty of ways to get involved, and I think, if you want a big impact in kubeadm, integrating with the other building blocks is a great way to get that going; those integrations are in their earlier stages right now. Cluster-addons, again, is early. This is dealing with the installation and management of those additional cluster components: things like CoreDNS, or ingress controllers, or CNI drivers.
F
We are exploring the idea of having operators that enable specification of these add-ons in a declarative way, and that then apply them, sort of maintain their health, and do sensible upgrade behavior on them. The goal of this is not to force everyone to use it with all the building blocks, but to produce an implementation that is so good that distros choose to use it; no one is, of course, forced to use it. This project meets bi-weekly and is prototyping.
F
Cluster provisioning itself was out of scope for core Kubernetes, and we ended up with a dozen ways of doing things; so Cluster API is replacing those with homogeneous CRDs and controllers for Clusters and Machines, building a lot of providers that enable that API to work on various clouds, as well as Docker and bare metal. v1alpha2 is very much in the works, I'd say, if not done, and it brings a greatly improved model for infrastructure and provisioning abstraction, and the train keeps on rolling.
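A Cluster API object from roughly that era looked like the following sketch (group/version cluster.x-k8s.io/v1alpha2; the names and the AWS infrastructure reference are illustrative, since each cloud has its own provider CRD):

```yaml
apiVersion: cluster.x-k8s.io/v1alpha2
kind: Cluster
metadata:
  name: my-cluster
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["192.168.0.0/16"]
  # Delegates cloud-specific provisioning to a provider-owned CRD.
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1alpha2
    kind: AWSCluster        # e.g. the AWS provider's cluster object
    name: my-cluster
```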
F
So, if you want to get involved: help us plan v1alpha3, try it out on your own cloud, document anything you find, and fix any bugs that need fixing. And, as I hopefully have emphasized, we definitely need your help. You know, the puzzle is not yet all green; we want to get there. We want to have all these pieces GA and working well together, and all the distros built on top of all those pieces.
F
There is a lot of work to get there, but although we have all these projects, I think they fit together very well. We believe that the five building blocks are the full set, though image-builder was a relatively recent addition; I think last time I probably said we believed the four building blocks were the full set.
F
So there's always room for another building block, if you think of a missing one, but please do come and help. This doc, which you can reach via the link already in the notes, describes how you can contribute. There are bi-weekly meetings of the top-level project, and then lots of other ways to get involved, including at each subproject: each subproject has its own meetings, Slack channels, things of that nature, and it's all on GitHub.
F
Of course, I do want to give two shout-outs, which I think are particularly appropriate; this is a massive SIG, and so thank you to everyone that contributes, in the many ways that people contribute, and in particular my co-leads: Timothy St. Clair, who did, I think, an amazing job of basically corralling
F
these 20 projects into a sensible and sustainable way of working. In the main SIG Cluster Lifecycle meeting, these projects report in and provide their status; it's sort of a federated approach, and it all works very well. He's giving everyone best practices on how to do backlog grooming, all of that sort of stuff, and helping people get new contributors involved. So he does a wonderful job of that.
F
So thank you to him, and also thank you to Lucas, whom a lot of people probably know from his excellent work on kubeadm and as a cluster lifecycle lead. I believe he's going to be, at least temporarily, stepping away to go to university, so I think we wish him all the best and thank him for everything he's done. And there's another page of links for people that want to know where to find us; but otherwise, that is all I have, and thank you so much. Thank you. Any questions?
B
That's it for this week. Thank you, George. And then a huge shout-out to Guinevere Saenger for helping some folks on my team get started with upstream contributions; when I hunt through the docs and associated issues, I keep running into all sorts of awesome improvements. Sparkles! And that is it: unless anyone has anything else for this week's community meeting, we are done. Anybody? All right, thanks for joining, everyone; have a wonderful rest of your week. See you next time!