From YouTube: Kubernetes Community Meeting 20190425
Description
We have a PUBLIC and RECORDED weekly meeting every Thursday at 10am PT.
See: https://github.com/kubernetes/community/blob/master/events/community-meeting.md for more details.
A: Welcome everyone to the May 16th Kubernetes community meeting. As a reminder, this is a community meeting and it will be posted publicly on YouTube. It's also being live streamed, so please be mindful that what you say is being recorded, and as always, mute when not speaking as a courtesy to everyone else. I am Dawn Foster and I will be your host for today. I am responsible for open source strategy at Pivotal, which lately is mostly our Kubernetes contribution strategy, and I am also an active member of the SIG Contributor Experience group.
A: Perfect, and anyone else who wants to contribute, you should have the doc linked from the meeting notice as well. And with that, I think we will just go ahead and get started. The first thing on the agenda is the demo: we have a Metal³ (metal3) demo, which is bare metal host management for Kubernetes backed by OpenStack Ironic, and Chris Hoge is going to do the demo, so with that I will turn it over to him.
C: Hi everybody. So today we're going to give a quick demonstration of the current status of Metal³, whose ultimate goal is to be a bare metal implementation of the Cluster API. If you're not familiar with the Cluster API, it's a Kubernetes API extension that allows you to control Kubernetes clusters from Kubernetes, and in this case we're trying to build a bare metal implementation of that.
C: This demo is pre-recorded because, since we're working with bare metal, it takes longer than ten minutes to bring all of this up, but it is done on real hardware: I have a laptop plugged into a three-machine system, all running on bare metal through a switch. The way this is built is that at the bottom layer we have OpenStack Ironic, which is controlling our bare metal infrastructure, and then there is this first script right here.
C: With it we're setting up the keys that we're going to inject into the system, as well as downloading some images that we're going to use. Ironic is a pretty small set of services that we're going to be running in Podman in this example, but you could certainly run it any way that you want, including in Kubernetes; here we're just starting up the basic Ironic services. Ironic is the bare metal management system for OpenStack. And then there is a third script.
C: What that third script does is start up the BareMetalHost custom resource definition, which interfaces directly with OpenStack Ironic, and then an implementation of the Machine custom resource type, which integrates with the BareMetalHost. So we're going to run these scripts.
C
We
download
the
ironic
containers,
you
know
and
I'll
be
kind
of
the
other
things
that
we
need
to
get
set
up
and
then
what
we're
gonna
do
with
here
at
the
very
top
of
the
screen,
is
you're
going
to
see
we're
gonna
watch
the
we're
gonna
watch,
the
Machine
custom
resource
definition,
the
bare
metal
hosts,
custom
resource
definition
and
also
watch
OpenStack
ironic
directly,
but
we're
primarily
going
to
be
interacting
with
the
bare
metal
hosts
type
and
the
machines
types.
So
all
the
interaction
with
OpenStack
ironic
is
going
to
happen
through
kubernetes.
C: Next, we start up the custom resource definitions for the bare metal hosts and for the machine type. You'll also notice that I misspelled something, and that can end up in there too, so I'm going to go back up and fix that; even though the demo isn't live, all of the mistakes are certainly still there. At the point of starting this back up, the server doesn't have a resource type for machines or bare metal hosts.
C
And
you
see
that
they
switch
from
not
having
the
resource
type
to
nor
we
didn't.
You
know,
resources
being
found
okay,
so
the
next
thing
we
have
to
do
is
is
enroll
bare
metal
machines
into
this,
and
it's
pretty
basic
to
do
this.
You
you
you,
you
create
a
manifest
with
the
types
that
includes
a
secret
to
connect
to
the
IPMI,
as
well
as
an
address
for
the
IPMI
and
a
boot
MAC
address
that
you
want
to
use.
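For reference, a minimal sketch of what such an enrollment manifest looks like in the metal3 BareMetalHost API. The exact API group and field names may have differed in the version demoed (very early builds used the metalkube.org group), and the names, addresses and credentials below are made-up placeholders:

    apiVersion: metal3.io/v1alpha1         # earlier builds used metalkube.org/v1alpha1
    kind: BareMetalHost
    metadata:
      name: node-0
    spec:
      online: true
      bootMACAddress: "00:1a:2b:3c:4d:5e"  # the MAC the host will be provisioned against
      bmc:
        address: ipmi://192.168.111.10     # IPMI endpoint of the physical machine
        credentialsSecret: node-0-bmc-secret
    ---
    apiVersion: v1
    kind: Secret
    metadata:
      name: node-0-bmc-secret
    type: Opaque
    stringData:
      username: admin                      # placeholder IPMI credentials
      password: changeme

Applying a manifest like this is what enrolls the physical machine with the controller, which in turn registers it with Ironic.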
C
That's
the
address
that
you're
going
to
be
the
the
MAC
address
that
you're
gonna
be
provisioning
against
now
you're
gonna,
see
on
the
lower
left
here.
I've
set
up
a
video
camera
to
watch
the
console
output
of
one
of
the
nodes,
and
so
you
can
actually
see
kind
of
the
whole
process
of
how
the
booting
and
the
provisioning
goes.
C: We have three machines that have been created and are coming under the control of Ironic. The first thing that Ironic does is boot all of the machines and try to discover what resources they actually have inside of them. So this is going to PXE boot what's called the ironic-python-agent, and it's going to go through this entire process of bringing the agent up, communicating back with the Ironic inspector service, and discovering all of the hardware that's available on the node.
C
This
has
been
sped
up
so
that
we
don't
have
to
watch
the
whole
thing
and
and
and
clipped
it
down
a
little
bit,
but
you
can
kind
of
see.
You
know
some
of
the
things
that
are
happening
for
that
discoverable
process,
and
then
you
see
the
systems
go
from
inspection
to
being
manageable.
You'll
also
notice
that
the
top
note
is
uninspected
weight.
C: And so we're going to create two machines, a machine-1 and a machine-2, and again, almost immediately, we see that machine-1 and machine-2 are created as Machine types. That is then pushed down to the BareMetalHost type, which is then pushed down into Ironic, which is going to start powering on the machines. Just as we went through an inspection cycle, now we're going to go through a cleaning cycle.
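For context, the Machine objects being created here follow the v1alpha1 Cluster API shape of that era; a rough sketch is below. The provider-specific portion under providerSpec.value is only illustrative, and the real metal3 provider fields (such as the image URL and user data reference) may be named differently:

    apiVersion: cluster.k8s.io/v1alpha1
    kind: Machine
    metadata:
      name: machine-1
    spec:
      providerSpec:
        value:
          # provider-specific settings; names and values here are illustrative only
          image:
            url: http://172.22.0.1/images/centos7.qcow2
          userData:
            name: worker-user-data

The bare metal actuator watches these Machine objects and claims a matching BareMetalHost for each one, which is the "pushed down" step described above.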
C
A
lot
of
these
cycles
are
optional,
so
you
don't
necessarily
have
to
go
through
this
entire
process
of
inspecting
and
cleaning.
If
you
trust
the
the
systems,
you
know
the
operating
systems
that
are
going
on
here
on
your
machines,
you
can
you
can
skip
these
steps
and
have
faster
boot
times.
But
again
we
boot
the
ironic,
the
ironic
Python
agent,
we're
gonna
go
through
a
cleaning
process
where
all
of
the
discs
are
or
wiped
clean.
C
Then
we're
going
to
go
through
a
deployment
process
where
we're
going
to
grab
the
the
CentOS
image
that
we
had
to
find
this
machine
to
use
which
is
taken
from
a
an
HTTP
server.
That's
that's
running
as
part
of
the
ironic
service,
so
we've
gone
it
we've
gone
through
our
cleaning.
Now
we're
going
to
go
through
the
the
deployment
phase.
C: And then, once the deployment phase is done, we're actually going to be able to boot up into the machines that we've defined. Now, Metal³ is still in heavy development right now, and so although we have this Machine type implemented, we don't actually have the cluster actuators implemented yet. But this is moving pretty quickly, and we'll probably see those within weeks.
C
It's
hard
of
ejectment
work
will
actually
be
done,
but
the
hope
is
to
you
know,
being
able
to
deploy
complete
kubernetes
clusters
on
top
of
bare
metal
at
the
end
of
this.
Also,
if
you
have
a
different
bare
metal
management
system
that
you
like
metal,
cubes
is
meant
to
be
entirely
pluggable,
so
you
can,
you
know,
bring
your
own
management
system
if
you
like,
okay,
so
looking
over
here,
the
machines
are
now
deployed
and
active.
C
We
wait
for
them
to
boot
up
and
then
we
can
SSH
into
them
using
the
the
custom
SSH
key
that
we
had
defined
as
part
of
the
machine,
and
you
see
that
we
can
log
in
there
and
it's
all
and
it's
ready
to
go
so
that
was
a
pretty
quick
run-through.
Does
anybody
have
any
questions?
Is
there
anything
I
can
answer
about
this
in
the
few
minutes
we
have
remaining?
How
long
would
that
have
actually
taken
without
the
speed
up?
This
took
30
minutes.
Okay,.
C: So, depending upon the driver that you use, if the driver for your hardware supports firmware updates, then Ironic gives you a way to manage those updates, and you can interact with the Ironic API directly to do that. But it depends on the driver.
D: We are going to have a face-to-face session on the Monday of the contributor summit, and then, other than that, no major milestones. But coming up very soon after KubeCon, the following week, on the 8th we're going to be starting burndown, and we have our code freeze on May 30th. And that is all I've got to share; let me know if there are any questions.
A: So if you set your status to busy, the bots will not automatically request a review from you. We're just asking people to please take care in how they use this busy status and be considerate of your fellow reviewers, because if you're not doing reviews because your status is set to busy, then someone else is going to get most of them. But it is a great way if you're on holiday for a period of time, or if you know that you've got some big deliverable at work.
E: The idea here is that we want to introduce a new feature in Kubernetes that allows users to specify what topology domain a pod can be spread across. So basically you can, for example, say that I want my pods to be spread among zones, among nodes, or among any arbitrary topology label that you put on your nodes.
E: Today, hard pod anti-affinity only allows you to put one pod per topology domain. So, for example, if you have 10 pods and you have two zones, there was no way to tell Kubernetes to spread these pods among the zones: with a hard anti-affinity on these pods, only two of them would be scheduled, one in each of the two zones. Now we are introducing this new feature that allows you to specify even spreading, so basically, no matter how many pods you have, they are automatically spread among the available topology domains.
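As a hedged illustration, this is roughly how the feature ended up being expressed as a pod field (topologySpreadConstraints). The field and label names below reflect the eventual upstream API; the proposal under review at the time of this meeting may have used different names:

    apiVersion: v1
    kind: Pod
    metadata:
      name: web-0
      labels:
        app: web
    spec:
      topologySpreadConstraints:
      - maxSkew: 1                                # pod counts per domain may differ by at most 1
        topologyKey: topology.kubernetes.io/zone  # spread across zones (any node label works)
        whenUnsatisfiable: DoNotSchedule          # hard constraint, analogous to hard anti-affinity
        labelSelector:
          matchLabels:
            app: web
      containers:
      - name: web
        image: nginx

Unlike one-pod-per-domain anti-affinity, this lets all 10 pods schedule while keeping the zones balanced.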
E
One
issue
that
we
faced
is
that
we
managed
this
cap
early
and
we
got
it
on
track
for
115.
Some
of
our
contributors,
actually
very
long,
has
spent
quite
a
bit
of
time
and
preparing
all
the
cab.
Api
changes,
code,
changes
and
everything
everything,
but
we
recently
notice
that
API
review
bandwidth
is
a
problem.
So
it's
at
this
point.
Basically,
this
feature
is
at
risk
of
missing
color
trees
deadline
only
because
API
magnet
is
limited
and
there
are
a
very
small
number
of
people
who
can
actually
approve
API
changes.
E
So
that's
why
I
reached
out
to
the
community
and
ask
for
help
and
some
solution.
At
least
it's
not
possible
for
this
release,
maybe
for
the
next
few
releases
I
wonder
if
we
can
find
some
solution
for
this
problem.
In
fact,
I
have
to
tell
that
Jordan
Leggett
has
gone
above
and
beyond
to
try
to
answer
to
address
this
problem
by
writing
documents
and
training,
API
reviewers
in
different
SIG's,
so
that
different
cichlids
and
sig
approvers
can
review
the
API
in
their
own
domains.
E
F: Okay, so what is SIG Cluster Lifecycle and what do we do? We try to simplify the creation, configuration, upgrades, all that kind of stuff that is tied to the lifecycle of a cluster, as the name suggests. As the quote here says, we spend a lot of time trying to balance user experience with power and flexibility.
F: We have a lot of members; it's a pretty large SIG. This is the overview of what we do and, as you can see, there are lots of moving parts. I will mention a couple of these in more detail, but the most mature one so far in this stack is kubeadm. In this diagram we show that we have defined roughly three layers to begin with, and we're trying to create a project for each of these layers.
F: kubeadm is the middle layer. With kubeadm we go from the place where we have infrastructure to deploy on, for example physical nodes or Raspberry Pis or whatever machines you have, to the place where you have Kubernetes bootstrapped, the API up and running, and the ability to join nodes. kubeadm has been generally available since 1.13, but it's not the full solution.
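To make the kubeadm layer concrete, here is a minimal sketch of a kubeadm configuration file of that era (the v1beta1 config API; the release number and CIDRs are placeholders):

    apiVersion: kubeadm.k8s.io/v1beta1
    kind: ClusterConfiguration
    kubernetesVersion: v1.14.1        # placeholder release
    networking:
      serviceSubnet: 10.96.0.0/12     # example service CIDR
      podSubnet: 192.168.0.0/16       # example pod CIDR

You would pass this to kubeadm init --config to bootstrap the control plane on a machine you already have.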
F: There are also different, more end-to-end solutions in SIG Cluster Lifecycle, like minikube, which gets you a running single-node cluster on Windows, Mac or Linux, or kubespray, which is one of the multi-node projects that we sponsor, and we have a couple of others too, kops for example. However, going back to the lower layer here, we have the Cluster API. With the Cluster API we try to manage the nodes powering Kubernetes themselves through some kind of common interface.
F: The implementation is up to the broader community to do, and while SIG Cluster Lifecycle can sponsor these projects, it's not something we build into some kind of catch-all product, so the Cluster API is purely an API which others can implement. But when we have this API, we can declaratively define the cluster state and start doing things like reconciling it with some kind of operator. It uses spec and status like the rest of Kubernetes, and it allows for GitOps workflows, for example, over all of the nodes that are running in your cluster.
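A minimal sketch of that declarative cluster state, using the v1alpha1 Cluster type from that era (names and CIDRs are placeholders, and real usage also involves provider-specific objects not shown here):

    apiVersion: cluster.k8s.io/v1alpha1
    kind: Cluster
    metadata:
      name: example-cluster
    spec:
      clusterNetwork:
        services:
          cidrBlocks: ["10.96.0.0/12"]
        pods:
          cidrBlocks: ["192.168.0.0/16"]
        serviceDomain: cluster.local

A provider's controller compares this spec against the real infrastructure and reconciles the difference, which is what makes GitOps-style workflows possible.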
F: It's really early days still for this project. We released the first version, 0.1.0, some months ago; it contains the first v1alpha1 API, but now we have loads of work streams towards v1alpha2, including these four different kinds of work that are to be done.
F: So if you're interested in trying to find a common interface for defining the machines and clusters that are running Kubernetes themselves, please get involved; for example, I noticed that in the earlier demo the Cluster API was actually utilized, and that's really what we want to see. You can get involved through this link, and I've shared my slides in the meeting notes.
F: Feel free to join a meeting; there's a KEP out there, and we have a repo for exploring and doing some proof-of-concept code. We're also doing the same with etcdadm. In all the Kubernetes installations that are out there, you have to somehow manage etcd.
F: In all these 90-plus certified Kubernetes distributions, there is the same kind of code that is somehow running etcd, and with etcdadm we try to collect this knowledge into a Kubernetes-specific tool that can create an etcd cluster for you, so that it works well with the many requests coming from the API server. There's also a KEP for it, and this is still in early days; the first release will be coming later this summer, we think.
F: A working group that we sponsor is WG Component Standard. We faced the problem in the core components that they were inconsistent in how they are configured: how they generally introduce their flags, how they're set up, what kind of HTTP or HTTPS endpoints they have, what authentication they have, all that kind of stuff. And it's pretty hard to write a component that works like the Kubernetes components if you're out of tree, if you're not inside of the Kubernetes main tree.
F: We have a problem with flags today in the different components. This is not directly a SIG Cluster Lifecycle concern, it's also a concern of the other SIGs that own components, but we have a lot of flags, too many flags. It becomes painful, especially when upgrading, and when doing more advanced things we basically can't use key-value pairs like flags; we need some kind of JSON or YAML resource schema, and this is what ComponentConfig is all about. And once it is declarative, we can generate different API specifications from it.
F: So, in the end, we want all components to do this. Some do it already, like the kubelet to some extent, but most components do not. The end goal is that we can just have a file for the configuration of each of the different components, and this is solely for starting the component; it's not an API served from the API server.
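The kubelet is the clearest existing example of ComponentConfig: it can already be started from a declarative configuration file instead of a long list of flags. A small sketch, with placeholder values:

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    clusterDomain: cluster.local
    clusterDNS:
    - 10.96.0.10          # example cluster DNS address
    maxPods: 110

This file is passed to the kubelet with its --config flag, which is the pattern the working group would like every component to support.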
So, tying this all together: we have a lot of different projects, starting from the bottom with etcdadm, which is pre-alpha.
F: We have kubeadm, which will eventually depend on etcdadm; kubeadm is GA at the moment and has its own functionality. Cluster add-ons builds on top of kubeadm, kind of, and then there is the Cluster API; implementing the Cluster API are all the different cluster provisioners that sit close to, and are sponsored by, SIG Cluster Lifecycle. And across all the components in this stack we have the different ComponentConfigs, the declarative schemas of these components. So there are a lot of pieces needed to get this puzzle in place, and we really want your help in contributing.
F: We actually have a YouTube video with the new contributor onboarding, which you can see there. We have bi-weekly meetings, plus loads of breakouts; I think there are like ten breakout meetings per week. The pattern we use is that we have office hours for kubeadm, Cluster API, kops, kubespray, etcdadm and a lot of others.
G: He looks frozen. Okay, I'm trying to remove him; it's not working, one sec.
C: Okay, well, I can start in on the update, and when we get back to the slides we'll just turn those back on, if that's possible. Yeah.
C: Great. So hi everyone, thanks again for your time. My name's Chris Hoge, from the OpenStack Foundation, and I'm one of the co-leads for SIG OpenStack. What we do is try to coordinate the cross-community efforts of the OpenStack and Kubernetes communities, and we actually have a SIG Kubernetes within the OpenStack community that shares the same members and covers a lot of the same things. What we do is try to cover three distinct use cases.
C: OpenStack is a free and open source deployment platform for Kubernetes, with integrations given by cloud-provider-openstack as well as different infrastructure services for Kubernetes clusters, like we saw earlier with bare metal, but we also have identity plugins, plugins for secrets and storage, as well as networking. There's also a collection of infrastructure applications to run on Kubernetes, and probably the biggest projects there are OpenStack-Helm and Airship, which are used as a common deployment framework for OpenStack on Kubernetes, as well as different integration points.
C: And this is the week that we have our meetings on, so we'll skip next week because of KubeCon, have a meeting after that, and so on. We have two new co-leads within the SIG: there's Dede Sharma from NEC Technologies India, as well as Christophe from inovo cloud. (Sorry to interrupt, Chris, but you can share your slides now.) Okay, great.
C: Right now, some of the biggest efforts that we're working on are maintaining the cloud-provider-openstack implementation and its drivers. We're also working on the Cinder and Manila CSI drivers, which are a little bit distinct from the cloud provider, as well as the Keystone authentication and authorization webhook, an Octavia ingress controller, a Barbican secrets plugin, and a cluster autoscaler. Kind of wrapped into all this effort is removing the in-tree provider and storage provider from Kubernetes.
C: We also have tools for hosted Kubernetes on OpenStack with Magnum, which has had huge success in production right now, particularly at CERN but also in some public clouds, as well as self-service deployment tools. We have two Cluster API implementations in the works: one for OpenStack clouds specifically, and another, which we demonstrated earlier, for bare metal with Ironic using Metal³.
C: Really, one of the biggest goals is removing the in-tree code. This work is in collaboration with SIG Cloud Provider, and we're basically ready to move on a lot of this, particularly because we've had our out-of-tree provider for over a year now and we don't really have any plans to take the in-tree provider any further.
C: Another big project that we have, once we're done with the extraction, is breaking out our mono-repo. We have a mishmash of a mono-repo that has a ton of these drivers, but we have a few projects that are associated with other SIGs, particularly the Cluster API providers. So what we want to do is break up the mono-repo into individual projects, like a common authentication library, the cloud provider library and the CSI providers, but also look at consistently naming all of the projects.
C: You know, to make them consistent and discoverable, and this is something that we really want to work with SIG Cloud Provider on, so we can produce a common standard across all of the cloud providers, so that at the end of the day users, no matter what cloud they're using, will be able to find the tools they need to have Kubernetes operate with their cloud.
C: On the OpenStack provider for the Cluster API, we currently implement v1alpha1 and can install Kubernetes versions 1.13.5 and higher, up to 1.14. We rely upon the OpenStack releases Pike, Queens, Rocky and Stein for this to work, and it's a pretty fast-paced development effort; we're looking for more developers to make the implementation a bit more robust. We also have a kops implementation, which, last I checked, was currently in alpha, as well as the bare metal provider that we demonstrated earlier.
C: We've also been active in testing, conformance and integrations, so we continuously report our test results on the cloud provider to TestGrid, and we're standing up new Magnum conformance results; that's in progress. One other exciting thing that recently happened is that OpenStack Zun was added as a provider for the Virtual Kubelet project. Zun is an OpenStack container service that gives you access to containers through an API, backed by any number of container engines.
H: So it includes authentication and authorization, and also other features like auditing and security policy. That might sound a lot like a security SIG, but the reason we don't go by that moniker is that we really feel that security should be the responsibility of all of the SIGs, each securing the specific components that they're working on; we in SIG Auth just try to provide tooling to make that easier for the community.
H: In order to get to beta, we need to make some performance improvements and add some scalability testing, and we're talking about a new policy language for auditing, sort of an evolution of the old one. That isn't a beta blocker and might land past 1.16, but it's something we're working on. There are also some webhook requirements around versioning and authentication that might need to happen before dynamic audit logging gets to beta.
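For context, the existing audit policy format that the new language would evolve from looks roughly like this (the audit.k8s.io/v1 Policy API; the rules shown are just an example):

    apiVersion: audit.k8s.io/v1
    kind: Policy
    rules:
    # record secret and configmap access at the metadata level only, to avoid logging payloads
    - level: Metadata
      resources:
      - group: ""
        resources: ["secrets", "configmaps"]
    # record everything else with the request body included
    - level: Request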
H: Node-scoped DaemonSets was a proposal we were targeting for 1.15, but after some discussion the SIG decided that we want to think about a more generic solution, possibly around authorization constraints, rather than just trying to expand the scope of the node authorizer to apply to arbitrary workloads.
H: On the idea of node-scoped DaemonSets, maybe it's easier to first describe the node authorizer. Prior to, I think, 1.8, when we introduced the node authorizer, a kubelet had a bunch of permissions: it needs to do things like read and update pods and read secrets, and because we didn't have any way of scoping those permissions to a specific node, it meant that if you managed to compromise a kubelet credential, you essentially had root in the cluster.
H: Now, with the node authorizer and the NodeRestriction admission plugin, we maintain a graph of all of the resources that are required by pods running on a specific node, and so we can actually ask: does this kubelet have any need to read this secret, or is it trying to read something that it shouldn't actually need? So we can restrict it to only the things already running on that node. The problem is that you have DaemonSets that are doing similar things.
H: They need to update pods on the node, or maybe do something like rotate, I don't know, audit secrets running on the node, or something like that. They are going to require similar permissions, but because we don't apply the same restrictions to DaemonSets, it means that if you compromise the node, then you can get access to those things. So that was kind of the motivation.
H: As a short-term fix, we're looking at basically reusing the kubelet's credentials for DaemonSets that need those sorts of privileged permissions, and so, in doing that, essentially going through the node authorizer for them. Next, bound service account tokens are sort of service accounts v2, or service account tokens v2.
H: These add things like audience scoping on service account tokens, which lets you say that this token is only good for talking to this specific component, so scoping it to a specific audience, and also time-bounding those tokens. We want to make these the defaults eventually; we're working on rolling over a bunch of the core controllers to using these new service account tokens, but there are still some backwards-compatibility issues to work out before we actually move to them by default.
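A hedged sketch of how a workload requests one of these bound tokens today, via a projected service account token volume (the audience, expiry and paths below are placeholders):

    apiVersion: v1
    kind: Pod
    metadata:
      name: token-demo
    spec:
      serviceAccountName: default
      containers:
      - name: app
        image: nginx
        volumeMounts:
        - name: bound-token
          mountPath: /var/run/secrets/tokens
      volumes:
      - name: bound-token
        projected:
          sources:
          - serviceAccountToken:
              audience: my-component    # token is only accepted by this audience
              expirationSeconds: 3600   # time-bounded; the kubelet refreshes it
              path: token

This is the TokenRequest-backed projection that the audience scoping and time bounding described above build on.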
H: And yeah, in terms of upcoming themes, as you can see from these updates, we're working on maturing a lot of the features we have in progress and trying to get everything to GA. The one other thing I wanted to mention is that we've been talking a bit about in-tree policies, so things like PodSecurityPolicy, which is a big one; there are also proposals out for scheduling policies and metadata policies.
H: With the introduction of admission webhooks, a lot of these things become possible out of tree, and so we're talking about what the role of the in-tree policies should be, and whether they should exist at all. I also wanted to give a shout-out to the Gatekeeper project, which is integrating the Open Policy Agent semantics that let you describe these really generic policies; that's one possible solution we're looking at for general policy management in Kubernetes.
H: New contributors are always welcome. We're trying to do a better job of issue triage in our bi-weekly meetings, so hopefully we'll be tagging some more issues with 'good first issue' or 'help wanted'; reach out to us on Slack if you are looking for how to get involved. Thanks.
I: Hi everyone, a quick update for KubeCon Barcelona. It is the biggest contributor summit ever: we have roughly 330 people registered and approved for the contributor summit. We are kicking things off this Sunday with a contributor celebration, and then on Monday we have two different workshops, a 101 workshop and a 201 workshop, and a bunch of face-to-face meetings.
A: Thanks, Janice. So the last thing that we have is some shout-outs this week. As a reminder, the shout-outs go in the shout-outs Slack channel, which is where I pulled these from. Bob has a shout-out to Claudia King, Felipe and Aviva for localizing the contributor cheat sheet into Korean, Portuguese and Bahasa Indonesia, and to Ruie for organizing the whole effort. So thank you all for doing that.
A: Jonas has a couple of shout-outs: a huge shout-out to Paris, myself, George dub Geils, a hora code, Ranger and Mr. Bobby Tables for an amazing job planning out the Kubernetes contributor summit in Barcelona over the past few months, and Jonas has another enormous shout-out to Tim Pepper and Gwen Singer for stepping up and taking on the role of workshop leads for the Kubernetes contributor summit Barcelona, which, I can say, is a pretty massive effort.