From YouTube: Kubernetes Community Meeting 20190228
Description
We have a PUBLIC and RECORDED weekly meeting every Thursday at 6pm UTC. The Kubernetes community meeting is intended to provide a holistic overview of community activities, critical release information, and governance updates. It also provides a forum for discussion of project-level concerns that might need a wider audience than a single special interest group (SIG).
Check this out for more information: https://github.com/kubernetes/community/blob/master/events/community-meeting.md
A
Good afternoon, good morning, good evening, wherever you may be. Thank you for joining us for this week's Kubernetes community meeting. A friendly reminder: this meeting is recorded, and it is also streamed live, so if there's something you want to say, make sure you want to say it to everyone in the world. There's also a code of conduct; please be mindful of that as well. I think that's it for me. So without further ado, Dave Strebel will run the demo today on the Kubernetes policy controller.
B
There we go. My name's Dave Strebel, I'm an open source architect at Microsoft, and today I'll talk about and demo Gatekeeper, which is a policy controller for Kubernetes.
B
So if you look at what Gatekeeper is and where it came from: there were two projects announced back around the October time frame, the Kubernetes policy controller and one called Gatekeeper. They both solved roughly the same issues around policy, but they were separate projects, so they came to be merged under the Open Policy Agent organization, as Open Policy Agent is really the magic behind the policy controllers.
B
They both had somewhat different directions, but the cool thing here, I think, is that the community and ecosystem consolidated: they were trying to solve the same problems, and that also brought in a lot of other companies and end users collaborating on this policy controller, which is going to be officially called Gatekeeper. This is in pretty heavy development, taking some of the stuff from the previous projects and also doing some refactoring on it.
B
Some of the stuff I'll show in the demo today is based on the older policy controller, but I will talk about what the components of this are. The first thing to know is that the magic behind it is Open Policy Agent. If you're not familiar with Open Policy Agent, it is a CNCF-hosted project and a general-purpose policy engine, so it's not just for Kubernetes. There are end users out there using it for Kubernetes: Capital One, which did a KubeCon talk on how they're utilizing it, and there's also Netflix.
B
Netflix uses it for protecting SSH into their Linux hosts. It can also be used for protecting different APIs, like the Istio API, Linkerd, Cloud Foundry, things like that. Today I'll talk about what it provides you from a Kubernetes standpoint. What it actually uses is a declarative policy language called Rego, which descends from what was long ago called Datalog, and here it's really focused on protecting the Kubernetes API.
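To give a feel for the language, here is a minimal illustrative sketch of a Rego admission rule. It is not from the demo: the package name, the AdmissionReview input shape, and the privileged-container check are assumptions, following the common style for OPA Kubernetes admission policies.

```rego
package kubernetes.admission

# Deny any pod that asks for a privileged container.
# "input" is the AdmissionReview JSON sent by the API server;
# "msg" is the message returned to the user on denial.
deny[msg] {
    input.request.kind.kind == "Pod"
    container := input.request.object.spec.containers[_]
    container.securityContext.privileged == true
    msg := sprintf("privileged container %q is not allowed", [container.name])
}
```

A rule like this is loaded into OPA and evaluated against the admission request JSON; if any `deny` result is produced, the request is rejected with that message.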
B
So some basic scenarios for the actual policy controller — a few different things it does are admission, authorization, and audit. These can answer different questions, like maybe I want to whitelist or blacklist different registries, so an image has to come from a specific URL, and if that doesn't match, I'll invalidate it. This all ties in through admission control, being able to utilize things like mutating webhooks and validating webhooks.
B
Now
we
can
also
do
things
like
not
allow
conflicting
hosts
for
ingress
is
so
maybe
we
have
the
same
host
name,
that's
defined
in
our
object
across
different
namespaces.
It
could
also
deny
requests
like
that.
We
can
also
mutate
objects
on
the
fly
too.
So
if
somebody
submits
a
object
that
has
a
specific
label
or
annotation,
it
could
also
mutate
that
object
to
add
something
of
user.
X
submits
a
object
and
it
applies
label
Y
for
their
department.
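As a hedged sketch of what the conflicting-ingress-host check might look like, following the pattern in OPA's Kubernetes admission tutorial. The `data.kubernetes.ingresses` document is an assumption: it presumes kube-mgmt has replicated existing Ingress objects into OPA.

```rego
package kubernetes.admission

# Deny a new Ingress whose host collides with an existing Ingress
# in a different namespace. Existing ingresses are assumed to be
# replicated into data.kubernetes.ingresses by kube-mgmt.
deny[msg] {
    input.request.kind.kind == "Ingress"
    newhost := input.request.object.spec.rules[_].host
    oldhost := data.kubernetes.ingresses[other_ns][other_name].spec.rules[_].host
    newhost == oldhost
    other_ns != input.request.namespace
    msg := sprintf("ingress host %q conflicts with ingress %v/%v", [newhost, other_ns, other_name])
}
```

Note that the rule compares the incoming request against previously replicated cluster state rather than querying the API server directly.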
B
So, the components of this — there are a couple of different components. If you look at it, Open Policy Agent is kind of the heart of it: it's what's doing the validation against the different policies you define. Then there's the kube-mgmt piece. Today this isn't based on CRDs, but it will be, so it's watching CRDs for the policies, and it's also providing the Kubernetes object watcher, which replicates the data being validated.
B
So it's just looking at the JSON data and evaluating that against the policy that's been defined. Then the policy controller is what's doing the query against Open Policy Agent for anything like audit, admission and — I think I forgot — mutation. It can also do audit, so I can look at a current environment and audit it against my policies without enforcing those policies. Alright, so on to the demo.
B
For the demo, there are two different things I'll quickly show. The first is a container image whitelist. Essentially, this is a Rego policy that I've defined here. The first thing to look at is that this is a deny rule. Then it's looking at the type of resources I want to apply this to: it's looking at any object defined as a pod, and it's looking at all namespaces.
B
The resolution is whatever message it's going to return to the user if the request is actually denied. So if I go ahead — I've already deployed the policy agent; you can see it creates a few different things, like some cluster roles and bindings, and it creates a service, a secret, and config maps. Today, the policies I'm going to show are actually defined as config maps, so you can see I have, for example, an annotation policy and an image policy. For the image whitelist, images need to come from the Acme Corp registry, or at least start with that (see the sketch below).
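A rough sketch of what such an image-whitelist deny rule might look like, in the style of the older kubernetes-policy-controller that this demo is based on. The rule schema (`id`/`resource`/`resolution`) and the `matches` helper follow that project's conventions as an assumption and may differ in current Gatekeeper; the registry URL is illustrative.

```rego
package admission

import data.k8s.matches

# Deny pods whose container images do not come from the whitelisted registry.
deny[{
    "id": "container-image-whitelist",        # identifies the violation type
    "resource": {"kind": "pods", "namespace": namespace, "name": name},
    "resolution": {"message": "pod contains an image from an unapproved registry"},
}] {
    matches[["pods", namespace, name, matched_pod]]   # pods in all namespaces
    container := matched_pod.spec.containers[_]
    not startswith(container.image, "registry.acmecorp.com/")
}
```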
B
So I can whitelist which registries I actually want to allow users to pull from. The second one is the annotation policy. What this policy is doing is providing a patch on the actual annotation, and it's going to match this object: any time it sees an annotation with this test-mutation key, it's also going to add an annotation foo with the value bar (sketched below). So let me go through this one.
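The annotation mutation can be sketched the same way: in the older kubernetes-policy-controller, mutations were Rego rules that emit a JSONPatch. The `patch` rule schema below is an assumption based on that project's documented examples and may not match today's Gatekeeper.

```rego
package admission

import data.k8s.matches

# When an object carries the "test-mutation" annotation, emit a JSONPatch
# that adds the annotation foo=bar, mirroring the demo's behavior.
patch[{
    "id": "conditional-annotation",
    "resource": {"kind": "pods", "namespace": namespace, "name": name},
    "patch": {"op": "add", "path": "/metadata/annotations/foo", "value": "bar"},
}] {
    matches[["pods", namespace, name, matched_object]]
    matched_object.metadata.annotations["test-mutation"]
}
```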
B
So in here, if I go and get the metadata now: once I annotated the object with test-mutation: true — that's the annotation I added up here — it also automatically added foo: bar. A good example of where this helps: maybe you want to allow users to only deploy internal load balancers rather than being able to deploy publicly exposed load balancers. In the majority of your cloud environments and other environments, doing that is going to be through an annotation.
B
That would be a good example where, when somebody submits an object for a service with the type LoadBalancer, it will automatically annotate it so it only gets an internal load balancer. With mutations you do have to be cautious, because when you automatically mutate an object, your end user doesn't always know what the desired state is going to be, because it's outside of what they defined. So you do have to be careful with those. Alright, so that shows a quick demo.
B
Lastly, please get involved in this. I think this addresses a huge pain point that a lot of end users in the Kubernetes community have, from a lot of people I've talked to, and it would be great to see a whole repository of these different policies, because I think a lot of the policies could be applied across different types of end users and their environments. You can go to openpolicyagent.org — that's where you'll find all the documentation and tutorials.
B
They also have a Slack channel with a lot of activity in there, so if you're just getting started with this — especially since the policy language is usually the learning curve — there are tons of people to help you out. They also hold bi-weekly meetings talking about the policy controller and Open Policy Agent. And that is all I have.
A
Awesome. Thank you, Dave, appreciate it. We don't really have time for questions, but there's a notes doc for the demo, Dave — you could drop your slides in there and in the community meeting notes; that'd be great. This was excellent. Next up is his royal beardness, Mr. Aaron Crickenberger, for our release updates.
C
Hi everybody, I am Aaron of SIG Beard, and today is your Kubernetes 1.14 release update. This is week 8. This week's cat t-shirt is the cat that is too cool to look back, walking away from an explosion, because guess what, folks: we are in burndown. I also prepared slides for today; let's see how long this works. Okay, so share the thing, go to the thing, minimize that, and try to present. Okay, so welcome to burndown, everybody.
C
What
is
burned
down.
I
can
click
this
link,
and
it
will
take
me
to
our
wonderful
release,
114
schedule
and
it
talks
about
how
the
fact
that
now
we
want
to
burn
issues
down,
we
want
to
focus
on
fixing
bugs
eliminating
test
Lakes
and
just
in
general,
stabilizing
the
release,
we're
going
to
encourage
people
who
are
on
the
hook
to
help
fix
these
things
to
start
showing
up
to
these
meetings.
They
are
at
present
occurring
Monday,
Wednesday
and
Friday,
and
you
can
get
a
meeting
invite
for
these
by
going
by
joining
the
kubernetes
sake.
C
So
some
of
you
may
feel,
like
things
are
going
to
get
a
little
intense
and
what's
the
deal
with
the
release
team
going
around
and
poking
all
of
the
things.
Well,
it's
because
we
actually
want
to
get
the
release
out
the
door,
so
we're
really
interested
in
taking
a
close
look
at
whether
or
not
things
are
really
ready
to
ship.
You
know
whether
they're
really
documented
or
whether
they
really
have
tests
whether
you've
really
thought
about
the
upgrade
downgrade
considerations.
C
So,
let's
go
to
the
enhancement,
spreadsheet,
we're
clear
and
the
wonderful
enhancements
team
have
been
going
through
and
poking
at
the
I
believe
it
is
now
32
enhancements
down
from
the
original
40
planned
enhancements,
and
you
can
see
that
we
have
a
number
of
these
that
are
marked
at
risk.
The
reason
they're
marked
at
risk
is
because
we've
gone
and
poked
on
the
individual
and
hits
myth
issues
and
asked
folks:
hey.
Do
you
actually
have
a
test
plan?
C
We have somebody who's really hoping that this makes it into 1.14, but we don't actually have a KEP that describes how this is going to make it into 1.14 right now, and the KEP doesn't have a test plan; and if we go to where the proposal is still being fleshed out, there is ongoing discussion about the design of this particular enhancement. This feels like maybe not the most stable thing to try and jam in a week before code freeze.
C
If
we
find
that
they're
not
really
looking
like
they're
gonna
make
it
and
huge
thanks
to
everybody
who
has
in
fact
already
punted
their
issues
to
the
next
milestone,
because
they
know
it's
unreasonable
to
land
it
next
up.
Let's
talk
about
docks,
so
Friday
March
1st
be
sure
to
bring
it
at
your
docks,
bringing
out
your
docks
open
up
a
docks.
Pr
Jimmy
Angelo
posted
a
message
to
the
kubernetes
to
have
mailing
list
into
a
couple
other
places
to
remind
you
all
to
open
up
a
PR
against
the
kubernetes
website,
repo
by
Friday,
March
1st.
C
You
will
forever
be
our
special
place
best
friend
if
you
actually
open
up
real
full
docks
by
Friday
March
1st,
but
all
we're
really
asking
for
is
just
a
placeholder
PR
and
then
finally,
the
part
I'm
so
excited
about
and
be
literal
entire
reason.
I
made
the
slide
deck
was
tell
you
to
brace
yourself
because
memes
about
code
freeze
are
coming
remember:
we
dropped
the
idea
of
code
slush
and
so,
instead
of
code
slush,
where
we
made
things
a
little
bit
trickier
to
merge.
We
just
said
that
we're
gonna
use
this
week
before
code.
C
Freeze
which
was
previously
called
code,
slashes
I,
don't
know
code
meme
or
something
where
we
tell
you
all
that
code
freeze
is
coming
and
really
think
long
and
hard
about
whether
or
not
you
want
to
try
and
land
your
code
before
code
freeze,
so
come
freeze
is
March
7th.
That's
when
burndown
turns
into
something
a
little
bit
different
where
things
slow
down,
because
everything
is
so
frozen
and
we
can
hopefully
start
to
see
a
more
reasonable
pace
of
change.
C
Things
that
are
going
to
land
after
code
freeze
are
the
things
that
only
have
the
V
114
milestone
attached
to
them.
There's
none
of
this
priority
critical,
urgent
stuff.
There's
none
of
this
labeling
stuff,
it's
just
the
milestone.
Who
can
apply
the
milestone?
People
in
the
milestone,
main
kubernetes,
milestone,
maintainer
x',
github
team,
which
should
correspond
roughly
to
the
kubernetes
release
team
and
the
cig
leads
I.
C
Guess
I
should
be
calling
them
seek
chairs
or
say
technical
leads
for
all
of
the
different
sakes,
so
some
people
think
again
code
free
seems
a
little
crazy
and
wild.
What's
going
on
it's
this
super
hardcore
thing.
Other
people
think
that
maybe
code
freeze
is
more
of
what
you
would
call
a
guideline
where
we
don't
actually
really
mean
it.
You
can
sneak
in
whatever
you
want.
C
Perhaps
some
of
you
are
considering
pushing
a
large
change
right
now
to
try
and
sneak
it
in
by
freeze,
or
maybe
you
want
to
sneak
it
in
today.
Right
of
code,
freeze
and
I
also
like
to
live
dangerously,
but
fortunately,
I
have
release
team.
Who
is
a
little
bit
more
considerate
than
me
always
wanting
to
ship
stuff
and
so,
for
example,
like
we're,
not
gonna,
be
pushing
to
have
go,
laying
112
used
to
build
kubernetes
114,
even
though
it
was
released
a
whole
two
days
ago.
C
It
seems
like
maybe
that's
something
we
should
wait
until
the
next
release
cycle.
To
consider
so
remember
that
the
code
is
freezing
in
here,
mr.
bigglesworth,
and
think
long
and
hard
about
whether
or
not
your
codes
actually
going
to
lay
it
next.
Let's
talk
about
difference
or
PSA's.
Why
am
I
so
upset
about
differece
or
PSA's?
Well,
it's
gonna.
Take
a
look
at
the
release
master
blocking
dashboards.
C
Well, it turns out this is kind of difficult to fix. I'm not actually trying to pick on anybody, but I am trying to point out that we should really start paying attention to these a lot more, just collectively as a community. But thankfully we do have a CI Signal team who is also paying attention to this stuff, and they have brought this to the attention of SIG Storage. So how do I know that?
C
Well,
let's
move
on
to
CI
signal,
because
of
course,
we
all
know
that
this
is
generally
speaking
what
the
experience
of
trying
to
land
a
change
in
kubernetes
is
like,
and
we
need
people
to
help
us
out
to
understand
whether
these
real
failures
or
these
flakes.
What's
going
on
who's
fixing.
This
bug
who's
responsible
for
this
bug
and
so
CI
signal
has
hopefully
created
a
project
dashboard
which,
as
far
as
I
know,
should
be
available
to
everybody.
They
even
have
these
awesome
instructions.
C
On
the
left-hand
side
here
telling
you
like,
welcome
you
like
to
help
out
here's
how
you
can
help
out
feel
free
to
contribute,
especially
if
you
see
something
say
something
maybe
even
go
ahead
and
put
it
on
this
board.
What
goes
on
this
board?
Well,
it's
cards
that
have
kind
failing
tasks
to
kind.
Flake
then
happen
to
have
the
114
milestone
attached
to
them.
You'll
notice
there's
a
lot
of
red
on
this
board,
and
that's
because
we
are
labeling
as
priority
critical,
urgent.
Any
test
failures
that
happen
to
actually
definitely
be
blocking.
C
That's
why
that
particular
check
fails
so
shout
out
to
Michelle
for
getting
on
top
of
this
and
making
sure
that
SIG's
storage,
who's
continually
awesome
is
troubleshooting
this.
Hopefully
we
can
get
this
brought
to
ground
shortly.
There
are
a
bunch
of
other
things
and
because
we're
kind
of
running
along
I'm
not
going
to
walk
you
through
all
of
these
tests,
but
just
to
say
that,
like
if
you're
on
this
board,
people
are
going
to
come
and
talk
to
you
and
ask
like
what's
going
on?
What
can
we
do
to
help?
C
Finally,
I
just
wanted
to
give
another
quick
brief,
shout
out
to
Jeff
shadow
in
the
release,
notes
roll,
who
kind
of
made
a
plea
that
perhaps
this
time
around
the
release,
notes
for
kubernetes
should
actually
be
about
the
release.
If
you
take
a
look
at
the
changelog
for
a
kubernetes
113,
it's
basically
about
what
each
and
every
cig
did.
It
almost
looks
kind
of
like
a
written
dump
of
every
SIG's
community
meeting
update,
and
why
is
that
in
a
repository
called
kubernetes
in
a
file
called
change
logs
shouldn't
that
really
be
about
the
release?
C
So
he
did
a
couple
things
about
this
first
off.
If
you
want
to
see
what
the
release
notes
look
like
today,
you
can
take
a
look
at
the
release,
notes
draft
that
is
continually
published
by
the
release,
notes
team.
They
were
doing
it
on
a
weekly,
cadence
and
I
believe
we
talked
about
bumping
it
up
to
at
least
bi-weekly
at
this
point,
and
you
can
see
even
right
now,
it's
it's!
Maybe
what
some
of
you
are
familiar
with.
There's
an
action
required
section.
C
There's
a
new
feature
section
and
then
there's
this
section
down
here,
we're
like
literally
every
stick,
has
their
name
attached
to
everything
and
I
feel
like.
Maybe
this
is
incomprehensible
to
end-users,
and
this
seems
like
it's.
A
little
dense
to
sort
through
so
Jeff
also
started
a
personal
project
that
he
suggested.
C
Let
them
have
an
editorial
voice,
I,
give
them
full
power
and
authority
to
actually
try
and
create
some
kind
of
narrative
out
of
this
crazy
herd
of
cats.
That's
having
a
an
army
of
monkeys
banging
on
typewriters
banging
out
code
and
documentation,
like
somebody
at
the
end
of
the
day,
has
to
try
and
have
that
make
sense
to
end
users.
So
a
quick
shout
out
to
that
and
I
think
that
is
all
that
I
have
and
I've
had
my
screen
share
this
entire
time.
I
have
no
idea.
D
Okay, so some of you may have seen comments show up on your pull requests in the last week, advising you that your pull request might need an API review. In the past this was kind of a secret-handshake sort of process: if you needed an API review, you would know, and you would get it by asking someone; the person you would ask, you would just know, and if you didn't know, you would need to ask around. We're trying to improve that.
D
There are links to those descriptions and an example of the comments that the bot will leave for you, and the idea is to make this process transparent and lightweight, and hopefully make it easy for people to get involved when they need it on their changes. So if you see those comments, follow those links and read them. If you have questions, reach out to SIG Architecture; they're the ones that are driving the API review process. And give us feedback on that — we'd be happy to hear it.
A
Already
moving
along
to
sig
updates,
so
if
you're,
a
sig
lead
and
you're
in
this
call
check
out
the
community,
you
know
it's
as
community
meeting
notes.
There's
some
tips
about
what
to
cover
during
these
there's.
Even
a
slide
templates
and
there's
a
schedule
as
well.
But
first
up
is
sig
cluster,
a
lifecycle.
A
E
So we'll give a brief update; I'll try to make this quick, because I made my slides really quickly. So what do we do? Just as a PSA for folks who are new contributors: SIG Cluster Lifecycle's objective is to simplify the creation, configuration, and upgrade of Kubernetes clusters and their components — which basically means we pass the butter.
E
So
what
are
we
doing
right
now,
but
this
was
coming
up
in
the
1:14
release
cycle
that
there
are
some
highlight
reels
that
I
want
folks
to
know
about
so
in
114
there's
a
couple
of
there
are
several
major
sub
projects
that
kind
of
sit
under
the
umbrella
of
sequestered
life,
so
that
could
be
a
DM
is
one
of
them.
So
for
114,
there's
no
better
test
automation!
The
movie
did
a
ton
of
integration
work
with
using
kind
as
a
default
to
player
for
being
able
to
test
updates
to
comedian
and
we're
starting
to
investigate.
E
As folks who are aware of this effort know, for several years we've had the chicken-and-egg question with regards to HA, and over time we've had a good story, but it keeps evolving and becoming cleaner and simpler. We have several improvements going in this cycle, and with every single iteration it gets a little bit cleaner and a little bit simpler, to the point where we want to have a single control-plane join command — and magically,
E
You
have
AJ
last
but
not
least,
are
the
modification
to
the
joints
that
command.
This
is
a
comedian
to
break
it
out
to
a
several
multiple
phases.
So
those
who
build
turkey
eye
automation
around
Covidien,
which
there
are
many
tools
that
do
that,
can
use
the
join
the
different
operations
and
the
joints
of
command
to
do
customizations
as
they
see
fit,
and
a
number
of
bug
fixes.
E
So
what's
going
on
cluster
API,
closer
API
is
another
major
self
project
of
say:
cluster
lifecycle,
we're
committee
and
stuffs
cluster
API
begins.
I,
don't
give
the
synopsis
there,
but
I
had
to
recommend
going
to
the
sub
project
page.
If
you
want
to
know
more,
the
TLDR
is
that
we
are
going
to
be
releasing
a
v1
alpha,
one
in
the
near
term,
probably
around
the
same
time,
same
ish
time
frame
as
114
release.
E
One
of
the
problems
with
releasing
asynchronously
from
the
main
line
is
that,
especially
with
multiple
retos,
we're
working
on
methodology
and
process
that
we
want
to
do
follow
for
doing
releases.
There
are
documents
that
are
posted
and
I'll
put
the
slides
inside
of
the
making
you
need
to
as
well
for
folks
are
interested.
E
There's
a
bunch
of
olive
oil
work
with
regards
to
cleanup
of
machine
deployments
and
machine
sets.
We
also
talked
about
how
to
do
this
a
little
more
gracefully
and
that
work
is
going
to
be
going
into
b1l,
one
just
sort
of
for
folks
who
are
aware.
We
have
about
17,
more
open
issues
and
where
we
can
actively
use
testers.
So
if
people
are
interested
in
testing
out
with
latest
and
greatest
sand
coming
in
cluster
API,
as
well
as
the
different
providers
we
would
love
to
hear
from,
you
ought
to
mini
cube.
E
There are details in the repo, as well as in the slide deck, that outline the work there, including ongoing support for multiple CRIs, and they're going to aim for a 1.0 release in the last week of March. So if you're interested in helping out with Minikube, or if you want to use it and want to be able to test out features, please talk with the Minikube team — they'd love to hear feedback.
E
A
couple
of
other
sub
projects
kind
is
not
necessarily
a
sub-project
of
c
cluster
lifecycle,
but
we
we
actually
have
a
ton
of
people
who
are
executing
its
kind,
so
I'm
gonna
give
a
minor
update
there,
but
some
of
the
things
we've
added
in
for
cops
the
PSAs
are
there.
Every
cops
is
now
upgraded
to
sed
3
they're,
currently
working
through
the
the
latest
CDE
4
run
C
issue,
they're
working
also
through
the
integration
of
cluster
API,
and
that
means
that
there's
a
bunch
of
other
follow-on
work.
E
That
kind
of
falls
through
that
integration
and
in
the
long-term
picture
in
the
Ark
copses
Justin
likes
to
say,
is
that
it's
meant
to
live
outside
of
the
scope
and
build
upon
the
other
tools
so
that
they
eventually
it
will
reduce
the
scope
of
cops
itself
over
time.
So
that
would
be
great.
So
one
of
the
big
foundational
principles
of
sequester
lifecycle
is
quick
invisibility
and,
as
we
kind
of
start,
to
break
down
the
layers
over
time.
E
What
we
find
is
that
the
scope
of
each
individual
tool
becomes
smaller
and
it
becomes
much
more
of
a
composable
problem
for
a
composition
problem
with
regards
to
kind
folks
have
added
offline
support,
the
upgraded
to
the
latest
version
of
v1
13-3
release
and
benin
is
currently
working
in
a
monthly
case.
I
know,
there's
a
bunch
of
been
a
bunch
of
other
features
that
the
cluster
lexical
scopes
have
added,
primarily
because
they
want
to.
We
want
to
be
able
to
use
kind
as
the
default
testing
tool
for
developers.
E
E
Here's my long-term goal: I would love kind to be the tool that replaces local-up-cluster for every Kubernetes developer. We're not there yet. So, some PSAs about some of the other working groups and sub-projects: there is a Component Config working group. If you don't like knobs and you'd like to reevaluate your life choices, we highly recommend going to talk with that group, because there is a lot of work to do. Please reach out to Lucas and Mike Taufen
E
If
you
are
interested
in
helping
reduce
the
state
space
of
configuration
options
that
exist,
which
you
know,
if
you
look
at
the
API
server,
there
are
hundreds
of
notes,
we're
starting
to
think
seriously
about
and
on
management
know
we're
seriously
serious
this
time.
For
those
who
have
have
known
this
space,
it
has
existed
for
a
long
time,
but
we
kind
of
have
a
forcing
function
nowadays,
which
is
the
CRD
life
cycle,
and
we
are
going
to
be
working
on
that,
probably
in
the
115th
timeframe,
as
I
mentioned,
AJ
gets
more
better.
E
Last
but
not
least,
in
cluster
ap
Island
for
a
post
year
alpha
one
world
in
a
V
1
alpha
2
we're
going
to
plan
three
architects,
some
of
the
pieces
of
the
puzzle
there
and
try
again,
as
you
could
probably
tell
from
my
previous
statements,
is
paying
for
a
more
composable
model.
So
it's
kind
of
cool
s,
the
provider,
fragmentation
that
exists.
So
what's
coming
up,
the
answer
is
we
have
planning?
E
So
if
you
are
interested
in
engaging
in
sequestered
lifecycle
in
the
beginning
of
every
cycle,
we
go
through
a
planning
phase
and
you
can
look
at
the
backlog
currently
for
what
already
exists
in
some
of
the
milestones
to
get
an
idea
of
what's
coming
up.
But
if
you
want,
if
you
care
about
the
future-
and
you
want
to
see
it
land
and
I've,
given
a
release,
we
highly
recommend
going
to
a
clinic
session
which
what
happened
in
the
next
couple
of
weeks
to
get
an
idea
of
how
we
do
these
things.
E
There's
a
link
to
the
how
the
Buried
document,
which
actually
outlines
our
process,
that
we
follow
for
a
lot
of
the
sub
projects,
instant
concert
icicle.
Where
can
you
find
us?
You
can
add
us
on
the
channels
there's
a
home
page,
which
is
the
community
repo
or
used
to
be
the
courier
I?
Guess
this
slide?
Deck
is
old.
But
if
you
look
in
the
community
repo,
there
exists
a
link
there,
which
outlines
all
of
the
different
sub
project
meetings
and
the
channels
and
senator
centers
diagram.
F
My name is Chris Hoge; right now I'm a chair of SIG OpenStack. We are in the process of adding two more co-chairs right now — there's a patch in flight for that, which I'll get to a little bit later. So, to start off with: what is SIG OpenStack? We try to coordinate the cross-community efforts of the OpenStack and Kubernetes communities, and in our minds this covers three distinct use cases.
F
The
first
is
OpenStack
is
a
free
and
open-source
deployment
platform
for
kubernetes
with
integrations
provided
by
cloud
provider
OpenStack,
and
so
there
are
a
number
of
different
cloud
providers
within
the
community
and
we
we
develop
the
OpenStack
static
version
of
that
cloud
provider
also
with
OpenStack
as
providing
infrastructure
services
for
kubernetes
clusters,
and
so
we're
seeing
a
number
of
use
cases
where
we're
in
production.
You
know
OpenStack,
identity
or
storage.
You
know
you
know,
block,
storage
or
or
object
storage
secrets.
F
Networking
kubernetes
isn't
necessarily
necessarily
deployed
on
top
of
a
cluster
that
has
these
services,
but
it
might
be
consuming
these
these
these
api's,
and
so
in
this
case,
it's
created.
Openstack
burning
excited
my
side
and
finally,
as
a
collection
as
a
collection
of
infrastructure,
application
store
on
top
of
kubernetes,
and
so
one
of
the
one
of
the
methods
that
we're
seeing
within
the
you
know,
with
larger
adoption
in
the
OpenStack
community,
is
setting
up
a
bare-metal,
kubernetes
cluster.
F
So what are we doing in practice right now? Well, one of our biggest efforts is maintaining the cloud provider. It has Cinder and Manila CSI drivers, Keystone authentication and authorization webhooks, ingress controllers through Octavia, and Barbican KMS plugins.
F
Another
big
focus
of
our
efforts
is
actually
removing
the
entry
provider
from
497
IDs,
and
so
this
has
been
in
flight
for
quite
a
while.
Now
we
pretty
much
have
a
patch
that's
ready
to
go
the
the
last
bits
of
effort
that
we
have
on.
This
are
largely
related
to
collaborations
that
are
having
with
state
cloud
provider
we're
also
building
deployment
tooling,
and
so
we
have
hosted
kubernetes
on
OpenStack
clouds.
Magnum
we've
seen
a
tremendous
amount
of
success
in
production.
F
You
know
to
see
you
know
to
see
the
uptake
of
of
magnimous
hosted
as
hosted
cornetti
service.
We
also
have
projects
that
are
doing
self-service
deployments
with
cops,
and
so
this
feeds
in
nicely
with
cig
lifecycle,
a
presentation
that
they
just
that
they
just
gave.
We
have
an
alpha
implementation
of
OpenStack
for
cops,
and
we
also
have
a
cluster
API
implementation
for
OpenStack
clouds
and
we're
going
to
be
working
on
a
bare
metal
implementation
of
that
too.
That's
backed
by
the
OpenStack
ironic
service.
F
So
so,
as
we
stand
before,
one
of
the
biggest
goals
is
to
completely
remove
the
entry
code.
This
work
is
in
collaboration
from
the
state
cloud
provider,
and
a
lot
of
it
now
involves
identifying
entry
dependencies
and
moving
them
into
staging.
You
know
what
I
think
shout
out
to
Walter
and
Andrew,
you
know,
and
so
the
other
sig
cloud
provider
folks
for
for
identifying
these,
and
you
know
doing
work
on
that.
F
You
know
this
is
touching
a
lot
of
the
providers
and
we're
mostly
ready
to
go
we're
just
spending
these
common
structural
changes
and,
and
our
goal
is
to
actually
be
completely
out
by
the
end
of
2019.
This
is
much
later
than
we'd
originally
intended,
but
this
is
more
in
line
with
the
larger
cloud
provider.
F
Cluster
API
is
in
progress,
and
this
is
actually
a
pretty
exciting
effort
and
we're
really
happy
to
be
involved
with
it.
It's
very
fast-paced
we're
looking
for
more
developers
and
more
about
more
robust
implementations
as
well
as
help
with
the
the
cop
support.
You
know
it's
great
to
see
the
cop
support
moving
along
and
you
know
the
power
and
having
single
deployments
for
the
command
line.
You
know
simple
deployments
from
the
command
line.
F
We
can
also
run
it
in
an
integrated,
multi-tenant
cloud
like
mode
you
know
so
you're,
essentially
treating
bare
metal
like
like
a
like
a
cloud
like
a
virtualization
cloud.
You
know,
so
we
think
that
there's
a
lot
of
opportunities
to
be
able
to
have
some
pretty
good
integrations
with
these
kind
of
cloud
like
services
that
Kerber
Nettie's
expects
to
be
able
to
run
successfully
and
also
bringing
the
the
power
of
bare
metal
to
that.
F
We
have
support
in
place
for
more
than
a
dozen
open
hardware
types
and
boot
interface,
as
well
as
some
proprietary
hardware
types,
and
we
also
have
advanced
feature
event,
features
like
hardware
discovery.
So
you
can
actually
point
ironic
at
your
at
your
bare
metal
cloud
at
your
network
with
credentials
to
log
into
the
server,
so
it'll
actually
discover
the
resources
and
capabilities
that
are
available
on
those
servers,
and
so
that's
pretty
exciting
too.
F
We've
also,
we've
also
had
a
pretty
long
historic
collaboration
with
cig
testing,
and
you
know
we
had
and
testing
in
place
for
a
while
now
and
we're
using
that
to
gate
against
ephemeral
processes.
One
thing
that
that
came
up
recently
was
with
some
articles
that
came
out
this
week
about
work,
that
the
CN
CF
and
the
kubernetes
community
are
doing
with
respect
to
to
performance
testing
of
different
network
and
cloud
architectures.
F
The
I
would
I
would
say
the
one
thing
that
concerns
us
about
this
is
you
know
doing
a
first
pass
at
the
results?
Is
we
don't
entirely
understand
the
methodology,
assumptions
or
goals
of
what's
happening
with
this
work?
You
know
particular
we've
noticed
some.
You
know
inconsistent
deployment
types.
You
know
as
well
as
data
deployment
technologies
which
I
don't
think,
shines
a
favorable
light
on
an
OpenStack
and
its
actual
potential.
So
we
kind
of
actually
we're
wanted
to
ask
seek
testing
in
the
kubernetes
community
in
general.
F
C
That's a really good question that probably deserves a longer answer than we can afford to give in this forum, but I am in agreement with you. I think it would come down to a question of who owns, and is responsible for, the continued maintenance of running this. For example, the efforts the CNCF does in the CI space for its cross-cloud testing have literally not a thing to do with SIG Testing. So I think it's open.
F
So we're coming up to the end of the time. If you'd like to find out more, or if you'd like to get involved, we have a Slack channel — we're at #sig-openstack on Slack — as well as a Google Group, kubernetes-sig-openstack. We have a patch in flight to add a couple of new co-chairs: the first is Christoph Glaubitz from Innovo Cloud in Germany, and the other is Flavio Percoco from Red Hat, and we're pretty excited to have them both step in and take leadership roles.
G
You
hi
everybody
I'm
Mike,
Denise
I,
am
a
chair
of
cigars
and
I'll,
give
be
given
the
cigars
update
today,
so
at
cigar
were
chaired
by
Tim
me
and
Moe.
We
just
elected
some
TLS,
so
David
EADS
in
Jordan,
our
TLS
with
me,
and
we
have
very
sub-project
approvers
and
many
sub-project
reviewers.
The
list
almost
goes
off
the
screen,
so
we
have
probably
the
same
a
set
of
sub
projects
identified,
as
we
did
last
update
so
I'll
skip
over
this.
But
what
we're
actually
driving
technically
we're
working
on
a
rollout
of
improved
service
account
tokens.
G
So
over
the
last
couple
releases
we
have
built
infrastructure
to
provision
service
account
tokens
that
are
time
bound
and
audience
bounce,
so
they
can
be
used
more
securely
and
against
clients
that
are
not
just
the
kubernetes
api
server.
So
we
would
like
to
eventually
replace
the
old
service
account
provisioning
infrastructure
through
secrets,
with
this
new
token
provisioning
flow,
so
we're
working
through
how
to
do
this
with
minimal
impact
to
clients,
we're
also
working
on
a
dynamic
audit.
So
this
feature
allows
configuring
audit
sinks
through
a
dynamic
admission.
G
Webhook
we're
also
taking
a
deeper
look
at
some
of
the
policy
objects
that
we
have
in
kubernetes
today.
So
we
have
many
different
policy
objects
like
limit
range
and
pod
security
policy
and
network
policy.
The
API
is
and
how
people
interact
and
understand
these
things
are
kind
of
disjoint,
so
we're
looking
at.
G
Either
rethrick
rethinking
some
of
these
or
allowing
extensions
like
the
one
we
saw
earlier
in
the
demo
to
really
take
advantage
of
of
the
admission
webhook
functionality
and
have
a
very
have
a
more
consistent
story
around
some
of
our
policy
objects.
So
if,
if
you've
noticed,
hot
security
policy
has
been
around
for
about
a
couple
years
at
this
point
and
is
still
in
beta,
and
that's
because
it
has
some
pretty
fundamental
usability
issues,
so
we're
also
working
on
API
server,
authentication,
there's
an
open
kept
out.
G
So
we
have
this
new
mechanism
for
extending
kubernetes,
which
is
these
dynamic
web
hooks.
Often
web
hooks
deal
with
accept
sensitive
data,
return,
sensitive
data
or
do
expensive
work
on
behalf
of
a
web
hook.
So
we
need
a
way
to
for
these
web
hooks
to
authenticate
the
API
server
in
these
flows,
and
we
would
also
like
to
bring
more
of
the
api's
that
have
been
around
for
a
little
while
to
GA.
G
So
the
certificates
API,
is
something
that's
been
in
beta
for
three
years
now,
and
it's
fundamental
to
many
of
the
kubernetes
setups,
so
setups
that
are
using
cubelet,
TLS
bootstrap,
which
is
probably
the
I,
would
say.
The
majority
of
setups
at
this
point
are
depending
on
the
certificates,
API,
and
it
is
still
beta
the
token
request
infrastructure
we
would
like
to
get
to
GA
so
that
we
can
start
migrating
clients
on
to
new
and
improved
tokens,
stand
better
secure.
These
tokens.
G
So
organizationally
we
have
recently
identified
sub
projects
and
TLS
we're
looking
into
more
intentional
way
in
proactively
driving
progress
on
these
sub
projects,
so
we're
in
the
experimentation
phase
to
figure
out
how
we
can
engage
the
cig
more
and
make
sure
that
these
sub
projects
are
moving
forward
in
a
steady
manner.
Sometimes
our
sub
projects
have
floundered
and
we
could
do
a
better
job
at
tracking.
G
So
we're
kind
of
learning
trying
to
take
take
some
lessons
from
other
SIG's
that
that
have
more
structured
engagement,
such
as
SiC
cluster
lifecycle
and
where
we're
going
to
think
about
what
standing
items
and
in
sig
off
meetings
that
we
could
put
in
place
to
make
sure
that
these
things
are
getting
the
attention
they
need.
So
recently
we
also
reabsorbed
the
container
identity
working
group.
We
had
a.
G
We
had
this
working
group
for
about
a
year
a
couple
good
things
came
out
of
it.
Most
of
the
service
account
token
infrastructure
was
developed
and
designed
collaboratively
inside
that
working
group.
We
also
got
some
items
on
the
CSI
path
to
GA
ticket.
That
would
help
integrators
that
are
using
the
container
storage
interface
to
build
identity
solutions.
G
We have a good-first-issue label that we are trying to keep up to date; this is something that we might try to more proactively apply during our bug scrubs. So if you're interested in getting involved with SIG Auth, there are also a number of sub-projects where we could always use more contributions, and we're always looking for new contributors.
A
Awesome
thanks
Mike
any
questions,
no
NH.
Moving
along
to
announcements
in
general,
we
need
more
slack
moderators,
there's
a
link
for
folks
to
apply
to
moderate
all
the
slacks
for
the
kubernetes
world,
which
is
not
an
easy
undertaking,
but
would
appreciate
all
the
help
you
can
get.
You
gotta
be
a
caesura
member
already
and
a
pack
and
GU
moderators
are
mostly
the
ones
that
we
need.
A
Also, the deployment CLI tests — the same sort of thing you can now do with make test and make test-integration. Nifty! Thank you, Ben the Elder. And a shout out for the wonderful new message written for the welcome bot; it is very lovely, and I'm looking forward to seeing this in more places — check out the link there.
A
If
you
keep
on
top
of
flagging
issues
from
it
and
tests
and
courtney
follow-ups
and
to
georgia,
look
around
for
that's
right,
corey
allogram
for
spotting
an
opportunity
to
offer
rudder
transparency
to
what
the
CI
signal
team
is
working
on,
suggesting
a
structuring
kicking
off
on
the
plantation
find
current
version
at
link
in
the
notes.
Jorge
allogram
know
that
person
from
a
previous
life,
aaron
berger
berger,
also
shout
out
to
Josh
Marcus
for
taking
notes
during
today's
steering
committee
meeting.
A
There's a link in the notes there. A shout out to Michelle for her helpful guidance in getting a very large PR merged, and a shout out to Jeff and Yang for all their awesome work on the release notes team for 1.14, especially the work on the release notes website concept. I will +1 that. Anything else before we are done?