From YouTube: Kubernetes Community Meeting 20181108
Description
The Kubernetes community meeting is intended to provide a holistic overview of community activities, critical release information, and governance updates. It also provides a forum for discussion of project-level concerns that might need a wider audience than a single special interest group (SIG).
See https://github.com/kubernetes/community/blob/master/events/community-meeting.md for more information.
A: All right, welcome everybody. It is November 8th, 2018, and this is the weekly Kubernetes community meeting. Thank you for joining us. This is our weekly public meeting where we discuss all things going on around the project. As usual, we'll be streaming and recording this meeting to YouTube, so please be cognizant that everything you say is out there for the internet. A few things: if you're not speaking right now, please ensure that your mute button is on.

A: Alright, so we have a quick demo, where Steve's going to show IngressRoute, then the usual release update (it's an important week in the release this week), and we have three SIG updates today after Aaron's contributor tip of the week: SIG Cluster Lifecycle, SIG OpenStack, and SIG Auth. So with that, let's turn it over to Steve. You have ten minutes.
B: Okay, thanks George. So what I want to talk about today is a thing called IngressRoute, in Heptio Contour. IngressRoute is a CRD that we're using to help solve some new, interesting problems with ingress into your cluster. Contour is an ingress controller that leverages Envoy as the data plane. It features dynamic updates, so all the configuration changes flow to Envoy from Contour without any dropped connections. It was built from a collaborative project with Yahoo Japan Corporation's subsidiary, Actapio.
B: If you're curious how they are using this in their environment, through another project we've done called Gimbal, you can check out their KubeCon talk in Seattle this December. I've also done a fair amount of performance testing around this, and if you're interested in that you can go see Alex Brand's talk at KubeCon as well; there are links if you're interested. So, a quick overview of how Contour works, kind of generically: requests come in from the internet, they hit some L3/L4 load balancer, and basically get routed to Envoy. Again, Envoy is our data plane component; it's the component that's going to be handling all the traffic between the endpoints. Contour acts as a server in this scenario.
B: When Envoy spins up, it looks for its server, which is Contour, and Contour is then watching all the resources in Kubernetes: it's looking for services, endpoints, secrets, all those bits of information. It basically creates the configuration and passes that off to Envoy via gRPC. That connection, again, happens in real time without dropping connections, and then once the request hits Envoy, it routes it over to the upstream in the cluster and the request is fulfilled.
B: We have team A and we have team B, and each team deploys their own ingress resource to their own namespace. You'll see here that they're both using heptio.com as the host and both referencing /blog as the path, but as the upstream they're going to send the traffic to different services: team A is using WordPress and team B is using this private service. So the real question is what happens when these resources get created; what should the ingress controller do?
B
Do
it
really
is
kind
of
undefined
right
and
there's
danger
in
this
as
well.
We've
had
customers
that
help
do
that.
I've
done
this
and
have
taken
out
production
systems.
You
know
unknowingly,
just
because
of
some
of
these.
These
shortcomings
that
we
run
into
so
some
of
the
goals
we
wanted
to
deal
with
the
ingress
route
was
to
basically
solve
some
of
these
issues.
B
Let
me
just
discuss
so
multi-team
Cabrini's
clusters
having
multiple
users,
multiple
teams
within
the
same
cluster,
operate
on
their
own
without
having
to
go
through
other
admins,
and
we
can
do
this
through
a
thing
called
delegation.
So
delegation
helps
us
solve
that.
Multi
team
configuration
and
we'll
discuss
this
here
in
a
second.
It
also
lets
us
split
out
secrets
from
from
different
places,
so
we
don't
have
to
let
secrets
sprawl
all
over
our
cluster.
B
So
here's
how
delegation
is
gonna
work
and
this
is
gonna
solve
our
multi
team
problem
here.
So
this
is
kind
of
my
laughter
DNS.
So
we
have
this
root
ingress
route
here
right
and
it
has
authority
over
hefty
Oh
calm,
and
then
we
can
pass
that
Authority
off
to
other
folks
within
the
cluster
right.
So
in
this
example,
the
/blog
path
has
authority
to
handle
that
resource
because
the
route
has
delegated
that
to
it.
B
If
we
look
at
a
scenario
where
someone
else
wants
to
also
use
/blog
path,
they
can't
do
that
right,
because
contour
is
going
to
look
at
that
and
say:
hey,
you
don't
have
the
proper
authority,
the
proper
delegation,
so
that
way,
it'll
throw
it
out
and
then
nothing,
nothing
is
going
to
break
within
your
cluster.
So
let's
go
ahead
and
take
a
look
at
this
and
how
this
looks
for
real.
So
what
I
have
here
is
this:
is
my
ingress
route
you'll
see
this?
It's
it's
for
a
fake,
a
marketing
website
right.
B: Here you can see this is the marketing page, so that's pretty straightforward; that's just to get us started. Now what I want to do is delegate: I want to pass off to the marketing team, who want to spin up a blog. So what we do here is turn on this delegation and say that /blog will route to the blog IngressRoute in the namespace marketing. Again, right now I'm editing this root IngressRoute.
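For reference, a root IngressRoute delegating /blog to a team-owned IngressRoute, roughly as shown in the demo, might look like this (the apiVersion and field names follow the Contour IngressRoute CRD of that era; the hostname, names, and namespaces are illustrative guesses, not taken from the recording):

    apiVersion: contour.heptio.com/v1beta1
    kind: IngressRoute
    metadata:
      name: root
      namespace: default
    spec:
      virtualhost:
        fqdn: heptio.com            # only this root claims the domain
      routes:
        - match: /                  # the marketing site
          services:
            - name: marketing
              port: 80
        - match: /blog              # hand /blog off to the marketing team
          delegate:
            name: blog
            namespace: marketing
    ---
    # The delegated IngressRoute owned by the marketing team. Note it has no
    # virtualhost of its own; it only has authority over the delegated path.
    apiVersion: contour.heptio.com/v1beta1
    kind: IngressRoute
    metadata:
      name: blog
      namespace: marketing
    spec:
      routes:
        - match: /blog
          services:
            - name: blog
              port: 80

Because only the root object may define the virtualhost, this is what enforces the multi-team authority model being described.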
B: Now it's there, so if we go to /blog, we should see the new blog that we spun up. So here's our blog website; cool, so that works. Now in the same scenario, if we have someone who wants to claim that same path: here this is the blog-2 IngressRoute, and they also want to claim that same /blog.
B
So
if
we
go
ahead-
and
we
apply
this
one,
what
we'll
see
is
that
nothing
should
break
so
if
I
go
ahead
and
get
my
ingress
routes
in
marketing,
what
we'll
see
is
that
we
have
two
right.
We
have
one
which
is
our
valid
one.
You
see
the
other
one
has
the
status
of
orphaned
right,
no-one's
delegating
to
it.
So
what
that
means
is
that
contour
solace
and
threw
this
out
so
right
now
our
application
is
not
broken.
B
Everything's
functioning
properly
this
invalid
one
isn't
isn't
being
sent
traffic,
though
so
what
we
can
do
is
we
can
change
this
outright.
So,
if
I
make
this
foo,
it's
now
changing
the
path
on
this
ingress
route,
which
is
making
it
unique.
But
at
the
same
time,
even
though
it's
unique,
it
still
doesn't
have
any
delegations
to
it
right.
We
check
the
status
of
foo.
Now
it
still
doesn't
not
part
of
a
delegation
chain.
So
to
make
this
accurate
make
this
function
will
go
ahead
and
delegate
to
it.
B: Now that we've passed off that delegation, our IngressRoute should be valid. I check its status, and now it is, so we can validate this really quickly by going to /foo. Cool, so now that's working. The final thing I want to demonstrate really quickly, in relation to delegation, is a thing called blue-green deployments. That's where we have a blue, a current version.
B
We
want
to
migrate
to
a
new
version,
so
what
I
have
already
set
up
is
I
have
a
blue,
a
blue
deployment
which
is
living
here
at
blue
and
I,
have
a
green
one,
which
is
here,
and
what
we
can
do
is
we
can
have.
We
have
a
Bluegreen
namespace
now
this
is
our
production.
Application
right
and
right
now
you
can
see.
Is
that
we're
using
the
blue
version?
So
what
I'm
gonna
do
here
is
I'm
going
to
do
a
curl
on
this
and
you'll
see
that
this?
This
is
going
to
loop
forever.
B
Right,
it's
hailing
the
blue
version
and
what
we'll
do
is
we'll
go
into
our
our
route
here
and
right
now.
You
can
see
that
right
now,
Bluegreen
is
pointing
to
the
blue
deployment.
So
what
I
will
do
is
we'll
swap
this
now.
So
all
the
blue
things
will
happen.
Let's
make
this,
so
you
can
see
it
there.
We
go
we'll,
go
ahead
and
we'll
swap
this
from
blue
to
green
and
while
we're
doing
is
swapping
delegation
and
because
the
power
of
this
delegation
piece,
we
can
now
implement
this
piece.
B
So
if
I
swap
the
green
you'll
see
now
that
this
should
switch
to
green
I
goes
the
green
one.
If
I
watch
my
metrics
and
see
that
things
are
breaking
and
the
applet,
the
new
version
is
not
not
functioning
correctly.
What
I
can
do
is
go
ahead
and
apply
the
route
back.
We're
switched
back
from
blue
back
to
green
or
from
green
back
to
blue
I'm.
Sorry,
and
now
we're
back
on
the
blue
version.
You
can
see
this
popping
through
so
there's
a
powerful
way.
We
can
use
delegation
to
also
understand
so
really
quickly.
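A minimal sketch of what that cutover might look like in the root IngressRoute; switching the delegate from the blue IngressRoute to the green one is a one-line edit (the names and namespace are illustrative, not taken from the recording):

    # root route for the production application
    routes:
      - match: /
        delegate:
          name: blue              # change to "green" and re-apply to cut over
          namespace: blue-green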
B
To
summarize,
we
have
some
resources
here
if
you're
interested
to
follow
up
more
the
github
repo,
which
you
can
go
check
out
the
source
code.
Joe
did
a
TGI
K
last
week
on
this.
You
can
check
the
link
there.
We
have
some
specifications
about
how
ingress
route
works
and
come
to
these
finest
and
contour
and
the
Coverity
slack
if
you've
more
questions.
So
thank
you
very
much.
C: Let me get it to share my desktop. Okay, so welcome to this week's contributor tip of the week. This will be the super non-controversial, very straightforward, one-answer-to-rule-them-all explanation of what lgtm and approve really mean, what they really do, and how they work. I am completely lying, of course.
C
The
meeting
notes
is
just
an
example
of
how
somebody
wanted
to
propose
a
simple
change
of
his
process
to
one
repo
and
then
lots
of
people
started
to
express
opinions
about
how
it
could
be
done
better
or
what
lgt
means
to
them
or
what
approved
means
to
them.
There
seem
to
be
lots
of
conflicting
answers,
so
I'm
just
going
to
give
you
my
opinion.
What
works
for
me
keep
in
mind
that
other
people
may
disagree
with
me,
but
I've
spent
a
little
bit
of
time
working
on
this,
so
hopefully
this
has
some
strength.
C: The OWNERS link: I have tried to put this at the top of any OWNERS file that I find, because it tells you sort of what the file is and why we have them there. Loosely inspired by GitHub CODEOWNERS files, they are files that let you list approvers and reviewers. We do have some support for regular expressions, much like GitHub's CODEOWNERS files.
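As a concrete illustration, a typical OWNERS file is a small YAML document along these lines (the usernames and label here are placeholders, not real entries):

    # OWNERS
    reviewers:    # may /lgtm; suggested by the automation for review
      - alice
      - bob
    approvers:    # may /approve; they have the final say
      - carol
    labels:       # applied automatically to PRs touching this directory
      - sig/contributor-experience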
C
Much
like
github
skoda,
owners
files,
but
the
thing
I'm
really
trying
to
scroll
down
to
here
is
the
code
review
using
owners
files
thing
that
sort
of
describes
an
abstract
code
review
process,
which
I
think
is
roughly
what
these
are
intended
to
support.
So
in-phase,
you
know
use
of
meta
PR
in
phase
zero.
Our
automation
suggests
for
viewers
and
approvers
for
the
PR
use
the
owners
file.
So
reviewers
are
people
who
should
have
knowledge
of
the
code
and
when
they
say
it
looks
good
to
them.
C
That's
meaningful
and
approvers
are
people
who
should
know
a
lot
more
about
that
code
and
how
it
interacts
with
other
parts
of
the
code
base.
So
to
say,
it
is
another
way:
reviewers
should
generally
be
looking
for
code,
quality
and
correctness
for
sane
software
engineering.
Are
you
checking
for
nil
in
the
right
places?
Dear
god,
what
have
you
done
with
your
formatting
things
like
that?
C
Approvers
should
be
looking
at
this
more
from
a
holistic
perspective.
Does
this
change
make
sense?
Does
this
change
belong
here
and
not
over
there?
Is
this
change
going
to
be
forwards
compatible
with
this
thing
over
here?
Is
it
going
to
be
backwards,
compatible,
doesn't
clash
with
some
other
piece
of
functionality,
etc,
etc.
C
So,
generally
speaking,
this
is
why
the
approver
is
the
person
who
has
the
final
say,
but
it
is
extraordinarily
helpful
in
speeds
the
process
up
if
we
can
split
these
responsibilities
up
amongst
two
people,
so
that
or
one
group
of
people
who
doesn't
necessarily
have
as
much
subject-matter
expertise
at
least
can
go
through
and
look
at
a
lot
of
the
details
and
the
code
style
stuff.
If
the
code
both
looks
about
correct,
so
looks
good
to
me,
and
it
is
approved
in
that
it
does
the
right
thing,
then
that
merges
so.
C
How
do
people
actually
do
this?
They
issue
a
slash,
approve
or
/l
GTM,
so
I'm,
linking
here
to
the
help
page
on
crowd,
kate's
that
io
anytime,
the
bot
posts,
a
comment.
It
says
you
links
instructions
on
how
to
interact
with
me
here.
That'll.
Take
you
to
this
page.
So
this
is
the
description
for
the
approved
command.
You
can
issue
and
approve,
like
this
slash,
approve,
I,
believe
to
remove
and
approve
you.
Slash,
approve,
cancel
same
thing
for
LG
TM,
so
LGBT
means
looks
good
to
me.
I
can
slash
lgt
and
cancel
you'll,
also
see.
C
Let's
see
here
so
I
tried
to
summarize
this
a
little
bit
for
a
talk.
I
did
a
kook
on
encase
slides
help.
So
this
is
the
part
of
our
PR
workflow,
where
you
know
the
PR
must
be
approved,
and
so
what
the
problem
that
this
tries
to
solve
is
making
sure
that
the
right
people
are
looking
at
your
PR,
not
that
you
have
random
drive-bys
looking
at
your
PR.
C
So
although
we
do
encourage
new
members
to
the
kubernetes
community
to
contribute
by
reviewing
PRS,
this
is
why
we
let
them
/l
GTM
the
reason
we
put
certain
people
in
owners
files
I'm
just
going
to
go
real,
quick.
The
reason
like
I
have
George
in
Guinevere
and
Annie
or
in
the
contributors
guide
owners
file
is
because
I
think
they
know
the
most
about
the
contributors
guide
when
it
comes
to
reviewing
it.
But
the
reason
I
think
George
and
Parris
have
the
final
say
on
the
contributors.
C
I
think
I
am
overtime
at
the
deadline.
Okay,
so
the
one
last
thing
I
just
wanted
to
to
show
real
quickly
that
that
thread
generated
a
long
thing,
because
it
made
the
LG
TM
command,
just
LG
TM
and
the
approved
command
just
approve,
and
it
also
turned
off
implicit
approval
for
some
repos,
but
not
all
of
them.
C
So
here's
an
example
where
I
am
in
the
owners
file
for
the
root
of
K
community
I
pushed
a
PR
and
the
bot
has
not
magically
added
any
labels
or
anything
if
I
/lg
TM,
it's
not
gonna
do
anything
because
I'm
not
allowed
to
say
my
own
PR
looks
good,
however,
because
I
am
an
approver.
I
can
say
that
I
think
this
is
a
correct
change
to
be
made,
but
I
really
need
somebody
see
the
bots
even
telling
me
I
can
tell
GT
on
my
own
PR,
but
I
can
approve
my
own
PR.
C
If
I
refresh
with
sure
I'll
see
the
label,
maybe
but
I
need
somebody
to
go
check
this.
This
is
this
is
like
great
for
me,
because
I
don't
want
to
abuse
my
power
of
being
in
the
root
owners
file,
especially
in
repos
like
community,
where
their
cigs
have
stuff
all
over
the
place
and
I
might
want
to
sanity
check
that
content
looks
good
for
a
cig,
but
I
don't
necessarily
want
to
go
in
without
their
finest
anyway.
That
has
been
in
this
week's
contributor
tip
of
the
week.
All
right,
thanks,
Erin
and
I.
D: I don't know why it does this to me. So, one of the things that is often confusing is understanding the charter of our SIG. One of the things we did in the last cycle is we actually formalized and put our full charter up there, and it's important to understand the mission. I have two quips that I often say about what we actually do. SIG Cluster Lifecycle's objective is to simplify creation, configuration, upgrade, downgrade, and teardown of Kubernetes clusters and their components.
D
What
does
that
actually
translate
to?
We
spend
a
lot
of
time
trying
to
balance
user
experience
versus
power
and
flexibility
of
deploying
clusters
from
the
scale
of
one
to
n,
to
make
it
seamless.
That's
what
we
really
do.
We
spend
a
lot
of
time.
Are
you
know,
sort
of
debating
back
and
forth
as
well,
as
you
know,
coming
up
with
different
ways
and
means
to
do
this,
and
this
occurs
across
a
number
of
different
tools
and
I'll
get
to
that
in
a
little
bit?
D
So
what
do
we
do
you
this
last
cycle?
We
did
a
ton
of
stuff.
There
is
a
link
to
the
changelog
for
our
sig
in
the
112
cycle.
One
of
the
most
notable
things
is.
Our
configuration
has
changed
last
cycle.
For
me,
what
altitude
of
you
one
alpha?
Three,
our
configuration
for
kubb
ADM,
that
we
use
is
a
weld
versions
style,
that's
very
analogous
to
component
config.
We
want
to
do
this
so
that
we
make
sure
that
upgrades
are
pretty
seamless
and
we
use
API
machinery
for
a
lot
of
that
work.
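For a sense of what that versioned configuration looks like, a minimal kubeadm config file in the 1.12-era v1alpha3 API might be (all values here are placeholders):

    apiVersion: kubeadm.k8s.io/v1alpha3
    kind: InitConfiguration
    apiEndpoint:
      advertiseAddress: 192.168.0.10   # address the API server advertises
    ---
    apiVersion: kubeadm.k8s.io/v1alpha3
    kind: ClusterConfiguration
    kubernetesVersion: v1.12.2
    networking:
      podSubnet: 10.244.0.0/16         # pod CIDR handed to the CNI plugin

Because the format is versioned like any other Kubernetes API, kubeadm can convert old configs forward during upgrades, which is the seamlessness being described.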
D
One
of
the
things
we
do.
We
did
improve,
CRI
handling,
better
air
gap,
support
better
certain
management
and
one
of
the
things
we've
been
continually
working
on
over
time
is
kind
of
refining
how
we
want
to
do
AJ
deployments
over
and
over
and
over
again
to
the
point
where
to
make
it
as
simple
as
clean
and
as
manageable
as
we
can.
As
I
mentioned
earlier,
we
have
several
sub
projects.
There
are
two
kind
of
key
stones
that
kind
of
apply
to
several
of
the
other
ones,
kopitiam
being
one
and
clustered
API
being
another.
D
Cluster
API
has
added
a
whole
bunch
of
different
providers
in
this
last
cycle,
and
that
includes
like
do
AWS.
Openstack,
nice
fear
there's
been
some
changes
to
switch
back
to
CR
DS
and
there's
been
good
progress
towards
the
alpha
release.
We
have
kind
of
slow
rolled
the
initial
alpha
release
for
this
next
cycle
or
for
the
release
of
cluster
API
in
part
because
of
the
CRD
migration.
So
originally
it
was
using
aggregated
API
servers
and
now
it's
switch
back
to
Sierra
DS.
D
What
are
some
of
the
plans
for
the
upcoming
cycles?
Sig
cluster
life
cycle
is
growing
and
growing
and
growing.
There
are
new
projects
that
are
coming
online
and
we
want
to
make
sure
as
sig
leads
or
chairs.
However,
II
want
to
say
it
we're
trying
to
help
sponsor
these
projects
and
make
sure
that
they
find
a
good
happy
home
with
with
appropriate
owners
and
that
they
can
grow
and
foster
a
good
community
around
that.
So
we've
spent
spending
a
lot
of
time
and
switching
some
of
our
standard
meetings
around.
D
So
that
way,
we
have
a
venue
by
which
all
the
sub
projects
can
be
report
out
information,
and
we
can
help
to
coordinate
that
effort
across
the
sub
projects
too,
as
well.
One
of
the
ones
I'll
talk
about
a
little
later
is
the
addition
of
a
new
sub
project
which,
just
in
Santa
Barbara,
had
piloted
around
sed
management
at
CDA
TN.
D
Also,
this
cycle
we
are
moving
to
medium
to
GA.
Part
of
that
switch
to
GA
is
changing
our
configuration
to
beta,
so
we're
trying
to
finalize
our
last-minute
changes
to
the
configuration
format
that
we
have
for
configuring
Covidien.
But
the
command-line
options
that
we
specify
to
cube
ADM
will
be
fully
supported.
Geo
features,
so
part
of
that
is
switching
a
lot
of
the
things
that
were
underneath
phases
into
their
proper
homes,
which
includes
in
it
and
over
the
next
cycle.
We
also
do
migrate.
Some
phases
to
join.
D
So
if
those
who
are
familiar
with
cube
ADM
understands
that
there's
two
primary
sort
of
workflow
scenarios,
if
you're
initializing
a
control
plane,
no,
you
did
in
it
and
if
you're
initializing
you
work
or
node
you're,
basically
joining
the
cluster
for
Custer
API,
we
are
trying
to
push
out
that
alpha
release.
There's
a
lot
of
end-to-end
test
migration.
We
still
have
a
lot
of
questions
that
we
need
to
figure
out
as
a
community
of
how
we
want
to
push
all
of
these
artifacts
across
all
of
these
different
repos
in
some
way.
E: Can I quickly say something about that? So last cycle, myself and a couple of other contributors created a KEP for this refactoring. Essentially, in the same way that kubeadm has its file-based API for configuration, we want to do the same for the other components, like kube-proxy. The kubelet already has this in place and working well, but, for example, the API server doesn't have anything like it; it can't read its configuration from a file.
D
There
is
a
cap
that
folks
can
reference
and
we'll
get
to
some
of
that
and
at
the
end,
some
more
information
about.
What's
we
have
plan
that
we
are
trying
to
move
coupe
up
over
time
and
trying
to
deprecate
a
lot
of
the
tests
and
issues
that
have
occurred
over
time,
we're
also
trying
to
deprecate
kubernetes
anywhere
from
being
a
default
deployer
for
part
of
our
CI
signal,
it's
kind
of
limped
along
for
several
several
releases.
Anybody
who
knows
me
knows
that
I've
not
been
a
fan
of
this,
but
it's
it's
done
its
job.
D
Just
as
this
is
Robert
edit,
this
slide
just
as
a
PSA
for
folks
of
who
anyone
who's
been
on
the
release.
Team
has
known
that
there
are
many
many
tests
that
apply
to
said
cluster
lifecycle,
but
not
all
of
them
apply
to
all
the
different
sub
projects.
So
we
wanted
to
make
sure
that,
yes,
we
are
well
aware
that
there
are
skew
tests
that
exists
inside
them
and
they're
highly
valuable,
but
sometimes
we
sing
cluster
lifecycle
in
particular.
Some
other
people,
like
Robbie,
are
primarily
responsible
for
routing.
D
They
are
not
necessarily
responsible
for
the
for
the
failures
themselves,
because
a
lot
of
times
there's
there's
a
question
of
ownership
for
some
of
these
tests.
So
if
you
see
a
test
failing
with
regards
to
the
skewer
upgrade
test,
please
be
advised
that,
even
though
it
says
cluster
lifecycle
on
the
name
that
it
not
all,
parties
are
responsible
and
even
not
even
all
leads
will
know
where,
where
it
leads
to
right.
So
please
poke
for
routing
first
and
we'll
try
to
help
make
sure
that
that
can
land
in
the
proper
location
to
get
fixed.
D
Another
thing
that's
upcoming
that
folks
have
been
working
on
is
add-ons
or
bundles.
There
are
a
number
of
different
features
for
entities
that
are
sort
of
above
the
basic
control
plane.
These
things
are
what
are
lovingly
referred
to
as
add-ons,
then
these
include
things
like
the
DNS
and
even
things
like
the
proxy.
D
So
if
you
are
very
interested
in
understanding
of
how
we
want
to
do
add-on
management,
the
future,
you
should
go
see
the
coop
contact
that
Justin
and
Jeff
have
coming
up
in
Seattle
and
also
check
out
the
link
in
their
talk.
They'll
talk
about
bundles
and
the
separate
project
that
Google
has
been
starting
with
regards
to
how
they
want
to
do
well
version,
well-defined
add-on
management,
as
mentioned
earlier,
one
of
the
sub
projects
that
we've
added
on
recently
and
have
we
just
voted
on
this
week-
was
the
addition
of
a
net
CD
management
utility.
D: Where can you find us? I highly recommend the contributor site that I know Rebecca had set up; there are details in there, and we try to make sure it's up to date and maintained: how to make contact with SIG Cluster Lifecycle or any of the subprojects below it. I know there's a separate section for shout-outs, but I wanted to make sure I give some here.
D
You
know
a
separate
shout
out
to
the
sick
in
particular,
because
I've
known
and
worked
with
these
folks
now
for
for
several
years
and
they're
they're
a
great
group.
This
is
by
no
means
exhaustive
list,
but
I
just
wanted
to
make
sure
I
give
a
hearty
enough
applause
to
those
who
have
been
contributing
and
I
also
want
to
give
a
special
shout-out
to
to
to
bend
the
elder.
He's
done
awesome
work
and
has
helped
us
several
several
times
across
between
tests
or
sending
up
a
new
infrastructure
that
actually
leverages
our
sub
components.
E: To also clarify, the scope of kubeadm is way less than what you'd be asking for there. In the case of kubernetes-anywhere, it has just existed as an e2e test provider for several years; it's not being actively used for hosting and spinning up new real clusters, but it can be replaced by anything that can spin up real clusters that are based on kubeadm.
F: Can you guys see me fine? Cool. So, finally, the release update for 1.13. We are beginning to get into the weeds of this really short release. We cut our beta.0 yesterday and we have our release branch. We had a day of delay due to some failing tests, which we triaged and decided were not blockers. So we now have the 1.13 release branch, and we have all the CI there, which is still stabilizing a little bit; that's the main update. We also updated Go to 1.11.2,
F
11.2
is
the
latest
version
beats
VSS
and
we
found
we
don't
see
any
scalability
concerns
with
that.
So
we
went
ahead
with
that.
The
big
dates
are,
we
are
approaching
code,
slash
which
is
tomorrow
end
of
day
tomorrow,
PST,
so
all
PRS
that
will
be
going
in
post-fight
p.m.
PST
tomorrow
need
to
have
a
priority
kind
sake
and
milestone
labels.
So
please
make
sure
you
add
them
or
thing
any
of
the
release
team.
F: The next big date is code freeze, which is just a week away. Right now, looking at the enhancement status, we are leaning towards yellow, because though most of the enhancements have just docs and tests pending, there are a few big ones that still have a lot in progress.
F
So
one
request
to
the
Hansen
owners
is:
please
update
your
enhancement
issues
with
the
latest
status
and
so
that
the
release
stream
can
know
when
to
reach
out
for
an
exception
or
if
you
need
to
move
it
to
the
next
release.
That
said,
the
next
one
is
CI
signal,
I've
linked
to
the
latest
signal
report.
There
again
the
status
there
remains:
yellow
thanks
to
cluster
lifecycle
for
fixing
and
just
in
Santa
Barbara
for
fixing
some
of
the
long-standing
setup
issues
there.
F
The
good
news
is
issues
are
getting
attention
of
Anna
beam
result,
but
we
are
seeing
new
issues
being
added
tests,
failing
tests
and
flakes
being
open,
mainly
because
of
lot
of
PRS
being
merged.
At
this
point
in
the
cycle,
I've
listed
again
in
the
in
the
notes
that
I've
listed
some
top
failing
tests
that
will
become
blockers
as
we
near
code
freeze.
So
the
standing
request
is
for
owners
to
please
take
a
look
at
those
and
investigate
and
resolve
them
as
soon
as
possible.
F
Finally,
at
this
point,
we
are
starting
to
stress
a
little
bit
lot
more
on
docks
and
release
notes
as
well.
Dogs
are
looking
good.
We
have
about
seven
outstanding
PRS
that
still
need
dogs,
so
our
dog
sleep
team
will
be
reaching
out,
and
the
call-out
here
is
please
for
those
who
know
that
who
are
still
working
on
their
doubts.
F
Fears
please
try
to
get
them
in
any
shelter
after
keys
as
soon
as
possible,
and
a
heads
up
is
we
are
we
have
our
initial
release,
notes
draft
and
we
plan
to
send
it
out
for
review
to
all
the
state
leads
next
Monday.
So
you
please
expect
the
release
notes
to
come
your
way
and
if
you
can
leave
early
feedback
on
both
the
notes
and
really
seems
release
themes
that
will
really
help
us
wrangle.
All
of
that
that's
about
it.
I
will
come
back
with
more
updates
next
week,
I.
F
While
we're
talking
about
that
in
the
release
slack
channel
just
in
Stefan
Augustus
he's
going
to
start
get
the
ball
rolling
with
an
issue
and
we'll
have
call
for
nominations.
We
are
also
talking
about
having
a
kind
of
an
informal
interview
process
for
shadows
as
well.
So
we
are
kinda
hiding
out
a
few
things
there,
but
watch
out
for
the
for
the
issue
being
opened
by
Augustus
and
I
will
link
the
issue
in
the
next
week's
our
update
as
well.
Okay,.
C: I wanted to say we're just now starting to gather volunteers for the 1.14 release team, so contact the sig-release channel and/or Stephen Augustus if you're interested in being a member of the 1.14 release team.
G: All right, there we go, okay. This will be a fairly short update. I'm Chris Hoge from SIG OpenStack. With the completed work in 1.12, it was actually mostly just bug fixes, with a few enhancements. The in-tree driver is deprecated and will go away soon; actually, our hope was to have this done for 1.12, but it turned out to be a little bit more work than we were able to fit in this time,
G
We
were
able
to
to
fit
into
this
time,
largely
because
we
have
existing
API
contracts
that
we
that
we
still
have
to
make
sure
that
we
support
so
there's,
there's
some
there's
some
tricks
and
moving
code
into
staging
and
make
sure
we
don't
have
circular
dependencies
to
externalize
the
actual
implementation
of
those
for
those
of
you
who
are
using
Manila.
We
have
no
manila
provisioner,
as
well
as
support
for
CSI
version.
G
Zero
point
three
point:
zero
in
both
vanilla
and
your
drivers,
the
number
of
load
balancer
enhancements
that
a
lot
more
they're
all
listed
in
the
one
nut
couple.
Changelog
I,
don't
remember
if
I
mentioned
this
in
the
previous
update,
but
Magnum
is
a
kubernetes
certified
installer.
So
if
you're
running
it
OpenStack
cloud,
Magnum
is
actually
being
used
in
production.
It's
what
CERN
is
using
or
to
operate
their
private
kubernetes
deployments
on
top
of
their
OpenStack
cloud
and
last
I
checked.
G
They
had
several
hundred
kubernetes
clusters
being
managed
by
this
tool,
so
I'm
running
it
OpenStack
cluster
and
you
want
to
have
managed
your
good
ideas,
that's
great
tool
to
use
and
we've
also.
We
also
have
a
driver
in
the
works
for
state
cluster
lifecycle
for
the
for
the
cluster
API
future
work.
There's
still
new
driver
work
in
progress
so
for
the
heat
and
cinnamon
based
auto
scaling.
Drivers
are
still
in
progress.
G
Storage
driver
consolidation
so
that
we're
going
to
pull
both
all
of
us
all.
The
storage
drivers
into
si
si
drivers
and
continue
to
support
for
Barbican
a
barbeque
and
driver
for
key
management
key
and
secret
management.
So
the
plan
was
for
the
code
to
be
removed
in
the
1.14
release
and
work
is
in
progress
from
if
the
code
and
externalized
dependencies
without
existing
content
contracts
for
things
like
cinder,
also
continue
to
ramp
up
work
with
Siculus
the
life
cycle
and
continue
to
work
with
state
cloud
provider.
G: For upcoming events, next week is the OpenStack Summit in Berlin. We're going to have a working session devoted to OpenStack on Kubernetes; I mean, Kubernetes on OpenStack. So if you're going to be in Berlin, be at that event, and make sure you look for the SIG OpenStack session, as well as a session at KubeCon in Seattle. So I think that's it for me; does anyone have any questions?
H: Yes, you're good. Okay, awesome; I assume you can see my screen. So I'm Mo Khan, I am one of the SIG Auth co-chairs, and I have a pretty short presentation, small updates here and there. Starting out strong: this is probably my most favorite thing ever in a really long time. We are working to transition service accounts to using projected volumes.
H
We
are
moving
away
from
a
secret
based
storage
of
the
service
account
token
to
an
ephemeral
token
that
is
tied
to
the
lifetime
of
the
pod,
and
it
is
also
time-based.
It
is
continuously
an
automatically
rotated
by
the
cubelet.
So
the
benefits
from
that
is
there's
very
little
chance
of
a
compromise
being
held
throughout
long
term,
whereas
the
current
secrets
are
basically
permanent
and
the
only
like
thing
you
can
do
is
basically
delete
the
service
account
to
kind
of
feel
lighter
than
the
compromise.
H
In
this
case,
they'll
be
conspiracy,
rotated
the
biggest
problems
we
do
see
coming
with.
This
is,
if
you
don't
use
client
go
today
and
you're
an
in
cluster
client.
You
do
need
to
keep
reading
the
token
off
of
disk.
The
cube
will
continue
to
refresh
it
and
in
a
similar,
probably
have
no
is
that
things
like
PSPs
and
stuff.
If
they
were
previously
using
a
secret
volume
and
restricting
all
other
forms
with
volumes,
they
will
restrict
the
projected
volume
so
we're
trying
to
work
through
this
right
now.
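As an illustration of the mechanism being described, a pod requesting a projected, audience-scoped, expiring service account token looks roughly like this (the mount path, audience, and expiry are placeholders):

    apiVersion: v1
    kind: Pod
    metadata:
      name: demo
    spec:
      containers:
        - name: app
          image: nginx
          volumeMounts:
            - name: sa-token
              mountPath: /var/run/secrets/tokens
      volumes:
        - name: sa-token
          projected:
            sources:
              - serviceAccountToken:
                  path: token              # file name under the mount path
                  expirationSeconds: 3600  # kubelet rotates it before expiry
                  audience: my-service     # token only valid for this audience

In-cluster clients then have to re-read the file periodically rather than caching the token forever, which is the client-go caveat mentioned above.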
H: And the last thing I wanted to discuss is that we have the alpha of the dynamic audit configuration. This allows you to create a resource which effectively defines an audit sink. This is very useful when you have HA clusters: it's hard to manage on-disk configurations that you need to keep perfectly in sync across multiple masters, and obviously any file-based configuration requires a restart of the master.
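For reference, the alpha resource being described is an AuditSink; a minimal one might look like this (the webhook URL and throttle values are placeholders):

    apiVersion: auditregistration.k8s.io/v1alpha1
    kind: AuditSink
    metadata:
      name: example-sink
    spec:
      policy:
        level: Metadata          # how much of each audit event to record
        stages:
          - ResponseComplete
      webhook:
        throttle:
          qps: 10
          burst: 15
        clientConfig:
          url: "https://audit.example.com/webhook"  # where events are sent

Because it is a regular API object, it can be created and updated at runtime on every master at once, with no file syncing or API server restarts.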
H
So
in
terms
of
governance,
we
have
formally
defined
our
sub
projects.
Those
are
listed
there.
I
won't
go
over
them
in
detail,
but,
as
you
can
see,
we
have
quite
a
lot.
So
if
anyone
wants
to
get
involved
with
this,
we
have
plenty
of
work
to
do.
We
welcome
you
and
in
a
similar
rain,
I
myself
know
I've
been
added
as
a
chair
for,
say
god,
Jordan
Leggett
recently
transitioning
through
the
tech
lead
role
he
is
still
around.
H: That's been the primary reason for winding down the working group: we did feel like the intersection between SIG Auth and Container Identity was basically the same, and we didn't want the overhead of an entire separate meeting, and we do have a good path moving forward on the TokenRequest stuff, which is going to be pretty influential there. We did have some meetings with the SAFE group, which has since merged with Working Group Policy; we haven't necessarily had any formal arrangement for closer collaboration, but we are interested in working with them.
A: All right, thanks, Mo. And I'd just like to point out that we did use that working group winding down to kind of beta test our sunsetting-of-workgroups process, so if you're interested in that, as everything keeps churning upstream, feel free to ping us. All right, with that we're going to move on to the announcements: contributor summits at KubeCon. Let's go with Shanghai first. Josh, do you have anything to say before you get on your plane?
A
Moving
on
the
Seattle
just
so
reminders
chairs
and
owners,
if
you
haven't
confirmed
that
you're
coming
in
to
contributor
summit,
please
let
us
know
I
think
at
this
point
the
best
thing
for
you
to
do,
because
we're
running
out
of
time
is
just
paying
myself
or
Parris
directly,
and
we
will
let
you
know.
What's
going
on
with
that
Parris
you
haven't.
If
you
mad
about
Seattle
nope,
we
are
good.
A
Ok,
community
meeting
schedule,
I'm
just
going
to
give
some
quick
bookkeeping
here
for
November
22nd,
that's
going
to
be
Thanksgiving
in
the
US,
we're
still
gonna
do
a
community
meeting.
Igor
will
be
your
host,
so
those
of
you
that
are
not
on
holiday
you're,
more
than
welcome
to
come
and
participate.
We're
gonna
try
to
do
the
release
retro
at
12:00
6:00.
We
try
to
do
it
as
soon
after
release
as
possible,
but
that's
Senate.
If
so,
we
will
be
as
flexible
as
we
can
be.
Their
December
13th
will
be
actually
a
cube
con.
A
So
there's
no
meeting
that
week
and
what
we're
gonna
do
is
for
December
20th
and
27th
we're
just
going
to
close
out
the
years
without
meetings.
Everyone
can
celebrate
new
version
of
kubernetes
and
a
great
year
when
we
come
back
for
January
the
1st
cigs
up
for
status,
updates
that
this
meeting
will
be
sig,
apps,
cig,
UI
and
cig
VMware.
Does
anybody
else
have
any
announcements
or
anything
before
we
move
on
to
the
shoutouts
Aaron
I?
Guess.
C
Since
you
mentioned
holidays
I'll,
do
a
quick
one
steering
committee
decided
during
a
meeting
yesterday.
We're
not
gonna
have
a
meeting
in
two
weeks
since
that's
so
close
to
Thanksgiving,
but
we
will
be
having
a
meeting
December
5th
prior
to
coupon.
We
are
going
to
be
pushing
really
hard
to
make
sure
every
sink
as
their
charters
written
in
submitted.
So
some
of
you
may
have
noticed
I've
been
poking
you
or
you've
gotten
books
from
other
steering
committees,
members.
C
That
is
why
my
suggestion
to
you
would
be
to
get
a
charter
drafted
by
the
end
of
next
week,
so
that
we
can
have
some
time
to
review
it
prior
to
Thanksgiving
and
sometime
after
Thanksgiving
and
see
where
we
are
at
because
I
just
think
it
would
be
really
great
to
stand
up
at
the
contributor
summit
and
say
that
we
know
what
y'all
do
here
now,
because
it's
all
written
down.
Thank
you
for
that.
It
only
took
us
a
year.
It
would
be
really
great.
A
Josh
burkas
would
like
to
do
some
shoutouts
for
the
people
who
helped
plan
the
Shanghai
summit,
so
Meghan
Len
for
doing
all
the
logistics
and
legwork
from
thousands
of
kilometers
away,
booyah,
yang,
pings,
ow,
I,
hope,
I,
get
that
right
and
and
I
deal
hack
for
translating
all
the
new
contributors,
some
of
materials
and
many
other
things
besides
also
mr.
Bobby
tables
and
our
localization
volunteers,
for
getting
the
International
Forum
that
could
be
thought
I
don't
launched.
A: Duffie would like to shout out to Jason DeTiberus for always finding time to help dig into the Cluster API stuff, and Ihor would like to double dip and shout out to Justin Santa Barbara yet again; I think that's three weeks in a row. A shout-out for Justin for an extremely quick turnaround on a long-standing upgrade testing issue; this helps us get clean e2e and CI coverage for the 1.13 beta feature, taint-based evictions. And with that, we're about nine minutes early.