From YouTube: kubernetes kops office hours 20190913
A: Hello everybody, and welcome to kops office hours. Today is a very auspicious Friday, September 13th, 2019. I am your host and moderator, Justin Santa Barbara; I work at Google. A reminder that this meeting is being recorded and will be put on the internet, so please be mindful of our code of conduct, which boils down to being a good person.
A: I have pasted a link to the agenda in the group chat. Please feel free to add your name to the attendee list, as I am doing myself, and if you have items you want to talk about, please add them to the agenda at the appropriate place so we can be sure we get to them. Otherwise we can dive right in. First up, Ryan asks (this feels like a general question): who is going to KubeCon, and should we plan a meetup?
A: He's at KubeCon already? He's that keen! Yeah, it's a little early, but yes, it would be great to have a meetup. I have not finalized my personal plans yet, but it's always great to see everyone when we're there, and it sounds like some people have definitely said yes; Mike and Peter, it sounds like you two have already committed.
B: I spotted this on Twitter, I think yesterday actually; someone was saying, hey, everyone should be aware of this. Basically it's a bug with the etcd v3 client. I didn't dive in too deep to see where we are susceptible; I just really wanted to call it out. It's a gRPC dependency upgrade that's required for it.
B: They did not backport this from 1.16, so it will not be available in 1.15 or earlier. The weird thing about it is they're basically saying, here's the way you go patch it yourself, but I don't know many people that are actually building their own Kubernetes. I'm sure some people do, but not everyone. So I just wanted to call it out there; personally I'm not that stressed out about it.
A: Thank you for bringing attention to that. I don't know whether we should ask in more detail on the issue whether anyone has actually dug in, as I believe we only ever talk to etcd over localhost, and I don't know that anyone has explicitly confirmed on the issue whether this only affects multi-member setups or whether it's just a reconnect bug. Maybe we can try to find the right place to ask, but yeah, I'm certainly not that worried about it either.
A: We recently had this come up in conformance, where there is a distinction between a Kubernetes distro and a Kubernetes installer: a distro forks Kubernetes, whereas an installer does not fork Kubernetes. So we are an installer, a proud installer, and not a distro. But if we did this, that would make us one of those distros instead, so, you know, right.
B: If anything, I think it's also just valuable; maybe we should think of the right way to document this, as Kubernetes operators and best-practice providers, being kops. This is something to be aware of. Currently our team hasn't necessarily taken a stance on it, but if people are running into this, then we can talk about how it also kind of helps push people toward keeping their clusters upgraded.
A: Yeah, it is a little unusual not to backport this, but okay. I guess it is a fairly big change, so yes, it would be about the trade-off against the consequences of the update. But in general what you said is great, and we should better understand why they didn't backport it. I suspect it's because most people are not running this; even people that are running etcd HA are not running this style of HA, right?
B: The scary message on our issues was basically: by the way, as soon as 1.16 is cut, the CNCF is no longer doing conformance verification on 1.13, and at that time we weren't 1.13 conformant. So I ran the conformance on 1.13, 1.14 beta, and 1.15 alpha; 1.13 and 1.14 are approved, so we're now conformant. 1.15 is still waiting on their approval, but I don't think there will be a problem. And when I mentioned that, the discussion came up; they were asking us:
B: Are we an installer or are we a distribution? I got a little defensive at first and was like, we're everything, we're a distribution! Then I messaged Justin and tried to call his attention to it, and I was prepared to talk about it today; I thought we could discuss it. But what it really came down to is the definition of those terms, and they decided to clarify that a distribution is, in theory, forking and modifying the code.
B: We definitely don't do that, and don't want to, so that's good: we are officially an installer. And now that we've done this, I was going to come up with a way to make sure we document it as a release procedure, maybe in the alpha and beta timeframe, since it's just nice to have it done before we come out with our major versions; we need to find the right spot to capture that. But yeah, we're now conformant again. Woohoo!
A: Thank you for doing that. Did you say you did 1.14? Yeah, you did 1.14 and 1.15; that's wonderful. And they accepted 1.14? That being said, for the previous alpha of ours I looked back in the history, and they accepted one a while ago. So okay, that's sort of what I was wondering: how early can we do it?
A: Okay, thank you so much for doing that. It's actually really great that you did the 1.14 beta on that note as well, because then I can say I like the idea of doing it at the beta timeframe, or the last-beta timeframe. That's a good sanity check, yeah.
B: We haven't had any problems; we've tested a bit of stuff with it. Today I'm just going through the cherry-picks and making sure everything is up to date there. There are just a couple of things, like a flag, and someone wanted a new region added for GovCloud; those are the kind of things where, if we can get them in, that would be great. So a couple of cherry-picks are open, but if anyone has a strong disagreement, if it's too late for those, feel free to say so.
A: Thank you. If you have a work-in-progress PR, feel free to upload it and mark it "work in progress", even if it's just "I'm stuck here"; it's a nice way to gather people to look at the problem in a very concrete way. I don't know off the top of my head if we have any add-ons, I guess we'd call them, that reference a non-constant directory.
A: We might have to template it; we may have to add another function that gives the correct location, because we do that already. For example, I think the API server has a bunch of directories that change locations on different OSes, and it would be great to do the same in kube-controller-manager for this exact reason. Some of them are written down somewhere, and it would be great to find directories that work everywhere, but I haven't found them yet.
A: And other than that, are people okay? Happy? Great. 1.16.0-alpha.1: I'm going to continue to try to push on that. It would be great to get it out before Kubernetes 1.16 lands; it is taking it to the wire, but there's a blocker. I figured out a way to do enough labels that we can sort of get going, but we do need a controller. The labels on the master nodes, which have to be done anyway for bootstrapping purposes, are now done by protokube.
A: I just figured I would put it on the list. I don't know if we want to, but I think we should advance them all at the same time. Yeah, we probably should do a beta when we cut the alpha; when we cut 1.14, we should bump the other two. I think that makes sense. It's basically a statement that features are now going to go into the 1.16 release rather than into the 1.15 release.
A: Although 1.11... 1.11 is fairly old, but four releases... I don't know, we can certainly support a couple more than three releases in terms of this forced behavior, but we should look at some sort of softer warning. At least getting people a little bit more up to date than they were would be great. I haven't had any complaints from anyone yet either. Well, it's only in the alpha channel, so I imagine the people that are on it... but yes, exactly.
A: I know there is someone out there running 1.8, because I see the downloads, and I really want that person, whoever it is, to upgrade, because that one is getting a little old now. I don't know who they are, but if that person is listening, please do contact me; I am justinsb on Slack, and reachable on various emails and GitHub. I'd like to know whether there's a reason why you're still on 1.8 that we can help with, or anything.
A: Yeah, so we should note that. Is the 1.13 AMI in the stable channel or not? I don't believe so; we should promote it, and then I guess the 1.14 AMI. I think that's a good idea. And this is releasing, yes, and we're now doing AMIs from the new repo, which is github.com/kubernetes-sigs/image-builder.
A: I thought we got the PR merged; it's the images from kube-deploy, and I'll paste the link to the subtree. Basically, there are a bunch of these image tools that have arisen over the years, and we are trying to unify them, so we put them into this new subproject under SIG Cluster Lifecycle, and we are now working there. There are other tools too, and we can hopefully identify some overlap between them all. Hopefully I didn't make a typo.
F: Sorry, I started talking while muted. So yeah, I was able to test it; I'm actually running it on three of my clusters right now and rolling it out to the rest of them, so it seems fine. The only gotcha today is that the rollout is multi-phased: you basically have to first be configured to do both, primary mesh and secondary memberlist, then roll the masters, then roll the DNS controller; you don't have to roll all the nodes.
F: Then you can flip to primary memberlist, secondary mesh on the nodes, the masters, and the DNS controller, and then you can flip to memberlist only as primary: you do all the nodes, then the masters, then the controller. The thing is, when I do the kops apply (the kops edit and all that), it doesn't update the DNS controller, which is actually good in this case, because I don't want to do it at the same time.
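The reason the flip described above has to be phased can be sketched as a small check: during a rolling update, already-rolled and not-yet-rolled members must always share at least one gossip protocol, or the cluster would partition. A minimal sketch; the phase list mirrors the rollout as described, and this is not kops code.

```python
# Why the gossip rollout is multi-phased: at every step, rolled and unrolled
# members must share at least one protocol, or the cluster would partition.
# Protocol names mirror the discussion (mesh, memberlist); this is not kops code.

PHASES = [
    {"primary": "mesh", "secondary": None},          # starting state
    {"primary": "mesh", "secondary": "memberlist"},  # step 1: speak both
    {"primary": "memberlist", "secondary": "mesh"},  # step 2: flip primary
    {"primary": "memberlist", "secondary": None},    # step 3: memberlist only
]

def protocols(cfg):
    """The set of protocols a member with this config can speak."""
    return {p for p in (cfg["primary"], cfg["secondary"]) if p}

def safe_step(old, new):
    """A rolling update is safe only if old and new configs can still talk."""
    return bool(protocols(old) & protocols(new))
```

Each adjacent pair of phases keeps old and new members connected, while jumping straight from mesh-only to memberlist-only would leave no common protocol during the roll.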
A: I mean, I think... my hope... So it sounds like there's an issue there which I want to get to the bottom of, but I can't think what it would be right now. My hope would be that certainly when we do a roll of the master and a version bump of the DNS controller, it would apply that change. Yeah.
A: We tend to gate new features on the Kubernetes version, so you have to be running a new version of kops, but then we'll introduce memberlist (I keep saying "member less"; memberlist). For Kubernetes greater than or equal to 1.16 we'll introduce, I guess, dual stack, and then at Kubernetes greater than or equal to 1.17 we can switch to memberlist only. That way, and this would just be the default, is what I'm sort of saying, so if you don't do anything... I'm wondering.
A: That'd be great, yeah. I'm really surprised; I know that there were some great changes that went in which hashed the manifests, so that they should basically pick up everything. We certainly have had problems with this in the past; Ryan did a PR that addressed some of those, so it's definitely worth having a look at that to understand, because it should now be fixed. It's interesting whether you reported this or not, but yeah, please do upload the steps and we can look at that.
D: Hey, it's Jesse. I hadn't seen this agenda list, so I'm just seeing it now, but basically the question, and I'm not sure if you discussed it yet: as part of 1.16 support in kops, I'm wondering, is there a timeline for it, or is it being planned? Because I tried it myself, and there seem to be some issues that need to be fixed there.
A: There are issues, yes. So this is in the release plan; we were just talking about the 1.16.0-alpha.1. This is a little bit of a... there's an incompatible change in Kubernetes with the labels, and that's what we sort of need this controller for. So it's a little different: in theory you need kops 1.16 to run Kubernetes 1.16, but in the past it has basically always worked anyway. That is not the case today.
A: So I think, technically, head actually will now work, but it doesn't do these labels. There are these labels that steer traffic, steer pods, and if I recall correctly they are the node-role labels and the kops.k8s.io instance-group labels. The way this previously worked was that kubelet would set those labels itself on its own node at boot-up.
A: That is a potential security issue, because if you escape from a pod and get access to the kubelet credentials, you could change your own node labels and steer traffic towards it. You could say "I'm a master" and get master workloads assigned to you. So the idea is to prevent nodes from setting any labels that are used to steer traffic.
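The restriction being described can be sketched in miniature: labels that steer workloads are only ever written by a trusted master-side controller, and a node's self-reported labels are filtered. This is a toy model in Python, not the actual NodeRestriction or kops code; the label prefixes mirror the ones mentioned above.

```python
# Toy model of the restriction discussed: steering labels come only from a
# trusted master-side controller; a node's self-set labels are filtered.
# The prefixes mirror the discussion; the "API" here is just a dict.

RESTRICTED_PREFIXES = ("node-role.kubernetes.io/", "kops.k8s.io/")

def apply_self_labels(node_labels, proposed):
    """What kubelet may still do: set only unrestricted labels on its own node."""
    for key, value in proposed.items():
        if key.startswith(RESTRICTED_PREFIXES):
            continue  # rejected: a compromised node could steer traffic to itself
        node_labels[key] = value
    return node_labels

def reconcile(node_labels, desired):
    """What the master-side controller does: apply the authoritative labels."""
    node_labels.update(desired)
    return node_labels
```

In this model a compromised node can still label itself with harmless keys, but it cannot claim a steering label such as a master role; only `reconcile`, running with the controller's credentials, can do that.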
A: I think just earlier this week I got a PR in that worked around it partially, in that the masters now use protokube, a different component, to apply their own labels for the master steering, and we've stopped telling kubelet to apply the node labels.
A
It
cannot
node
labels
that
it
cannot
apply
anywhere,
so
I
think
technically,
head
will
now
work,
but
what
we
have
broken
is
we've
broken
like
anyone,
relying
on
the
the
node
role
other
than
the
master,
node
role
or
or
the
instance
group
labels.
So
to
do
to
fix
that
we
need
to
have
a
way
to
apply
those
labels
to
the
nodes
that
doesn't
rely
on
cubelet
doing
it,
which
is
probably
going
to
be
a
it.
A
Morris
has
to
be
a
controller
that
runs
on
the
master
that
observes
the
nodes
and
labels
them
I,
see
I
added
a
link
to
the
piarc
in
the
chat.
Thank
you,
and
so
yes,
there
is
an
item.
I
have
a
work
in
progress,
PR
which
is
much
bigger
and
messier
than
it
needs
to
be,
and
I
am
working
on
basically
making
that
a
tolerable
merge,
because
but
anyway
it.
A
This
is
a
controller
that
will
run
on
the
master
and
its
first
functionality
will
be
to
start
labeling
the
nodes,
but
we
actually
have
a
bunch
of
things
which
we
want
to
start
doing.
In
a
controller
coming
up,
this
is
a
cupola
controller
and
we
like
we,
there
was
a
idea
of
POC
before
where
we
have
integration
with
cluster
API.
A
That
was
a
controller
some
of
the
add-on
stuff
that
we're
talking
about
would
probably
run
as
a
controller
to
sort
of
address
some
of
the
issues
with
s3
buckets
and
my
them
being
a
little
bit
fuzzy
and
when
they're
applied.
So
it's
it's
a
great
step
that
we're
sort
of
being
forced
into,
and
it
is
unfortunate,
but
that
is
I
am
I,
am
aware
of
it,
and
I
am
trying
to
get
it
going
I'm
just
looking
at
how
bad
the
PR
is
at
the
moment.
It's
not
terrible
at
the
moment.
A: We have to map from an instance, from an AWS or GCE or OpenStack or whatever instance, to understand the labels that should be applied. In other words, the central controller has to figure out what the labels should be, and that's sort of the nastiest bit of the code: we have to use the AWS API.
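The mapping step just described can be sketched as two lookups: cloud instance to instance group (for example via a cloud tag), then instance group to its labels. A hypothetical sketch; the tag name and group fields are assumptions for illustration, not kops's real API.

```python
# Sketch of the mapping the controller has to do:
#   cloud instance -> instance group -> labels.
# Field and tag names are illustrative, not kops's actual API.

INSTANCE_GROUPS = {
    "nodes": {
        "kops.k8s.io/instancegroup": "nodes",
        "node-role.kubernetes.io/node": "",
    },
}

def labels_for_instance(cloud_tags):
    """Look up an instance's group from its cloud tags, then the group's labels."""
    ig = cloud_tags.get("kops.k8s.io/instancegroup")  # e.g. an AWS instance tag
    if ig is None or ig not in INSTANCE_GROUPS:
        return {}  # unknown instance: apply nothing rather than guess
    return dict(INSTANCE_GROUPS[ig])
```

The awkward part the speaker mentions lives in the first lookup: for each cloud (AWS, GCE, OpenStack) the controller needs provider-specific code to fetch those tags.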
A: Thank you, but yeah, it's a good question; thank you for bringing it up. It was an annoying breaking change, but it is in Kubernetes, and it will drive a big step forward for kops, I think, if we can get this going. Cool.
B: In case anyone is unaware, there are membership requirements in the community repo for Kubernetes. As background for anyone watching or anyone here that doesn't know: once you've done a couple of contributions and we kind of trust you, you talk to one of us that's already a member (I think you need two people to support you), and then you can join the Kubernetes org. Being an org member is really helpful, because then you don't need us to approve your tests.
B
Whenever
you
make
a
new
PR,
you
don't
have
to
do
it.
You
know
get
someone
to
do
okay
to
test
and
then,
if
you're
interested
in
working
your
way
up
and
getting
more
involved,
the
nice
level
up
is
really
reviewer,
which
means
you'll
automatically
get
tagged
on
to
review
PRS,
and
it
really
helps
a
lot
of
us.
B
You
know
take
a
look
at
the
the
ones
that
won't
even
looked
at
to
really
say
this
is
ready
for
approval,
so
it's
kind
of
the
first
line
of
defense
and
for
over
the
years
we've
had
reviewers
and
then
a
lot
of
us
that
we're
reviewers
became
approvers,
and
so
it's
nice
to
have
that
first
tier
to
kind
of
sort
through
like
oh,
you
just
need
to
rerun
your
tests
or
you
need
to
also
run
this
other
thing.
So
it's
really
crucial
for
us
and
it's
awesome
to
get
more
people
involved
and
yeah.
B
If
you
ever
have
any
questions
or
whatnot
feel
free
to
message
me
at
my
explain,
I'm
happy
to
help
and
support
people
getting
more
involved,
and
my
last
note
on
this.
Anyone
can
really
bring
yours
once
you're,
an
organ
member,
and
so
that's
actually
actually
become
a
reviewer.
You
need
to
start
review
so
the
main
way
that
we
tracked.
That
is
you
go
through
some
PRS.
B
You
comment
on
some
code
and
then,
when
it's
ready
you
well,
you
can
assign
it
to
yourself
and
then
you
do
/lg
TM
and
once
you
start
doing
that
on
I,
don't
remember
what
the
number
of
pr's
is
probably
20
or
something.
Then
we
can.
You
know
talk
about
promoting
you
into
reviewer,
so
sorry
for
the
monologue,
but
just
thought
I'd
do
the
you
know
that
sales
pitch
we
love
more
more
people
is
always
helpful.
So
thank
you.
A: There is another agenda item; I don't know who wrote this, but there's a report in the kops channel from Jesse saying that there may be, or he has observed, a problem with gossip DNS, at least on OpenStack, on 1.15.0-alpha.1. I added that as a blocker to cutting 1.15.0-beta.1. I thought I had tested this, so I don't know; I have not tested on OpenStack, and I am not sure what the delta would be here.
A
Ok,
that's
odd.
He
says
also
that
oh,
it's
in
master.
Ok,
this
could
be
ok,
I
understand
alright,
so
this
is
a
master,
but
not
on
with
the
team.
It
looks
like
look
at
the
issue.
I
will
leave
as
a
blocker
just
in
case
it's
something
else,
maybe
that,
but
it
would
certainly
be
a
blocker
for
116
alpha
0
Oh
once
it
sees
here
awful.