From YouTube: Kubernetes kOps office hours 20201009
Description
Recording of the kOps office hours meeting held on 2020-10-09
A
Hello everyone, today is October 9th, 2020. This is kOps office hours; I'm your moderator, Peter Rifel. This is a reminder that the meeting is being recorded and will be put on the internet, so please follow the Kubernetes code of conduct, which comes down to: be a good person.
A
We'll start the agenda with reviewing action items from the last meeting. First is the 1.19 branch plan.
B
This may have been on me; I was a little neglectful this week of formal items. It seems like... the good news is we were active in Hacktoberfest. The bad news is that sort of seemed to take over everything else.
C
Is just the plan? That's...
D
I don't think we chose... I don't think it's a blocker for branching, because we can always cherry-pick.
B
I do think that any of the maintainers can branch. I don't know... I think, Peter, you've pushed a branch in the past intentionally. I remember someone pushed a branch unintentionally, so I know that other people can push branches. But yes, I'm happy to do it.
B
If another maintainer wants to try it, that would be an interesting thing, to make sure that... you know, we're trying to get rid of all the centralization points, and this would hopefully be a relatively easy one to cross off the list.
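The branch-then-cherry-pick flow discussed above is ordinary git mechanics. A minimal local sketch (a throwaway stand-in repository; branch name and commands are illustrative, the real workflow pushes to the kubernetes/kops remote):

```shell
# Create a throwaway repo standing in for kops.
set -e
cd "$(mktemp -d)"
git init -q
git config user.email demo@example.com
git config user.name demo
echo v1 > feature && git add feature && git commit -qm "initial work"

# Any maintainer can cut the release branch from the current mainline.
git branch release-1.19

# A fix lands on the mainline after the branch was cut...
echo fix >> feature && git commit -qam "fix: something important"
fix_sha=$(git rev-parse HEAD)

# ...so a late fix is not a branching blocker: cherry-pick it onto the branch.
git checkout -q release-1.19
git cherry-pick "$fix_sha"
grep fix feature   # the fix is now present on release-1.19
```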
A
Yeah, anyone want to volunteer to do that? Or I can do it.
A
Cool, I'll give that a shot after the call. Next on action items: I had a PR open for adding Bottlerocket support, and I needed some advice on the task dependencies, with the user data now dependent on the CA, and I think Justin was going to take a look at that.
A
Okay, nothing more on that one, really. Next.
B
On that note, I think it's a valid concern to bring up. I mean, on that note, would it be possible, for example, to run an initialization pod?
C
So, and the other question is: do we want it in 1.19, or do we want to... I'm not sure how ready it is, or...
B
Okay, I mean, the goal of the feature flag is to clearly delineate the areas where we do not make a guarantee. It's not that we will accidentally break something; it's that we reserve the right to deliberately break it, because that makes for a better experience. But yeah, an easy test would be nice, though it's not going to make the feature magically...
A
Okay, so I think we can move on from that. The next action item from last time was NLB support.
C
Yeah, that was added by me, sorry to say, but I think we made a mistake last time complicating the feature for compatibility, for making it work with both NLB and Classic Load Balancer, and priorities, because this should merge as soon as possible, and then we can tweak it more, adding more stuff to it.
C
The person already did some of the changes. I'm not really sure, because I didn't actively review it; it's kind of huge, and I think it's still a work in progress. But if our changes to allow both ELB and NLB at the same time are complicating things, maybe we should strip those things out and make it as simple as possible, at least to get the NLB support in, and then maybe some of us can help improve it.
C
Somehow I think things are very slow there, so either we take a decision that this is no longer a blocker, and we put a big label on 1.19 that it doesn't work with ACM and I don't know when it will, or we try to simplify things here.
D
Let's see; I was talking to them. We are exploring an alternative where you just flip it, and then it leaves the old one behind on the first run, and then the next time you do an update cluster, it cleans it up. So basically deferring the cleanup of the old one.
B
In other words, to add the NLB feature, like a standalone NLB feature, for a brand new cluster, opt-in, or for an old cluster where you accept that for five minutes you will have downtime on the API server, right? And then... so we're basically taking this PR and we're suggesting to split it. Yeah, on that basis, that seems reasonable. Yeah, okay.
B
Seven commits, so I can have a look at what Christian has already done. Maybe that is...
B
The size of the PR will be greatly reduced if we split it up, with "introduce NLB" as the first one, which would then mean that we can really see those sort of sticking points, right? Because I don't think anyone objects to the NLB or has any issues with the NLB support. I'm sure that we'll find a couple of minor things, but, you know, nothing serious. But yeah, the nuances of how the prioritization works are much trickier, and so, like, having a small PR which introduces...
D
Okay, so I'll say: where did I...
A
Okay, so moving on to the open discussion. The first item is a PR that I opened.
A
The first PR is kubetest, which is the test tool that we use to run our e2e tests. They have a new kubetest2, and I've been wanting us to migrate to it, mostly because the original kubetest is in maintenance mode, so they are not accepting big features for it, and we want to add new features to our e2e testing so that we can, you know, test more and have improved test coverage. So this PR kind of sets up the initial setup for kubetest2.
A
There are complications with how we'll run periodic jobs, because they don't clone the kOps repo, and so we have to decide how we will build the kubetest2 kOps binary and which commits we use for that when we're doing periodics. I have a few suggestions for that, but yeah, that's mostly how I was envisioning it. Right now it only does a build and publish to GCS, but it should be good to extend further to actually launch clusters and run the Kubernetes e2e test suite.
B
That's also the biggest go.sum I've ever seen. I hope that is not accurate... but it is, yes. It doesn't really matter, so yeah. It's a good call on splitting that out to be a separate Go module.
A
Cool, no other comments on that? We can move on. I'm going to rearrange these. John, the node labels from cloud tags.
A
Okay, and then (we're flying through these) the last item in the open discussion is choosing the next meeting day and time.
A
We did, let's see... we've done Thursday at 9 Eastern and Tuesday at 9 Eastern.
B
I don't know if we... I'm trying to look at, like, the sort of... the different one is Tuesday at 11, but I feel like that conflicts with something, in terms of timing. So far we've done early morning ones. 11; I think this is Eastern, right? Yes, 11 Eastern.
B
So we could do another... I mean, we could do another Wednesday or Thursday, earlier morning. I think we did have some people join from Europe, mostly the normal Kubernetes people, just different groups, like, not necessarily the kOps groups, so that was good.
B
Actually, I don't think I can do it. I have my routine, which butts into the very beginning of that, so either nine or ten, I think, would be better, on either Wednesday or Thursday. It looks like they all have either two or three votes, so...
C
May I ask an unrelated kOps question, or slightly related? About the cluster autoscaler arm64 build: do you have any feedback on that from the maintainer there?
E
Yeah, I think they said they're more than happy to accept a PR for it, and there's not currently a plan from the maintainer to raise that PR themselves, but they're happy to review it whenever it's raised.
E
That's all right. If you need to make sure eyes get on it, feel free to ping me and I can get in touch with the right people.
A
It works, cool. Any other discussion items before we move into the release plans?
B
It's more of like a... I think the core idea is to encourage people to work on open source and hack on open source, and we are interpreting that in an ongoing and fluid way, trying to decide what that means. So we had a couple of sort of kick-off meetings where we basically talked about things that we're interested in working on over the month-to-couple-of-months horizon, among the people that showed up. And it is sponsored... the official October event.
B
The Hacktoberfest event is sponsored by DigitalOcean, and if people send three or four (I can't recall) PRs, I believe DigitalOcean will send them a t-shirt. That's kind of beside the point, I think, of how we are doing it. But yeah, we're just trying to encourage either new contributors or existing contributors to work on something over that sort of time period, and also using it to experiment with different time slots.
B
So if you're interested in hacking on kOps, or hacking on something unrelated to kOps (ideally related to Kubernetes) in that group, it doesn't much matter, other than the fact that we won't be able to tell you much or won't have any real opinions about other things. Then it's great to sort of come along if you can, or just work digitally, like on GitHub, and look for the Hacktoberfest tags on repos or issues or things like that.
B
That's very much in the spirit of the event, just coming along and just trying to get... so.
F
This is Wednesday, Eastern time, 9 a.m., this coming week? Yes, I think it's the 14th. The 14th, okay.
F
...have some issues.
B
Sorry, just on the previous mini-topic around ARM images: we actually talked about it in the Hacktoberfest meetings, I think, but I think one of the blockers for the etcd-manager ARM images was trying to find a good base image. I think we found the Debian 10 image that Docker (Docker Inc.) maintains, and it seems reproducible and good in that regard. I just have to make sure that it is actually reproducible and so forth; so far I've failed to do so, so I'm going to be...
B
I intend to continue working on that, but once we are confident that it is a reasonable base image that we can reproduce, then I think we will have a base image, a stock Debian image, that we can work from for what I call the fat images, the ones that have not the distroless set but a couple of extra binaries.
A
Cool. Moving into the release plan: any releases we need to do in the next two weeks? I saw something merged into 1.18. This...
C
There are some things for 1.18. One of them is that 1.18 doesn't work on GovCloud because of some S3 thing; it's fixed, it's there, so that can be done. There are some requests, and also kops upgrade doesn't work on 1.18.
C
There is a PR there that's not quite there, but once it is in... I hope that person can finish it; if not, I will try to. I think I can try to put something into 1.18, if you don't mind, but someone would have to do... well, Justin would have to do the release, as we did. So are we okay with the 1.18 release?
B
Yeah, I think so, certainly from the point of view of the work. I just want to clarify: you said kops upgrade is broken. You mean literally the kops upgrade command; you do not mean that upgrading kOps is broken? Yes. It's better, not great, but yeah. It's still... that's much better. All right, good.
C
Yeah, I think there are a few other things in nodeup and protokube that are not yet done. Do we want to wait for something before it, or do you want to do it over the weekend?
B
I don't feel like it's a rush, but when that fix is in, then we should do a 1.18.2, is, I think, what we've decided here. Yeah.
A
The next is that client cert auth doesn't work; that's blocked by the NLB. Once the NLB PR lands, I can...
A
Yeah, I could even build a PR off of his first few commits, depending on how easily they're separated, just to get the target group and security group changes in place for the client cert auth issue.
C
Well, these were some things for Justin to look at, I think, all three of them.
C
I still think that it might be easier to just revert to the previous image, so that it would not mean so much rebasing for the ARM PR, considering that we will replace it anyway.
C
For the previous one, there was already... you found it, but it was unsupported, at least.
B
No, it was like the Kubernetes base image, I think, yeah, and the problem is when they went to the next version, and then with Buster, I think they dropped a bunch of packages from that image.
B
Literally scratch, and install just the packages we need by just expanding them without doing an installation, which is probably better than any of the other options, except it's a little bit more work to maintain. But I don't know; if I can't reproduce the other image, that is, if I can't reproduce the official Debian 10 image, then that will be, I guess, the fallback plan.
B
The nice thing is, if we can reproduce the Debian 10 image, I feel like we don't have to maintain it. We can just assume that they are doing a reasonable job of maintaining the official Debian 10 image.
G
It's pretty much right there. So there's a bug with Terraform; I'm going to call it a bug for lack of a better explanation. If you set the load balancer outside of the ASG definition, like with the load balancer attachment, on Terraform 0.12, every time you run Terraform it attaches or detaches. It doesn't matter what the state of the Terraform configuration is; the actual apply changes that every single time. I've experienced it with some clusters and not others.
G
So I created that PR so that the load balancer attachment happens in the same block as the ASG creation and not as an external attachment. However, there is downtime for the switch. Peter offered a solution: we may be able to do something where we modify the Terraform state before we do the first apply for that PR. Unfortunately I haven't had time to actually test that, and I haven't found any other way to avoid taking a downtime.
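The state-modification idea mentioned above could look roughly like the following; this is an untested sketch, and the resource address is illustrative rather than the exact name kOps generates:

```
# Inspect which standalone attachment resources exist in state.
terraform state list | grep aws_autoscaling_attachment

# Forget the standalone attachment (this does not touch AWS) so the
# inline attachment on the ASG block becomes the sole owner of it.
terraform state rm 'aws_autoscaling_attachment.master-us-test-1a'

# Confirm the plan no longer wants to detach and re-attach the LB.
terraform plan
```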
G
So if anyone has a suggestion, I'd be open to modifying that. But that PR may conflict with the NLB PR that's currently open as well, so we may need to take a look at how we want to merge those, or which one we want to merge first.
A
You can define the attachment of a load balancer to an auto scaling group either as its own resource or as a field within the autoscaling group resource, and if you have both, or some strange combination of those, it will try to flip-flop back and forth between which one is actually defining the attachment.
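The two declaration styles look like this in Terraform 0.12 HCL (a hedged fragment; the names are illustrative). Declaring both for the same ASG and load balancer is the combination that produces the attach/detach flip-flop:

```hcl
# Option 1: attachment declared inline on the ASG resource.
resource "aws_autoscaling_group" "masters" {
  name           = "masters.example.com"
  min_size       = 1
  max_size       = 1
  load_balancers = ["api-example-com"] # inline attachment
  # ...
}

# Option 2: attachment declared as its own standalone resource.
resource "aws_autoscaling_attachment" "masters" {
  autoscaling_group_name = aws_autoscaling_group.masters.name
  elb                    = "api-example-com"
}
```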
G
In my case, I only had one defined, which is the standard one kOps puts out, and I was still seeing that issue, so I'm not sure exactly what's triggering it.
G
Sorry, did you say there was a bug open for this? There is... well, there is a bug that was closed; their recommendation is pretty much "don't do that", or just ignore the load balancers on the ASG if you're using an external load balancer attachment. Sorry, is there a kOps bug for it? There is a kOps bug for it. Here, let me pull up the PR; there are links to that as well.
A
Okay, I think everyone's getting 16 minutes of their time back, and we'll see each other in two weeks, or next Wednesday if you can make it. But otherwise, have a good weekend.