From YouTube: Kubernetes kops office hours 20200619
Description
Recording of the kops office hours meeting held on 20200619
A
Hello everybody, this is kOps office hours. Today is Friday, June 19th. I'm your moderator and facilitator, John Myers; I work for Proofpoint. A reminder: this meeting is being recorded and will be put on the internet. Please be mindful of the code of conduct, which is basically to be a good person. I've put a link to the agenda in the Zoom chat.
C
Containerd: okay, so.
B
Some
guys
from
sig
storage
approached
me
and
peter
to
help
them
test
sc
linux,
on
santos,
once
we
got
into
that,
we
found
out
that
the
container
d
doesn't
build
the
run
c
with
sc
linux
support.
B
So this would be only a temporary thing, because I already submitted PRs that were merged for containerd. So the next containerd will be SELinux-enabled, and because this is for 1.19, I don't think there's any chance we will ship with the current Docker version.
B
So I know that Justin is not a fan of using the Docker binaries again, and I'm not a fan of the mirror either.
D
We've had our troubles in the past sourcing from that particular location, but...
D
Yeah, I think that's reasonable. Can I ask: is it technically possible today to override the containerd location? I presume there's a CI build or something of your change to containerd. Is it technically possible to repoint kOps to use that tar file, that tar.gz file?
B
That was something that I wanted to propose. Now, since we are no longer using the RPM and Debian files, we are pretty much down to very few tar.gz files, let's say one or two, if we take into account the containerd release. So it wouldn't be that much trouble to host them somewhere.
D
Yeah, it would be good if... I don't wanna host other people's binaries. It would be good if the containerd project, or whoever these binaries come from, built them the same way that kOps is going to be built, so they go into the same k8s artifacts location. It might be that that's further away, and so we might have to, if it's...
B
For containerd itself, I know that people in Kubernetes upstream had some discussions with them about providing more binaries; before, they didn't agree on something, so Kubernetes ended up having a nightly build of containerd that's used in testing.
D
Yeah, I don't love the idea of doing something which we're just gonna revert, but I think it's acceptable in this case. I'll probably personally take a look at the thing we talked about: being able to override the tar.gz. I feel like, if we had that, that's what I would say we should do, and so I will try to make sure we have that. But in the absence of that, I guess it's okay to do it temporarily. But anyway, that's...
B
A nice feature to add for 1.19, so I think I will take a look and add it.
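A minimal sketch of what such a cluster-spec override could look like. The field names are an assumption for illustration (the recording only agrees that the override should exist), and the URL and hash are placeholders:

```yaml
# Hypothetical override pointing nodes at a custom containerd
# tar.gz, for example a CI build carrying the SELinux fix.
spec:
  containerd:
    version: 1.3.4  # placeholder
    packages:
      urlAmd64: https://example.com/containerd-selinux-amd64.tar.gz  # placeholder artifact
      hashAmd64: "<sha256 of the tarball>"                           # placeholder checksum
```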
D
Okay, thank you. Yeah, I'm fine with trying this tar.gz from Docker for now. In general, I would like to do the things we talked about when we're closer to having our artifacts in locations that we better control, or have a better policy, or our own policies, around. But in this case I think it's okay.
A
Okay: containerd as default.
D
It might actually be better to do it on 1.20, on a version boundary, and do it for everyone. What can I propose? I know that there are ongoing challenges with containerd, just as there are with Docker; there are always problems, and...
D
I don't know what the state of it is in terms of which one: I don't know what GKE is recommending, I don't know what EKS is recommending, I don't know what everyone is recommending. I have a feeling that GKE is not recommending, or is not defaulting to, containerd right now, and I can try to find out why, and whether that's something that would apply to us as well.
B
Okay. And the last thing from this part is Bazel builds. I've been looking at making the images for kOps built for arm64.
B
I've been pretty successful with them, but there's still the question of what's the plan here: do we want to keep having both Dockerfiles and Bazel builds, or do we want just one of them? I don't mind doing both of them, but it's a bit confusing in some cases. I think at least the Dockerfiles may be a bit out of
D
sync. I can speak a little to this. We started with Dockerfiles; then we started using Bazel more, so there were both of them. And for the newer images, like the kube-apiserver-healthcheck sidecar that is only built in Bazel today, the intent was, my intent was, to go to Bazel.
D
I think k/k is having a little bit of a debate with Bazel right now, and I'm not sure who's winning that debate. It puts us in a sort of difficult position, because Bazel has its quirks in terms of syntax, but it is nice for being able to build repeatably, and I feel like it's easier to build a correct build with Bazel. But that's my opinion.
D
If it wasn't for the fact that k/k was having their debate with Bazel, I would say: let's go to Bazel. That was sort of our plan.
D
That's been our plan all along, and we've been gradually getting there. I think in 1.16 we switched dns-controller to Bazel; we're gradually going more and more Bazel. And the only one that's left, I think, is protokube, which is the one that has the complicated packages, which we're also going to get rid of; I saw your PR to do that. Thank you.
D
So I guess we're not going to make anything worse by just doing Bazel for protokube; we can make things better by going to Bazel for protokube, because at the same time we'll go to distroless. And we're not making our situation significantly worse, because if we decide across Kubernetes that we're going to move away from Bazel, we already have to reverse out some images anyway.
D
So it seems to me that if Bazel would make life easier for everyone, then, for now, we should do Bazel and finish that conversion, and then...
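To make the repeatability point concrete: a Bazel build of the whole tree is a single hermetic command, and each image is just another target. The protokube label below is an assumption for illustration; actual target names in the repo may differ.

```sh
# Build every target Bazel knows about in the repository, hermetically.
bazel build //...

# Or a single component, e.g. a hypothetical protokube target:
bazel build //protokube/...
```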
D
Yeah, I agree with that. There is one weird one, though: there's a Dockerfile that wraps the kOps CLI, which I think some people use sort of as an easier way to run kOps or something. That one might be a sort of special case. I...
B
By the way, sorry, related question: we have some component that I never heard about, kube-discovery. Is that still needed? Or, I mean...
B
Okay, so that's all from me. I think you're next, John.
A
Yes, okay. So I've been trying to remove the instance group from the model context, to make it so that if you edit an instance group and don't update the cluster, those changes don't actually affect your cluster. So I've been moving a bunch of stuff into the nodeup context, or the nodeup config.
A
So I'm looking for guidance there: whether that is actually the case for people, that it might be too large. And if that is possible, then do we have to come up with some other mechanism, like a per-launch-configuration directory in the VFS?
D
And so specifically, the question is whether people here are using those hooks or file assets with data in them. Yeah, and the canonical example is: if you are putting keys into your file assets, like PKI keys and certificates, those compress particularly poorly and are particularly likely to send you over the edge of the limit.
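For readers following along, this is the kind of file asset being discussed. Content declared in the spec travels in each instance's user data, and EC2 caps user data at 16 KB, so inlined PEM material eats into that budget quickly. The name and path below are placeholders:

```yaml
# Illustrative file asset shipping a certificate to control-plane
# nodes; the content rides along in the instance user data.
spec:
  fileAssets:
  - name: example-ca   # placeholder name
    path: /srv/kubernetes/assets/example-ca.crt
    roles:
    - Master
    content: |
      -----BEGIN CERTIFICATE-----
      ...elided...
      -----END CERTIFICATE-----
```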
A
Okay. And then here's a big one: the in-tree cloud providers have apparently been deprecated for a while in k/k, but we're still using them, and I was wondering if we wanted to start moving towards the out-of-tree cloud providers.
D
For example, actually, yeah... so I think it would be a great service to the community if we were to, for example, turn on the GCE cloud controller manager and turn on the AWS cloud controller manager and run tests with those, because I don't believe there are otherwise any tests on this. Oh, okay. I...
D
There be dragons. It would be great to add that support, honestly, but I would very much oppose anyone that said we should make it the default or mandatory in any version less than 1.20. Okay.
F
For example, the AWS external cloud provider only has a 1.18 alpha tag, so it's a very narrow use case at the moment, but I'm sure that'll get better.
A
Okay. Rodrigo: backup store for etcd backups.
E
Yeah, so before we had etcd-manager, we had this flag that let us change where the backups went. I tried using it recently because my cluster lists, kops list clusters, were getting very, very slow. When I dug into that, I found out that, well, AWS: when we use that, we list all the files under a certain path in the bucket, and AWS doesn't have a better way for you to do it for S3. This starts becoming very, very slow when you have all your backups under the same path as your configs, which my users are complaining about. So I tried moving my backups to a different location, which pretty much completely breaks etcd, because right now it's pretty much hard-coded to look at the configuration and put the backups in a specific place. So I tried overriding the path, which would need adding a base parameter to ManagedFile; that feels a little bit hacky.
E
I have that diff out, but I'm not sure if I like it, truthfully. The other options I thought about are just modifying etcd-manager to accept an additional parameter for where the backups actually go, or just completely removing the backup store once we remove the legacy, pre-etcd-manager setup. But I don't really like that option, because I really would like to move my backups somewhere else.
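Roughly, the setting under discussion: each etcd cluster in the spec carries a backup store location, so backups could live under a prefix (or bucket) away from the config path that kops list clusters has to enumerate. The bucket and prefixes are placeholders, and whether etcd-manager honors an arbitrary location is exactly the open question above:

```yaml
# Hypothetical: keep etcd-manager backups out of the state-store
# prefix that the kops CLI lists.
spec:
  etcdClusters:
  - name: main
    backups:
      backupStore: s3://example-backups/cluster.example.com/backup/etcd-main
  - name: events
    backups:
      backupStore: s3://example-backups/cluster.example.com/backup/etcd-events
```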
D
We should probably support, we should support setting that, as long as it's not breaking things. I will have a look at the details of the PR; the flag that you have here seems reasonable to look into. I'm going to try to understand better exactly what's going wrong. Another thing is on the root issue:
D
Yeah, okay, so that's... yeah. We could, I mean, we could always... maybe we could list more carefully anyway. But yeah. Okay.
A
Okay, so for my rolling update changes in 1.18, I put in a PR. Previously I had it so that if you set both maxSurge and maxUnavailable to zero, that would be what disables rolling updates for an instance group. Thinking about that, I thought maybe having a separate explicit field would be clearer, and so then the question is what the name of this field should be. I put in enabled: false, but it doesn't actually disable rolling updates entirely.
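For reference, the zero/zero convention looks roughly like this on the instance group. The explicit field is the open naming question, so the enabled variant below is the proposal from the PR, not a settled API:

```yaml
# Convention under discussion: both set to zero disables rolling
# updates for this instance group.
spec:
  rollingUpdate:
    maxSurge: 0
    maxUnavailable: 0

# Proposed explicit field from the PR (name not final):
# spec:
#   rollingUpdate:
#     enabled: false
```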
D
False
I,
like
the
I
like
the
notion
of
well,
if
you
make
it
enabled
false
and
you
make
it
the
overall
thing
someone's
going
to
say
that
it's
a
bug,
if
it
doesn't,
if
it
still
takes
yeah
right.
So
if
the
intent
is
that
you
will
later
add
the
ability
to
turn
off
the
painting,
then
that's
fine
or
someone
later
adds
that
and
then
enabled
is
a
good
name.
A
Draining and terminate, yeah, that might be what...
F
I think we have a lot of big efforts that are currently scheduled for 1.19. I thought it'd be useful to kind of get updates on each one. I listed some that were in my mind, but there are probably others, if people want to add them. Just starting with the OIDC support:
F
I
am
roles
for
service
accounts,
justin
and
I
were
working
on.
I
think
we're
at
four
prs
now:
they're,
all
overlapping
or
maybe
three.
The
goal
is
to
create
in
aws,
create
an
aws
oidc
provider
and
be
able
to
link
service
accounts
to
im
roles.
F
The
current
our
current
initial
implementation
is
going
to
be
the
cube.
Api
server
provides
the
the
jocks
documents
and
rather
than
publishing
them
somewhere
on
the
internet,
we're
going
to
start
with
exposing
those
through
the
api
load,
balancer
directly,
which
involves
making
sure
the
api
load.
Balancer
is
public
that
it's
using
public
dns
that
we
have
anonymous
auth
for
those
endpoints
and
getting
that
working
all
behind
a
future
flag
and
only
for
kubernetes
119.
F
I
believe
and
then
later
on,
we'll
be
able
to
iterate
on
that
and
work
towards
taking
that
taking
those
documents
and
publishing
them
somewhere
public,
presumably
an
s3
bucket
or
something
like
that,
but
that
would
be
a
bit
more
secure
because
we're
not
exposing
the
api
load,
balancer
and
so
it'll
work
with
more
people's
clusters.
I
don't
know
if
justin
has
anything
to
add
on
that.
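The documents in question are the kube-apiserver's service-account issuer discovery endpoints; exposing them anonymously through a public API load balancer is what lets AWS fetch them when validating tokens. The hostname is a placeholder:

```sh
# OIDC discovery document served by the kube-apiserver.
curl https://api.cluster.example.com/.well-known/openid-configuration

# JWKS document (the public token-signing keys) that it points at.
curl https://api.cluster.example.com/openid/v1/jwks
```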
D
Wonderful summary, yeah. I mean, just looking at the list, it's going to be sort of a pattern where we are trying to get it in with a configuration behind a feature flag. It's not necessarily gonna be a configuration we will recommend running with; it's more that, in order to get it in, we have to go through some steps, and this is a configuration that will work and start to exercise the functionality, and then we can sort of add the rest of the functionality in a gradual way. It's a really great feature, right: the thing it gives us is that our pods, including our system
D
pods, are able to have very specific IAM roles with just the permissions they need. And so dns-controller is now, I think, the only pod that has the DNS permission.
D
That's... I mean, yeah, yes: behind a feature flag, and it's open for discussion, but yes, that's how I'm imagining us doing it. If you turn on the two feature flags (in the PR that I proposed it was something like UsePodIAM and PublicJWKS), the behavior is: if you're using a load balancer, and the load balancer is public, and you set those feature flags,
D
Then
it
will
configure
aws
web
identity.
It
will
create
up
another
instant,
I
enrolled.
I
think
it
will
create
a
service
account
for
the
or
that
it
will.
It
will
associate
the
dns
controller
service
account
to
the
iam
role
and
the
iem
role
has
just
the
dns
controller
permissions,
and
so
yes,
in
that
set
of
yes,
we
get
where
we
would
go
with
this,
and
it's
obviously
like
open
for
discussion
is
that
we
would
have
five
five
or
six.
D
Maybe
iem
rolls,
instead
of
that
are
per
pod
rather
than
the
masters
role,
which
is
the
union
of
those
six.
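Linking a service account to a role, as described, happens through the role's trust policy: the OIDC provider is the federated principal, and the subject condition pins the specific service account. The account ID, issuer host, and names are placeholders:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::111122223333:oidc-provider/oidc.example.com"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "oidc.example.com:sub": "system:serviceaccount:kube-system:dns-controller"
        }
      }
    }
  ]
}
```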
F
And then the next on my list was add-on operators. Maybe you can comment on that.
D
Yes. This will be another one that I think will have to follow that pattern even more. You know, we're doing all this work on add-on operators, and the problem we have in kOps is, I guess, a narrower problem, which is being able to update add-ons without having to do a kOps binary release.
D
There's
this
cluster
add-on
subproject
of
c-cluster
lifecycle
that
is
investigating
also
like
basically
creating
up
add-on
operators
to
that
would
have
a
calico
add-on
operator,
for
example,
and
so
we
would
create
that
operator
and
the
operator
would
be
responsible
for
deploying
updated
versions
of
calico
and
for
monitoring,
calico
and
reporting
health
of
it.
D
Things
like
that
there
has
been
some
excellent
feedback
from
john
around
like
the
security
model
of
this,
and
I
would
I
would
maybe
associate
it
to
the
jay
david
pwks
case,
where,
like
the
initial
implementation,
is
likely
to
be
not
something
we
would
recommend
for
people
that
are,
you
know
like
particularly
security
or
more
security
conscious,
and
we
aim
to
get
there.
D
I
think
there's
a
very
open
debate
about
the
extent
to
which
we
can
even
start
that
ball
rolling
right
now
or
whether
we
need
to
like
like
clean
it
up
more
before
we
even
start
like.
We
can
argue
that
the
jwks
thing
is
probably
okay.
I
think
there's
a
very
open
question
about.
Are
the
adam
operators
at
that
point?
Yet.
A
Yeah, well, basically our requirements are: we need to have our production pulling from our own registry, and we want to make sure everything goes through lab before it gets into production. So things that just upgrade things by directly downloading off the internet are just not possible. And I'm also concerned about the privileges needed to allow an operator to install new things. You know, I prefer to have installation done with as little code as possible.
D
Yeah. And so one approach is that we try to figure out how to make the operators work in a way that generates manifests more or less as we do today, where we can generate them on the client side, and be sort of able to run in either mode: the pre-generation, more secure mode, or the post-generation, more dynamic, more flexible way. And we don't know yet whether we can get it in behind a feature flag in the dynamic YAML mode, or whether we need to wait for the existence of the pre-generated, or static, safer mode.
F
Cool. And then next was the Cluster API support.
D
This is further off, maybe; we'll see. But yes, I continue to try to add, again behind a feature flag and purely optional, the ability to... where I intend to start is replacing, or running, an instance group backed by a MachineDeployment, the Cluster API equivalent. And not for control plane nodes, only for worker nodes.
D
Probably... yes, we've talked about that as a good idea anyway. It certainly becomes a lot more complicated if we... yeah. If you want to actually dynamically create a new kOps instance group, I think we could; yes, it would likely require that, because we probably don't want to be writing to S3 from there. But I think we could probably back a pre-created...
D
We
could
have
a
some
mode
where
we
could,
in
the
cops
client
create
an
instance
group
which
would
be
backed
by
machine
deployment,
and
then
that
would
be
able
to
access
s3,
because
the
files
would
be
there
just
fine,
so
that
would
be
exactly
the
same
mode
of
operation.
It's
just
the
the
dynamic
creation
case
probably
requires
us
to
rethink
how
where
we
source
those
that
those
instance
group
yamls
from.
F
Arm64 support. Hakman, do you want to take that one?
B
Thank you. So yesterday the main PR for worker nodes was merged, so, yay, we have support for arm64.
B
I think, after looking a bit, we could make masters work on arm64 quite easily.
B
And I think the main blocker, well, the only one remaining, is etcd-manager. I know that Justin started something there, I saw the PR, but I don't think there's anything else.
A
I think possibly the next step would be to be able to run a cluster without any amd64 worker nodes, so to make sure everything we have that runs off the masters has arm64 images.
B
It would be good to fix things like kube-dns: it's not the multi-arch image that's referenced, it's the amd64 one. Most of them are pretty simple to fix, because there have been such images for a long time. There are some more quirky ones, like the autoscaler used by kube-dns: they don't have a multi-arch image, but they have multiple images, for arm and amd.
B
And
this
is
actually
something
that
we
will
have
to
debate
for
our
own
images
like
the
names
and
which
ones
we
want
to
push
to
repos,
which
ones
we
want
to
sideload,
because
as
it
is
now,
we
have
three
images
that
are
pulled
from
repo
proto
cube.
That
is
side
loaded
and
I
don't
know
it's
kubernetes
has
a
naming
convention
and,
as
we
didn't
have
this
problem
yet
well
until
now
we
never
discussed
it,
but
I
think.
B
So one of the next milestones proposed by Peter was to add some kind of periodic testing. This involves adding one or two new parameters: we have to be able to specify the image differently for master and worker, control plane and worker node, to define the cluster as: the masters will be on amd64 and the workers will be on arm64,
B
and at the moment there's no way to create such a cluster; we have only the instance type.
D
Yeah, I don't think it's a limitation of our schema, right; it's just the create cluster layer. There is also the... yeah, there is a PR that I have in; we have the set commands as well. I don't know whether we could use those; it would have to be a new syntax to make that work there. So I'm in favor of a flag in this case.
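As noted, the schema already expresses this: the image is set per instance group, so a mixed cluster can be written by hand today, and the missing piece is only a create-cluster flag. Image names and instance types below are placeholders (m6g is an arm64 Graviton family):

```yaml
# amd64 control plane (image and machineType are placeholders)
apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  name: master-us-east-1a
spec:
  role: Master
  machineType: m5.large
  image: example-ubuntu-20.04-amd64
---
# arm64 workers
apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  name: nodes
spec:
  role: Node
  machineType: m6g.large
  image: example-ubuntu-20.04-arm64
```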
B
Okay, I will work on creating that and adding some testing. For sure it fails at the moment in some cases, because I don't really think kubetest or sig-testing was thinking about testing arm stuff, or, I don't know, there aren't many tests for that. Last time I checked, many things were missing.
B
I already tested it when I created my first version of the PR two months ago: I hacked the create and had a special PR just to test it, and about 50 or 60 tests failed out of 600. Most of those tests were related to storage, because kubetest was pulling the amd64 version of busybox or something.
D
Yeah, that's... yeah. Kubetest is the framework that runs the e2e tests; I think you're saying the e2e tests fail on arm, or some of them fail on arm. This may be better now, and I think there was actually discussion that some of those may have been fixed. The sig-testing folks, in particular BenTheElder, raised the topic that the Kubernetes project ships a lot of architectures for which they have no test coverage, for which we have no test coverage.
D
So
once
again,
this
would
be
a
great
thing
for
us
to
do.
I
don't
think
we
should
feel
like
we
are
on
the
hook
for
making
the
ede
tests
pass.
It
will
be
a
good
contribution.
I
think
it
is
a
sufficient
contribution
for
us
to
get
the
testing
going
and,
let's
let
those
sig
storage.
In
this
case,
I
guess
like
replace
their
busybox
image.
D
Everyone would love to see those tests, not just in kOps but in the project, I'm sure; we want to know if it doesn't work on arm. That's important: AWS has arm chips, and we'll see what happens next week with Apple, so I'm sure that arm is increasingly relevant. I think everyone wants to make sure it works on these platforms, so...
A
Okay: release automation.
D
I had done, like, a... that's something from a couple of weeks ago. I'll take another look. Yeah, I lost sight of this. Thank you.
F
Yeah,
because
we
we
push
cops
controller
to
a
staging
gcr
repo
right
now
so
it'd
just
be
a
matter
of,
I
believe,
promoting
the
image
manifest
in
the
kds.io
repo,
maybe
as
well
as
pushing
any
other
images
that
we
want
posted
on
gcr.
D
I can take a look. That sounds... at least the container images, I think, should be smoother sailing at this point. So I'll see where we are. Okay, cool.
A
Okay, yeah, I want to mention some things I'm doing. I'm working on reducing the lifetimes of some of the certificates we have. I've gotten most of the easy ones; I'm having a bit of trouble with the API server's serving cert, mostly because that wants the IP address of the load balancer. So that means, if we're going to have it generated by nodeup, we have to get the user data generated after the load balancer.
A
So if anyone knows about OpenStack, I'd appreciate some help with that. The other one I'm doing is trying to get ready for switching to, for introducing, a v1beta1 API in 1.20.
A
So I'm trying to make sure that 1.19 is able to be downgraded to, once we go to a 1.20 which has v1beta1. And so that's why I'm trying to get all the stuff into the nodeup config.
F
Peter here, on these timelines: yeah, we were talking in Slack a few days ago about how Kubernetes 1.19 is delayed to August or September, I don't know exactly. But there's, you know, this list that we just discussed, and some more stuff, that is targeted for kOps 1.19.
F
But
if
we
want
to
follow
our
pattern
of
waiting
a
month
or
two
after
kubernetes
release
is
stable.
That's
a
lot
of
features
that
are
that
may
be
ready
sooner.
That
are
not
getting
released
in
cops
until
september.
F
So
if
there's
something
that
we
could
do,
we
were
throwing
around
ideas
of
like
adjusting
how
we
do
our
pre-releases
or
focusing
on
features
that
are
not
specific
to
kubernetes
119
and
getting
those
in
and
then
doing
pre-releases,
so
that
people
could
use
those
and
know
that
they're
more
stable
for
kubernetes,
118
and
earlier
or
some
sort
of
intermediate
minor
version
release
or
cut
or
feature
release
between
118
and
118.
Somehow
I
don't
know
if
anyone
has
other
ideas
or.
A
So which features are you talking about?
F
Well, the one that was mentioned was my instance types support, for dynamically getting instance types, but other ones too. Just looking at this list, I think the certificate lifetimes could be useful.
D
Yeah. One of the things we could also do is get our 1.19 beta out; there are going to be a lot more features in there. Let's get that beta out in a timely way, so that people can try those features. There are a lot of features, so let's have a longer beta period and make use of that. Specifically on the instance type one:
D
I
feel
like
that's
a
reasonable
one
to
back
port
because
or
to
add
the
sport
for
the
new
instance
types
like
that's
what
people
really
want
right?
They
want
to
run
these
new
instance
types,
and
I
saw
someone
did
a
pr
trying
to
do
it
against
one
18,
which
is,
I
guess,
the
alternative.
D
So
I
I
I
think,
that's
more
of
a
bug
fix
and
that,
like
the
way
I
I
did,
it
originally
wasn't
like
adaptable,
and
this
is
a
way
that
it's
more
adaptable
to
changes,
and
so
that
to
me
like
putting
that
into
118
is
unacceptable
risk
but
or
an
acceptable
thing.
But
we
can.
D
The alternative is to just continue to release 1.18 patch releases, yeah, when a new instance type comes along, using the generate machine types script, I guess.
D
Yeah, I feel like it's less... it's a month or two, like a month or less. I feel like we're in reasonable shape there. I don't know how people feel in general; that's about what people think. I'll turn that question around: what does everyone think? Two weeks or so?
A
Yeah, I mean, I listed what I think is the one blocker, if I move forward into that section, and maybe these new instance types are... I don't know. What do people think the blockers are?
B
You know, install it on some, whatever, new instance, and I would go about adding that, and if you don't want something else...
B
I agree with you, so those two: I would get those in, cut a release candidate, and make it something that people actually try. You know, and if everything goes okay, in three weeks or so, four weeks, we can release 1.18.
D
Yeah, we haven't done release candidates in the past. I don't know whether we should do that; it's a nice signal. But I think we've sort of made the betas be something we think of as release candidates, yeah.
D
Yes: we won't disrupt our naming scheme, but we will put out words suggesting that it's intended to be our final beta before the release. I like it. Okay.
A
So, a beta two. Well, let's go through the other ones. Oh, well, actually, a 1.19 alpha one: I think we should cut one of those, really. And I think, because of the CVEs, we want a 1.17 and a 1.16.
D
Yeah. Although I must say, the username having changed gets me every time; five years of memory that it is admin...
A
Okay. Anything else?
B
Okay, one thing from me: I backported all the fixes that I know of for the CVEs, so those should be okay, but I didn't get to start any of those. To see if they're still actually working would be nice.
B
By the way, regarding testing: I'm not sure if anyone noticed yet, but Peter and I added the tests for network plugins. So right now, when you do a PR for Calico, Cilium or something, it will run the tests with that network provider.