From YouTube: Kubernetes kops office hours 20200717
Description
Recording of the kops office hours meeting held on 20200717
A: Hello everybody, today is Friday, July 17th. This is the bi-weekly kOps office hours meeting. I am your moderator and facilitator, Justin Santa Barbara; I work at Google. A reminder: this meeting is being recorded and will be put on the internet, so please be mindful of our code of conduct, which boils down to being a good person. We have a fairly full agenda, which I am pasting in the chat, so if you have other agenda items, please add them to the list so that we can be sure to get to them.
B: The new IAM permissions have been the default for new clusters since about 1.9 or so, so this really shouldn't affect anyone.
C: Can we do it in 1.18 quickly? You know, add something there to warn them.
A: What we have done in the past here is basically turn it off, except we have a get-out-of-jail-free card: if you pass a feature flag or an environment variable, you are able to continue. It will break your CI system, for example, so there's no way someone will miss it, but when that CI system breaks, all they have to do is pass the flag. So basically they are made aware, they can temporarily bypass it, but they understand that it will then go away, and everyone has seen it.
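The bypass described above is typically exposed through kOps's feature-flag environment variable; a minimal sketch, assuming a hypothetical flag name (`AllowLegacyIAM` is a placeholder for illustration, not an actual kOps flag):

```shell
# kOps reads feature flags from the KOPS_FEATURE_FLAGS environment variable.
# The flag name below is hypothetical, for illustration only.
export KOPS_FEATURE_FLAGS="+AllowLegacyIAM"
kops update cluster --name my.cluster.example.com --yes
```

The point of the pattern is that the default breaks loudly, while the escape hatch is a one-line, clearly temporary opt-out.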
B: Okay, so you want a special way to do it for 1.19.
A: I'm trying to think of when we've done this in the past. It will be in one of our feature flags, I think, but essentially we make it an error, and then we say it's not an error if you pass that feature flag.
B: Yeah, in about 1.8, 1.9 all new clusters were created without it.
A: Then we can probably be more aggressive, yeah. With that we can definitely justify this. I'd be very happy if we had the feature flag, because I suspect there are still some clusters that are just running with it, and there are some people that are accidentally or intentionally relying on those permissions.
A: You actually mean the integration tests, not the e2e tests? Yeah, okay, fine! I was really struggling to understand how the e2e tests were not working, but yes, the integration tests definitely make more sense. We try not to integration test everything, and so yes, there is a different code path for creation.
B: The next one: there are two old failing tests, the kops-aws canary and the kops-gce canary. Last time I tried to fix them, BenTheElder asked us to move them into our own repo, and I don't think they're actually providing much value, so I'd prefer to just delete them. So do we want to delete them, move them, or what?
E: I'm good on GCE to get rid of that one for now; it's not working, it's unlikely to be very useful to us, and it's hard for us to change right now. That was part of what I think I have a PR out to do something like, but it also got pushed to the bottom. So I'm in favor on the GCE side; for the AWS side I'll let others chime in.
A: It turns out the tests were doing two things, right. As far as I know, they were the only tests that were actually testing multiple masters, so we should have those tests ourselves, and that test should not be in this canary test. The canary test is actually not really supposed to be for kOps at all.
A: It turns out that kubetest isn't actually following test-infra patterns, despite being a test-infra project, and so it's not actually looking at the canary results, which is why they're still promoting despite the fact that our tests are failing. So I agree that test has little value from that point of view. I'd be happier if they actually started looking at the canary results, but that's not our battle. I do think that on our side we should get some coverage of multiple masters.
C: That's fixed in 1.19, we just didn't switch. We are on unstable rather than stable, and because of that we don't see that it actually works. You can see it in the arm64 tests.
A: Yeah, that is the downside of the tests being tied to the releases, as it were, tied to the Kubernetes versions. I presume they fixed the flake in the test, right? I presume it was a test flake and not actual behavior.
A: Okay, well, all right! So yes, I reluctantly agree that we can delete this. I actually thought the canaries were used, but it turns out they're not. So, okay: is this blocked on me or someone else?
A: It is different, right. This was a tricky one; I think this is a great one to discuss. It is different to say, from an intent point of view, "I want machines in any of these zones" versus "I want machines in this zone and in this zone". Arguably, if you have a cluster autoscaler that is working, it becomes an academic difference, but I will leave it up to other people to see what they think.
A: Oh, and sorry, one more thing: the Cluster API MachineDeployment, I think, is going to be tied to a single zone. So this more closely matches, I think, what Cluster API is going to do. I believe, though, who knows.
G: At least using one instance group per zone, so that kind of aligns with what we've been seeing in the real world.
H: We don't use the default nodes one. We still spin it up, and some of our basic bootstrapping stuff for our clusters goes there, but we don't actually run workloads on those. The only reason we still leave them there is because I haven't removed it from our templates, and I don't have a way to delete them programmatically right now. So I support this, as long as there's a clean path to making sure new users are aware of it, but I think most people would be delighted.
E: Yeah, exactly, you've been doing this as well. We've been doing this for years, basically just by ourselves. So this is a great addition as far as I'm concerned.
C: Yeah, but there's just one gotcha. I guess there's no one here that has anything against the behavior; the gotcha is that right now we have node count as a parameter, which, if you have multiple zones, can be understood either way. Either "I want five nodes", and we divide by something and do some math to see how many go in each zone, or we, not repurpose, but rename the parameter to mean how many per AZ.
B: Well, this one gives you the number of nodes you ask for, just like when you ask for three masters: you get three instance groups with one master each.
A: My view is that the GKE behavior is confusing. When I ask for two nodes and I get six, I wonder: why do I suddenly have six nodes? So I personally would be in favor of node count having the behavior that John described, where I ask for two nodes, I get two nodes. I guess it's weird if I have three zones and ask for two nodes, that then one random zone gets zero.
E: What if we use a different field? Then we'd have repetition: we've got nodes per AZ, or total nodes, or nodes per group. At least then you wouldn't have the ambiguity; you could choose which one you wanted to use, and you'd have to make it an intentional choice.
G: I kind of like what John's proposing, really, where if you specify two nodes but three zones, you get two AZs. The only thing I'd say there is: still create the last one, just set it to zero, so that if someone wants to scale it up later, they don't have to go through that whole process again.
B: Right now, if you would get an instance group with zero nodes, it skips creating that instance group.
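The per-zone shape being discussed roughly corresponds to kOps InstanceGroup objects like the following (a minimal sketch; names, zones, and sizes are illustrative, and the usual `kops.k8s.io/cluster` metadata label is omitted for brevity):

```yaml
# One InstanceGroup per availability zone; asking for 2 nodes across
# two zones yields two groups of one node each. A third-zone group
# could be created with minSize 0 so it can be scaled up later.
apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  name: nodes-us-east-1a
spec:
  role: Node
  subnets:
  - us-east-1a
  minSize: 1
  maxSize: 1
---
apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  name: nodes-us-east-1c
spec:
  role: Node
  subnets:
  - us-east-1c
  minSize: 0
  maxSize: 1
```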
A: It does sound like everyone is doing this anyway. What is the fallback for people that don't like it, that want the old behavior? Can they get it?
A: It is supposed to get you going with reasonable behavior, and it sounds like people are saying a better behavior is this one-zone-per-IG behavior. I'm guessing the code isn't horrifically complicated, and if we do encounter those people we can add a feature flag if we need to. Is that fair? A feature flag or a flag, I guess.
A: Yeah, we can if we need to; it's not going to be particularly complicated to have a flag to get the old behavior.
A: Are we also assuming, basically, that people are running the autoscaler? Is basically everyone running the autoscaler at this point? I see lots of nods, all right. Okay.
A: That's what I'm wondering; maybe our first add-on operator, anyway.
B: There's definitely a reason you want the zero-node instance group.
J: I'm not sure that we set all the correct labels or annotations, either AWS labels or tags, on the autoscaling groups that we create, so that cluster autoscaler can scale from zero.
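For scale-from-zero on AWS, the cluster autoscaler reads node-template tags off the ASG to predict what labels and taints a new node would carry; in kOps these can be attached through an instance group's `cloudLabels`. A hedged sketch (the specific label and taint names are illustrative):

```yaml
# Tags the cluster autoscaler inspects when an ASG is at zero nodes,
# so it knows what a node from this group would look like.
apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  name: nodes-us-east-1a
spec:
  role: Node
  minSize: 0
  maxSize: 5
  cloudLabels:
    k8s.io/cluster-autoscaler/node-template/label/workload: batch
    k8s.io/cluster-autoscaler/node-template/taint/dedicated: batch:NoSchedule
```

Without such tags, the autoscaler cannot match pending pods against an empty group, which is the gap J is raising.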
C: By the way, regarding these labels and stuff: someone came to me in Slack and said that they had been using, for some time, a cloud label named Name, and they were surprised that in a recent release it doesn't work anymore. Basically, they were renaming their instances by setting a label, a tag.
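The renaming trick works because EC2 displays an instance's `Name` tag as its name; in a kOps instance group it would look something like this (a sketch, value illustrative):

```yaml
# The EC2 "Name" tag set via cloudLabels renames the instances
# launched by this instance group in the EC2 console.
spec:
  cloudLabels:
    Name: my-cluster-worker
```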
A: That is a change, all right; I will take a look at that if no one beats me to it. Peter, you have a pair of issues, a pair of topics rather.
D: Yeah, the first one is, I think, one we've discussed in the past: changing the behavior of the export kubecfg command to no longer export the admin user by default. I think we're in general consensus, maybe, but I was mostly looking for feedback on the new flags that are being added to the command, since that's more difficult to change down the road.
A: I see we're getting into edit wars about whether errors start with capital letters or not, but okay. Yes, I think this looks reasonable. John, what are your additional changes?
B: I have a use case where we want to export a per-cluster user but not give it credentials, and the other is that I want to reduce the lifetime of the certificate to 18 hours.
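The flag shapes being discussed could plausibly look like this on the kops CLI (a hedged sketch; the exact flag spellings that landed may differ, and the cluster and user names are illustrative):

```shell
# Export a cluster context without credentials, pointing at an
# externally-managed kubeconfig user entry:
kops export kubecfg my.cluster.example.com --user my-oidc-user

# Export a short-lived admin credential instead of a long-lived one:
kops export kubecfg my.cluster.example.com --admin=18h
```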
A: The second one feels like something that'll have to be behind a flag, but yes, I think that sounds good.
A: Okay, yeah, so let's get them all into 1.19 together then, so we don't have a split. It sounds reasonable to me, and if anyone else has any thoughts on this topic, chime in.
A: All right, and Peter, you have the next one, about updating the Kubernetes dependencies to 1.19.
D: I tried updating our dependencies and I ran into a couple of issues. One is regarding the API machinery and the conversion code generation: they no longer support implicit conversion between API versions, or something like that. That's at the limits of my API machinery expertise, so I might need someone else to step in on that one. And then there's also a logging library version dependency mismatch, due to a breaking change in the library, that's causing us issues. So there are a couple of blockers before we can upgrade to 1.19.
A: Sorry, just on that: it is important that we install NTP. It doesn't sound like anyone's saying "I don't want to install NTP"; without NTP things go very badly wrong, and even just talking to EC2 breaks. That's why we are so careful to make sure NTP is installed. But yeah, it's certainly valid to say "I want..."
A: Yeah, thank you for progressing that and finding that issue, and sorry about that. No problem; I'm surprised, but yes, I agree, it doesn't have permission. So, yes.
C: Okay, next: I'm progressing on multi-architecture support. I have a PR for the images, the master ones, and there are some questions about the naming format.
A: There are also, at least in theory, the manifest lists, the multi-arch manifests that are able to pull the correct architecture. So alongside your second example, kops-controller-amd64:1.19.0, there may be a kops-controller:1.19.0 that says: if you want arm64, you go here; if you want amd64, you go here.
C: That was the next thing: if we want multi-architecture manifests, then we have to revert back to Docker.
C: I don't know about that. If you look at what I'm linking in my list, that's a very old bug where the Docker Bazel maintainers just say that it's not a priority for them to support multi-arch manifests.
A: Yeah, I mean, we do have that behavior today where, if you have KOPS_BASE_URL set, it will sideload everything, so I guess we could actually pursue that; it seems to work fine. It might make mirroring easier, although you still have to pull things like the API server, for example.
A: And do you have a preference in terms of sideloading versus a local registry, or mirror registry I guess you'd call it?
B: I don't think so. The only thing I can think of is that sideloading might allow building images, or building AMIs, with them built in, but that's kind of minor.
G: Yeah, the only thing I can think of for the registry is that we have scanning on those images if they're built on non-scratch bases. That's the only thing I can think of.
A: In terms of the naming, if we disregard the sideload issue, it does seem like the constant tag, i.e. 1.19.0 and not 1.19.0-amd64, is probably more correct. If we imagine multi-arch manifests, that seems more correct. Maybe it's just my bias, but it feels less weird to me to have a multi-arch manifest refer to the same tag under different names than to different tags under the same name.
C: k8s.gcr.io does it with N repositories with a -arch suffix and one without it, which is the multi-architecture repository.
A: Yeah, who is putting the architecture in the tag, like 1.19.0-amd64? Anyone? That one, the first of the two you've listed, seems like the weird one to me. Okay, but that's just me asking; that's just my bias.
C: Internally in my company it's easier, because additional repos usually come with permissions and other things, so it was much easier to just do it like that. But if you think that we should have the per-arch repos, then there's no problem. Either way, the question remains whether we do it via sideloading or we continue with repos.
A: I mean, pulling from a registry has the potential to be much more efficient as well, if we are actually pulling. If we're pre-loading, i.e. baking into the AMI, then sideloading is presumably better, but we're not baking into the AMI.
A: I can have a look at this one as well. I agree this is a pain point, and it will be an increasing pain point for what I presume is our multi-architecture feature, although maybe that doesn't particularly matter to kOps, I don't know.
C: Let's put this to rest until next time; I really cannot do anything until we decide which direction, right. Okay, so the last thing is versions and OS support.
C: Now people are coming and complaining: "hey, I tried something and it doesn't work", because, let's say, Ubuntu 20.04 doesn't have packages for Docker 18.06-something or even earlier. So I think it would be a good thing to clean up those things and add the latest packages for each particular Docker version, but as a separate package, sorry, a tar.gz. So all of them would be tar.gz, and it wouldn't matter whether the OS is supported or not; they would support whatever we support on our side.
A: It feels risky; it is a big change, right. I very much like the tar.gz; I think we've been burned repeatedly by the debs and the RPMs, but I also don't want to reopen that can of worms by switching away from the debs and the RPMs on what was perhaps an already working cluster.
A: Yeah, let it age out, exactly. We should absolutely, if we want to add support for Docker 18.06 on Ubuntu 20.04, put a tar.gz in there, and when we're adding support for whatever the next version of Docker is, let's put in a tar.gz. Basically, let the clock run out on the old Docker versions. But I just don't really want to touch them, to be honest.
A: That sounds great. Yes, I think it's absolutely fine to move people onto a different thing that we think is better as they update; we just don't want to change it on an existing configuration.
C: Not a problem, thank you. Also, I cleaned up the images PR: basically, I removed everything that was built with Docker, as we discussed a month ago. Is it okay to approve that and move on, removing all those protokube and kops-controller images that were built with Docker?
A: It's fine with me; I don't know if anyone else has any objection. Yeah, it seems like you're the one with the biggest objection, with the multi-arch thing.
C: That was the part that we discussed previously, what we do with the images. Oh sorry, no, that was Docker-related, but we discussed it now. Sorry, I wanted to think about moving the Docker assets so that it's done in the cloudup part, because right now we have the tar.gz archives, so we could just put them in as assets when we create the cluster.
A
We
want
I
I
very
much
approve
of
getting
like
logic
out
of
node
up.
I
think
that's
a
good
thing
to
do.
The
reason
this
one
is
historically
in
node-up
is
because
it
is
dependent
on
the
or
it
was
dependent
on
the
distro,
and
we
don't
know
the
distro
from
it's
very
hard
to
map
from
an
ami
to
a
distro,
and
so
we
don't.
We
didn't
know
which
package
to
install.
We
couldn't
even
tell
whether
it
was
debian
or
red
hat
like
devs
or
rpms.
A
If
that's
no
longer
the
case,
we
could
certainly
put
it
there.
I
wonder
the
challenge
is
gonna,
be
how
we,
how
we
get
from
a
to
b,
like
how
do
we
get
people
doing
that?
I
guess
we
would
only
put
the
asset
in
on
newer
versions
and
put
it
into.
A: That could work, yeah; I think that makes sense. Or, if you specify a Docker asset in the API, it will use it, and if you don't, it will go through the magic behavior, and we only specify it when we can be sure that it's not OS-dependent. That's pretty nice behavior.
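Pinning the runtime explicitly in the cluster spec, as discussed, could look something like this (a hedged sketch; the version and mirror URL are illustrative, and the asset-override shape that eventually landed may differ):

```yaml
# Cluster-spec sketch: pin a Docker version and serve file assets
# (such as the docker tar.gz) from our own mirror instead of
# distro packages.
spec:
  docker:
    version: 19.03.11
  assets:
    fileRepository: https://files.example.com/kops/
```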
A: All right, so I have two items and then we should go through the release plan. One piece of housekeeping, I guess: we finally did a release, 1.19.0-alpha.1, that was built from Cloud Build jobs, so yay. The actual promotion process of the binaries was also via an automatable process, but it is not currently merged; basically, the binaries are following a similar pattern to the containers.
A
Are
they
work
via
a
pr
and
they
are
still
working
to
get
that
all
that
machinery
up
and
running
once
that,
like,
I
think,
they're
having
a
hard
time
or
it's
proving
very
complicated
to
cut
over
case.gcr.io,
but
once
that
is
done,
then
they
may
have
some
bandwidth
to
look
at
our
binaries
but
yeah.
So
we
have,
we
have
a
binary
promoter.
The
promotion
was
done
using,
so
I
tagged
the
build.
So
someone
has
to
tag
the
build.
A
I
think
john
approved
it
and
I
think
there
was
some
feedback
on
like
that
being
difficult
to
prove,
and
then
I
think
I
had
to
manually
tag
it
after
it
merged
which
we
could
probably
automate.
I
don't
know
and
then
that
triggered
a
build
of
the
a
cloud
build
which
dumped
all
its
artifacts
to
gcs
and
then
anta
gcr.
A: Yeah, I will work on those; I have the docs, there was a little bit of a scramble. And then, it is a new release cycle: we are coming to the end of one cycle and the beginning of a new one, and we often look at the people who've been contributing and doing a wonderful job, and I think there are at least two people that have really stood out.
A: This cycle, the ones who are not yet approvers are John and Ciprian, and if no one objects, and they are willing to act as such, I think we should make them approvers.
A: All right, that's a great note to welcome you on. Thank you for all your hard work, and yes, we will put through a PR to make the machinery, our robot overlords, aware of your new approver status, and thank you for everything you've been doing.
C: Thank you. Could we also be added to the kops-maintainers group or something? I think that's the one that lets us label stuff.
A
Yeah,
I
think
I
yes
good
point,
we
don't
we.
I
don't
think
we
draw
distinction
to
my
knowledge.
So
yes,
I
don't
know
why
they
have
different
names.
But
yes,.
A
And
then
final
item
in
our
last
five
minutes
is
the
release
plan
for
the
coming
two
weeks.
The
past.
I
guess
four
weeks
because
we
didn't
meet
two
weeks
ago.
We
did
a
lot
of
releases,
so
I
think
we
are.
We
did
one
nineteen
zero
alpha,
one
one
eighteen,
zero
beta
one
or
two.
I
can't
remember
and.
A: Two, thank you. And on 1.17, another release, 1.17.1 or 2. So yes, someone put in a suggestion: are we ready to do a 1.18.0 release? We did describe in the release notes for 1.18.0-beta.1, or whichever it was, that it should be considered a release candidate, even though, as we just previously discussed, we have never done an rc-labeled release; we soft-labeled it, English-labeled it, as an RC.
A
I
really
scanded
it.
How
do
people
feel
about
it
being
time
for
one
eighteen,
zero.
B: Yeah, so I think maybe not next week, but the week after. What's the usual delay time we put on those?
A
Yeah
I
mean,
I
think,
yeah
so
sometime
in
the
not
this
weekend,
but
perhaps
in
the
next
weekend
time
frame,
I
will
push
that
it's
still
a
man
that
one
is
still
manual.
Sadly,
so
I
will
do
that
one
and
then
we
can
see
whether
there's
anything
else
that
warrants
a
117
bump
or
a
119
bump.
I
don't
know
if
anyone's
aware
of
anything
at
the
moment
that
requires
or
recommends
them.
We
could
do
another
119
bump
because
we
can
test
that
process
again.
D: Ah, and for 1.18, I think, do we need to include a deprecation announcement? Are we deprecating legacy IAM?
D: And are there any other deprecations that we need to make sure of in 1.18, like, for example, old Kubernetes versions?
C: Okay, a question about the flannel VXLAN workaround: if we are waiting for next weekend, we already have the alpha channel set with the newest releases, and I think I can create a PR to move them to the stable one. So I think one week to go through those should be enough.
A: Otherwise, I wish everyone a very happy two weeks, and see you soon; have a great weekend.