From YouTube: kubernetes kops office hours 20190802
A: Hello, everybody. Today is Friday, August 2nd, and this is kops office hours. I am your moderator and facilitator, Justin Santa Barbara; I work at Google. A reminder that this meeting is being recorded and will be put on the internet, so please be mindful of our code of conduct and be a good person. I am pasting a link to the agenda in our chat.
A: Please feel free to open that agenda, have a look at it, and add any items you would like to cover. And please feel free to put your name on there, as I am now doing, having committed to do so before. If you would like to, it's helpful for people who watch the video, so they can track you down if they need to talk to you about anything.
[inaudible exchange]
A: Yes; you're sort of cutting in and out a little bit there, but I think the issue is comprehensible, and I think there is a back story. Kubernetes has this skew policy, which basically defines a set of compatible version skews that are allowed: you are not technically allowed to have a kubelet on a later minor release than the master, and so a kubelet on 1.13...
B: We even hit this with autoscaling off, just with the masters. For some reason the kubelet was still talking to an old master. So when one of the old masters got terminated, a new master came up with 1.14, and that new master also had issues. I never quite got a chance to debug that exactly, but it was still the same issue: the kubelet was just talking to a master that was an older version. Yes, because I'm wondering...
A: You have to force the rotation, yep. So I think the issue here is that we in kops have a kubernetesVersion which lives at the cluster level, rather than at the instance group level. I'm trying to think; despite the fact that it is at the cluster level, I think it gets mirrored into the instance groups. And so if you were to roll your masters: you can do a kops rolling update by instance group.
A
You
can
specify
the
instance
groups
and
I
think
you
can
even
specify
roll
master
to
just
roll.
The
masters,
so
I
think
would
be
anything
to
know
whether
if
we
split
oh
no
I'm,
Bert
yeah,
if
we
split
because
we
can
update
all
right,
I'm
sorry,
yes,
I
understand,
what's
going
on
now,
all
right,
okay,
yeah,
that's
gonna,
be
interesting
to
fix,
yeah,
we'll
probably
have
to
so
technically.
We
must.
According
to
the
rules,
we
have
to
update
the
masters
first
entirely.
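For reference, rolling only the masters before the nodes looks roughly like this (a sketch; verify the flag names against `kops rolling-update cluster --help` for your version):

```sh
# Roll the master instance groups first, then the nodes.
kops rolling-update cluster $CLUSTER_NAME --instance-group-roles master --yes
kops rolling-update cluster $CLUSTER_NAME --instance-group nodes --yes
```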
B: That's somewhat what we've been doing; it's just that we autoscale very aggressively, and that can get pretty expensive, because we have to pre-provision before we do the rotation. But it's a one-time thing. So that's how we've been getting around it, but I'm not sure it's a long-term solution, because we most likely will hit this again.
A: Yeah, it would be great to know what that is, because that's really surprising; I don't know whether it was the same one or not. Oh, actually, I think I know what this probably is. There's another bug alongside that Kubernetes bug where, if you have mixed API server versions, the older one obviously doesn't have this enableServiceLinks field. So if you write a pod with enableServiceLinks to it... that doesn't sound right in this case, I don't know, but it could be.
A: Yes, I am also wondering. I mean, we have tried to work around these issues in Kubernetes by essentially being more disciplined about how we introduce fields, and there are supposed to be rules: you know, you don't introduce a field and start using it in the same release. It's always hard to follow the rules exactly, and it sounds like we didn't necessarily follow the rules, or the guidance, here.
A: One of the things I do think: if we get to machine deployments, like the Cluster API, then Cluster API does have a separate notion of a Kubernetes version on the MachineDeployment. So we can accelerate getting there, and we can say, well, we know we're going to have this notion of a per-instance-group version in kops, so maybe we should actually mirror it down and configure it, to enable splitting things up. Yeah, that would be fine.
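For comparison, a Cluster API MachineDeployment carries the Kubernetes version on its machine template; an abbreviated sketch, assuming the cluster.x-k8s.io/v1alpha2 layout (a real object also needs cluster, bootstrap, and infrastructure references, and the field layout varied across the early alpha versions):

```yaml
apiVersion: cluster.x-k8s.io/v1alpha2
kind: MachineDeployment
metadata:
  name: workers
spec:
  replicas: 3
  template:
    spec:
      version: v1.14.4   # per-MachineDeployment Kubernetes version
```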
A: I'm just trying to think about exactly how the workflow would go, I guess. The default workflow would presumably just update them all at the same time, but maybe we have a more complicated workflow on a minor upgrade. We'd sort of then need a tool to guide you through the steps of that, which is okay; we can do that.
A: It would basically be a wizard that would do the building blocks. So my proposed plan of action: we can put the Kubernetes version on each instance group to enable it. We can start off by having the existing behavior; there's a toolbox set command which would basically just set the version everywhere, so we'd still have the problem, but you could then set it on a per-instance-group basis.
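A hypothetical sketch of what that per-instance-group override might look like (kubernetesVersion on an InstanceGroup is the proposal being discussed here, not an existing field at the time):

```yaml
apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  name: nodes
spec:
  role: Node
  kubernetesVersion: 1.13.9   # hypothetical: would override the cluster-level version
```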
A: And we have this kops upgrade command, which sort of guides you through other changes. For example, we've used it in the past to nudge people off pinning various options, and I think quite a lot of people still have etcd versions pinned and things like that, so we could nudge people to unpin those etcd versions.
A: So yeah, we can put it into kops upgrade, I guess. I think the first step, though, is if we start with having the ability to specify the Kubernetes version on an instance group, probably optionally, and have it override the cluster value, then I think you gain the capability to do it. And I presume you're doing it from a script anyway, rather than by hand, so it's a little bit of a pain, but it shouldn't be too much of a pain for you. Yep.
A: But thank you for bringing it up; I'm sure it will happen again, yep. So, okay, thank you. I don't know if there's anything else anyone wants to talk about on that, but I think, yeah, we should expose the ability to have an instance group on a different Kubernetes version. My guess is that that is the immediate first step, and then we can build from there to better porcelain.
C: So we're working on disabling anonymous auth on kubelets, which is good. Unfortunately, our monitoring tools rely on communicating with each node's kubelet, and so if we disable anonymous auth, that requires giving client certificates to the monitoring tools. When you disable anonymous auth on kops, kops creates a client certificate on the masters for the API servers to communicate with the kubelets, but we would need such a client certificate on every node, so I'm wondering what the best way to accomplish that would be.
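For context, the kops-side switch being discussed is the kubelet anonymousAuth field in the cluster spec; a minimal sketch:

```yaml
spec:
  kubelet:
    anonymousAuth: false
```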
A: I mean, we could certainly create a secret in kube-system that we could then grant very limited permissions, to access the kubelet. I feel like that's just really anonymous auth, sort of, but I don't know. So I think that's one approach. I think one thing we could do is look at how other people have done it.
A: I thought, also, you were supposed to be able to use a Kubernetes service account, which is not a client cert pair. If you have a Kubernetes service account: the intent of a lot of these was that components could talk; the API server exposes an endpoint that lets any component check whether a service account is authorized to do things. SubjectAccessReview is the name of it. So the intent was that all these pieces would start using that.
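A minimal sketch of that pattern, assuming the kubelet webhook fields documented for the kops cluster spec: with webhook authentication and authorization enabled, the kubelet validates bearer tokens via TokenReview and authorizes them via SubjectAccessReview against the API server, so a monitoring agent can present a service account token instead of a client certificate.

```yaml
# kops cluster spec: turn on kubelet webhook authn/authz
spec:
  kubelet:
    anonymousAuth: false
    authenticationTokenWebhook: true
    authorizationMode: Webhook
---
# RBAC: allow a monitoring service account to read the kubelet API
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: kubelet-metrics-reader
rules:
- apiGroups: [""]
  resources: ["nodes/metrics", "nodes/stats", "nodes/proxy"]
  verbs: ["get"]
```

You would then bind this role to the agent's service account with a ClusterRoleBinding.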
A: That's definitely worth a go. I think we should check how other people have done it, but that would be relatively secure: you would give, let's say, New Relic or Datadog or something like that a special service account, you would grant the permissions to that service account, and you would tell the agent to use that service account when querying the kubelet. Okay.
E: Thanks. I would just like to say that we do have an ugly hack, downloading the CA and signing our own client certificate, that we have done in the past. But that's definitely not advised.
A: That's a good point, and there is also an API for that: there's a CSR, a CertificateSigningRequest API, maybe, in Kubernetes, where I think you can basically create a CSR object to make a signing request, and it will automatically sign some certificates for you. So you may also be able to do that, which is honestly the same thing, just wrapped in the Kubernetes API.
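A minimal sketch of that flow (the API group was certificates.k8s.io/v1beta1 in this era; the name is illustrative and the request payload is a placeholder):

```yaml
apiVersion: certificates.k8s.io/v1beta1
kind: CertificateSigningRequest
metadata:
  name: monitoring-agent
spec:
  request: <base64-encoded PKCS#10 CSR>
  usages:
  - digital signature
  - key encipherment
  - client auth
```

An approver then runs `kubectl certificate approve monitoring-agent`, and the signed certificate appears in the object's status.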
A: This is built into Kubernetes, and it uses the Kubernetes CA; I mean, kops is creating that Kubernetes CA cert. Technically I think it could use a different CA or a sub-CA, but I think everyone uses the same CA. ("Okay, thanks.") And that's actually used in the kubelet bootstrap flow, so I can't recall whether we actually have it enabled by default. We should get it enabled by default, but I think we haven't; we have it enabled optionally right now, I believe.
D: Yes, my question is that we recently started looking into the Lyft CNI, and I've seen that the version that we install with kops is hard-coded; I linked, I think, to the document. So I have a couple of questions around what would be a good way to contribute back, maybe to make the Lyft CNI version specifiable, and how to improve the documentation, because in the networking documentation there is no mention of the Lyft CNI, even though it's supported.
A: So, I don't know how many people are using the Lyft CNI; that's probably why the documentation is a little lacking, and if you are interested in contributing, that would be wonderful. I'm not really familiar with the alternatives: for the Lyft CNI it's the AWS VPC CNI, I guess, and honestly I'm not really familiar with why you would pick one or the other, or how they are doing relative to each other.
A: We tend to do those bumps only on minor releases of Kubernetes, so we don't break anyone's existing manifests, but if it's important, we can. Other than that, I think it would be wonderful for you to try it, and if you have any issues, let us know. It's basically a community effort, so I don't think there's any reason we haven't done it; it's just that no one has done it.
A: That is very good; yes, thank you. We would like to get it extracted into the bundle, and people are also sending PRs; one that I think is a good one lets people override the image, which is often useful. Often people just want to use a newer version, and that's nice because, if there's a security issue or whatever it is, you don't have to wait for a kops release.
A: Yes, and I think it'd be great if you could specify a manifest externally. But still, you're right: we're not going to have solved RBAC, or, well, we're not going to solve IAM permissions or network firewall permissions, or things like that. Those don't tend to change that often, which is why they sit outside, but yeah, it's certainly an issue.
F: Yes, so there's currently (I know CoreDNS is not the default yet), going back to the previous point, a hard-coded image version in the manifest, and there have been a couple of pull requests, which I've linked to there, that are sort of either changing or modifying it, adding a bit more flexibility to that manifest.
F: And there's an issue I've raised: I'd like to be able to basically fully specify our Corefile, as far as possible, in the kops config, to let us fully set up what we want in terms of different plugins, etc. I realize that for most people that's going to be overkill, and I think my personal opinion on moving the CoreDNS DaemonSet onto the masters is that it's not a good idea, and that actually we should go down the same route.
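For a sense of what "fully specify the Corefile" means, here is an illustrative Corefile of the kind being discussed (standard CoreDNS syntax; the forwarded domain and resolver IP are made-up examples):

```
.:53 {
    errors
    health
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
    }
    forward corp.example.com 10.0.0.2   # send internal domains to a VPC resolver
    prometheus :9153
    forward . /etc/resolv.conf
    cache 30
    loop
    reload
}
```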
A: On that topic, it certainly is the easier transition to just move from kube-dns to CoreDNS, rather than also moving it to the masters. The other piece that's coming, hopefully, is NodeLocal DNS; a lot of people already have a per-node proxy, and this is sort of an upstream solution that people can use, which would give everyone a good base to start from, I guess, or hopefully meet most people's needs.
A: For your particular case, one more factor is that I and others are working on this idea of addon operators, which would, in theory, allow a cross-installation-tool specification of things like CoreDNS, and also help with things like CoreDNS having more complicated upgrades, or NodeLocal DNS having a more complicated installation and uninstallation sequence; help with things like that.
A: That said, I don't think we should hold up all changes to configuration for that. The external option, as you say, wouldn't be compatible; basically all bets are off at that point, and that's okay, I guess. The question is whether your changes are, I assume, sufficiently complicated that they are unique to you, and not something we should try to expose.
F
That
that's
my
feeling,
yeah
we've
got
like
I
put
a
snippet
and
then
the
issue
for
how
we
set
up,
but
we've
got
like
full-on
breakdowns
of
different
localized
within
our
BTC
DNS
servers
that
we
were
certain
domains
to
to
lower
the
impact
on
other
bears,
and
so
yes,
it
would
the
bill.
Other
alternative
would
be
exposing
about
30
different
options
and
which
the
vast
majority
people
would
never
need.
A: And I'm hoping that CoreDNS eventually moves to a sort of Kubernetes-style CRD model, so that those 30 options would be nicely composed, and each one of those providers would be their own CRD-type thing, and it wouldn't be quite so crazy. But yeah, given the complexity, I like the idea of "external": of saying, I need a break-glass mode and I know what I'm doing. I think we've struggled with this in kops.
A: We have talked in the past about splitting up the cluster object to be less monolithic, but I don't think we should tie this PR to that. At some stage it would be nice to be able to specify a YAML alongside it somewhere, but we don't have the mechanics for that today. Okay; please, thank you.
A: My understanding is it would be added... sorry: for CoreDNS, it would be added, and we have a place where it gets baked in somewhere, but it's a completely static, or mostly static, CoreDNS Corefile that's configured. ("Okay, so this is just for the Corefile, basically.") This is just for the Corefile. That "external" pattern that we've started looking at appears in some places: rather than requiring things like subnets to be created, we have subnets where you can basically specify an external ID.
A: For example, I think the other one where this has come up is in (I don't remember any more) routing of some description; anyone help me out here. But yes, we've sort of gone with that; I think there's a PR out there to say you can just specify "external" and you're on your own at that point. Yeah, I think this is good. Oh, there was a CNI one; yes, "external" for the CNI as well.
A
Ok
and
then
this
is
one
that
the
next
time
one
is
something
I
put
on
the
issue
unless
you're
I
put
on
the
agenda,
which
is
around
I,
don't
know
if
RIPTA
is
here,
I'll
see
ya,
but
apparently
apparently
the
newer
AWS
instance
types,
including
c5
s,
for
example,
c5
and
m5
s
support
20
attachments,
but
those
attachments
include
network
interfaces,
EBS
volumes
and
nvme
instant
storage
volumes.
A
So
you
know
I
did
not
know
this
today,
I
learned,
if
you,
if
you
attach
a
bunch
of
eni
providers,
for
example,
using
the
lift
or
a
device
weekly
see
you
are.
You
then
can't
attach
as
many
volumes
anymore.
So
so
the
request
is
to
the
scheduler.
The
scheduler
today
imposes
a
limit
on
the
number
of
volumes
you
can
attach
so
would
like
it
won't
put
more
than
I
guess,
28
or
whatever.
It
is
onto
a
single
note,
because
it
knows
that
you
can't
do
that.
But
now
that
limit
is
no
longer
true.
A
It's
it's
no
longer
a
constant
limit
either,
which
is
really
difficult,
but
the
proposal
is.
We
have
a
field
to
override
the
number
of
the
limit
with
an
environment.
We'd
start
we
have
an
environment
variable
and
scheduler
that
overrides
the
limit
and
which
was
actually
put
in
there,
because
people
wanted
to
attach
more
volumes
back
when
that
was
possible,
I
guess
or
possible.
On
some
instance
types.
The.
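The scheduler override being referred to is the KUBE_MAX_PD_VOLS environment variable on kube-scheduler; a minimal sketch of setting it in a static pod manifest (the value 20 and the image tag are illustrative):

```yaml
spec:
  containers:
  - name: kube-scheduler
    image: k8s.gcr.io/kube-scheduler:v1.13.9
    env:
    - name: KUBE_MAX_PD_VOLS   # overrides the scheduler's max volume count per node
      value: "20"
```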
A
With
that
many
volumes
attached,
I
would
guess,
and
so
they're
saying
we're,
never
gonna
have
more
than
ten
a
in
eyes
and
ten
volumes,
and
that
leaves
another
eight
for
NB
Emmys
and
that's
good
enough
for
something
like
sort
of
that
sort
of
calculation.
We'll
probably
get
to
the
point
where
it's
good
enough
for
long
as
it
takes
to
get
the
this
one
can
be
properly
baked
in
t
communities
is
my
guess,
or
we
can
ask
them
to
raise
the
limits.
But
yes,
that
isn't
it's
an
interesting.
It's
an
interesting
wrinkle.
A
Just
come
from
Isaac
it's.
It
almost
feels
like
like
PCI
lanes
right
now,
I
guess,
but
but
it's
it's
all
virtual,
so
I'm
like
I,
don't
think
that's
the
case,
but
anyway.
G: [inaudible question]

A: Yes, that's certainly the obvious place to put it. Is this a communication with AWS in the scheduler, or is that in a separate component? I don't... that's exactly the challenge: the scheduler does not have any provider awareness at all. So it was particularly difficult, for example, for the AWS VPC CNI, because it imposed a lower pod limit and that wasn't visible; it has to basically be set on the node, which is basically through the kubelet, which is aware. So we were able to get that going.
A: But this is a new one, because on AWS, on certain instance types, there is this odd requirement, and ENIs are not visible today to the scheduler. But there is a pluggable extended-resources mechanism, which is how, for example, we count GPUs. So I guess this will entail either exposing ENIs, or some notion of attachments, to the scheduler on the node. But yes, this is not going to be a fun one to solve. Yeah.
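For reference, extended resources are advertised in node status and counted by the scheduler like any other resource; a hypothetical "attachments" resource could be advertised by patching node status through the API server (the resource name example.com/attachments is made up for illustration):

```sh
# In one terminal: proxy to the API server.
kubectl proxy &

# Advertise 28 "attachments" on a node (the ~1 escapes the / in the resource name).
curl --header "Content-Type: application/json-patch+json" \
  --request PATCH \
  --data '[{"op": "add", "path": "/status/capacity/example.com~1attachments", "value": "28"}]' \
  http://localhost:8001/api/v1/nodes/<node-name>/status
```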
A: Or C5s... the other answer, right, is to find instance types that are good and are not on this list. Just scanning it, it does look like it's all the newer, the newest generation, just off the top of my head, and it's hard to see which ones are missing. Yeah.
A: Okay, I think we've spoken enough about this. I think we're agreed that we should raise an issue and get the upstream scheduler people thinking about this wonderful issue. I might ruin their Friday by putting it in Slack, just to have some amusement. But yes, in the meantime we should expose the environment variable; I'll try to get that in, which brings me on to the next topic. I'll try to get that into 1.14.0-beta.1, or whatever we're about to cut, because the next topic is that the release...
A: ...train is rolling. If we recall, two weeks ago we did a review of the things that needed to happen in the various releases, and we had a big backlog: a list of releases which I intended to get tagged in this two-week cycle. We made good progress, but I did not complete the whole list; thanks to everyone that helped.
A: We got kops 1.13.0 out about an hour ago, which is good. I also bumped kops 1.12 to pick up that new etcd-manager, and the etcd version-check change, because it will now warn you if you're going to use a version of etcd that isn't supported, which will also help people get onto the sort of golden versions, the versions we support. I think we also added etcd 3.3.13, or whatever the latest etcd is, in there.
A
So
that's
now
available
as
well
and
I
think
we're
basically
ready
to
cut
cups,
114
0,
beta,
1
I
think
all
the
things
have
gone
in
and
you
7
had
time
to
do
so
yet
and
then
I'll
also
do
115
0
for
1
116
0
alpha
1,
and
we
did
some
vein
in
my
map.
We
did
some
ami
promotion.
I
still
need
to
do
the
a
my
refresh
and
I
think
the
channel
bumps
are
basically
mostly
done,
although
I
will
double-check
those
I,
don't
know.
A
If
there
are
any
other
new
releases,
we
think
we
need
to
do,
or
so
we
can
keep
the
sort
of
release
backlog
as
continuing
I'm
hoping
this
time
we
can
do
1/16
0
alpha
1,
which
will
require
some
node
labels
stuff.
So
that's
still
pretty
ambitious
for
two
weeks,
but
I
am
I'm,
optimistic
and
also
I'm
gonna,
throw
anything
else
on
that
and
on
the
plate
or
any
blockers
or
blank
problems
that
we
should
get
resolved
for.
A
H
A
The
absence
of
any
compelling
reason,
I
would
say
111
only
that's
right
and
yes,
any
kind
of
recent
I
would
say
no
more
to
111.
Yes,
I
did
112,
because
I
figure
there
were
some
like
more
important
things.
There's
a
security
fix.
People
might
not
want
to
go
right
onto
the
new
113
zero
and
also
because
112
is
where
we
do
the
up
the
SUV
update
and
so
like
having
more
checks
around.
That
felt
like
a
good
thing
to
do,
but
yes,
I
would
I
would
encourage
people
to
use
112
or
113
and
not
111.
A
And
we
do
have
this
sort
of
strategy
that,
like
cups,
should
work
with
all
prior
versions
of
kubernetes,
so
it
doesn't
make
a
huge
amount
of
sense
to
have
and
it
should
work
with
all
private
communities
and
should
not
change
unnecessarily
the
configuration
of
a
prior
version.
So
when
we
introduce
a
new
version,
a
new
significant
version
of
the
lift,
VAP,
PCC
and
I,
for
example-
that's
why
we
would
put
it
into,
for
example,
kubernetes
114.
So
we
didn't
change
the
configuration
on
existing
113
cluster.