From YouTube: Kubernetes kops 20190118
Description
Kubernetes kops Office Hours
A: Hello everyone, it is Friday, January 18th, 2019. This is kops office hours. I am your moderator/facilitator, Justin Santa Barbara; I work at Google. A reminder that this meeting is being recorded and will be put on YouTube, so please be mindful of that and be a good person. I put a link in the chat to our agenda.

A: We have a lot of stuff on the agenda — mostly my stuff, admittedly — but we have a lot of stuff on the agenda, which is great. If you do want to talk about something, please do put it on the agenda, or else we may not find time for it, and please do put your name there if you are willing to, so that you can be identified by people referring back to the minutes, to the meeting notes. First up on the agenda — let's just jump right into it.
B: So SIG API Machinery wants to get rid of initializers, which is also happening in 1.14, and Jordan kind of wanted to have a version of kops that has it disabled pushed through CI, so he made a PR that would disable it. I sort of did my research; it seems like we have it disabled in Kubernetes starting in 1.12, and given it's alpha, I kind of just approved the PR, but I wanted to raise it here in case there was any context or history I'm missing around disabling initializers generally.
B: Okay, that's cool. So in terms of actual Kubernetes usage of initializers, the only place it's used is the out-of-tree persistent volume label controller, and from my understanding that is not used anywhere in kops, unless someone explicitly enables the alpha CCM, which I don't think anyone has working.
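For reference, a rough sketch of what "disabled" means here — initializers was an alpha admission plugin that had to be explicitly enabled, so leaving it out of the enabled admission plugins is the disable. Field names are from the kops cluster spec; this is illustrative, not the exact change in the PR discussed:

```yaml
spec:
  kubeAPIServer:
    enableAdmissionPlugins:
      - NamespaceLifecycle
      - LimitRanger
      - ServiceAccount
      - DefaultStorageClass
      - ResourceQuota
      # no "Initializers" entry, and no Initializers feature gate
```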
B: Yeah, well — once I come up with something I'll update here. Oh, sorry, cool. Also, just on a personal note, today is my last day at DigitalOcean, so if anyone here is interested in picking up the DigitalOcean support work, let me know and I'm happy to kind of walk you through what it looks like and let you know what else needs to be done on that end.
A: Thank you — yeah, great. Next on the agenda, I just had a public service announcement reminder that today is the deadline for KubeCon talk submissions for EU Barcelona, which is in May. So if you want to submit a talk, please get it in today. I see someone typing — yeah, on the DigitalOcean topic, I think we can; the hard part is getting an account, but it would be great to figure out how we can handle that more generally.

A: We have the Kubernetes testing/infra team that is starting to spin up, and maybe they can be a clearinghouse for sort of test accounts, and we can start to get automated testing going on. I think that would be super helpful, also for the cloud provider work. I guess next on the agenda: the external egress request, yeah?
A: Are you here, Mr. Moustafa? I don't see him over here. I can give some context on this, I guess, which is: I think I originally proposed that we have a new — there are a lot of people that want to do different networking configurations than we currently support in kops, and we've been trying to map the most common ones.
A: The downside is that one of the problems we've faced in the past is that when people do things themselves, we have no easy way to validate that it's correct, and so if they hit a problem we can't tell them about it. So if you let kops manage it, or kops validate it, kops can check it and will give you a very clear error if something is wrong. Yeah.
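For reference, a rough sketch of the kind of subnet-level egress setting being requested, in a kops cluster spec. The egress field exists for pointing at an existing NAT gateway, and the External value (egress managed entirely outside kops) is the behaviour under discussion, so treat this as illustrative rather than final syntax:

```yaml
spec:
  subnets:
    - name: us-east-1a
      type: Private
      zone: us-east-1a
      cidr: 172.20.32.0/19
      egress: External    # or an existing gateway, e.g. nat-0123456789abcdef0
```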
C: I don't have any objections — I think it's fine. I think it kind of brings to mind that, like, some of the docs — you know, every time we add an exception like this, we kind of just expand on the existing docs, and maybe at some point we should restructure them to say kind of what you just said, Justin: like, you know, the kops way that we suggest is, you know, the best.
A: It's tricky, and I think one of the reasons that kops works at all is because it does manage that configuration for you, because if we didn't manage it — the configuration is so complicated that people just wouldn't get it right. The downside is that there are others; there is not just one solution for networking, as you say, and it's a difficult trade-off.
A: Okay, that's great! Thank you. All right, we have one — I'm going to swap the order of the next two, just because this 1.11 one is hopefully a little faster. So it's another one I put on the agenda, which is: you know, we want to get in the habit of more regular releases. We had a request for a cherry-pick of a PR for a fix in the etcd-manager dependency.
A: That's why I'm careful about pushing that particular one. But yes, it's not great, and we should get it onto a tag, and we should put that into the same more-automated deployment we're using for kops — or we could just vendor it, I realized today, which might be an easier option. So, in other words, kops could vendor etcd-manager — even though it's a little bit weird — but we could have a kops etcd-manager version that was tagged with kops, that was, like, released alongside kops.
A: That's another option. I don't know if anyone else has any others, so I think we should do that unless anyone objects, as part of getting into the habit of more regular releases and working towards fully automated releases. We're not fully there on the 1.11 stream; I'm trying to get us there on the 1.12 stream, but yeah, at least we can start doing more regular releases. I don't know if anyone sees any other PRs that they think should be backported into 1.11 — kops 1.11.
A: If you do, then if you have label permissions, add the cherry-pick label, or comment on it saying "please cherry-pick" or "please consider cherry-picking to 1.11". Then we can have a look. We don't tend to do new features, but if there are any bugs or smaller features, we can get those in, yeah.
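For anyone following along, a minimal sketch of what backporting a merged fix usually looks like; branch and remote names are illustrative:

```sh
git fetch upstream
git checkout -b backport-to-1.11 upstream/release-1.11
git cherry-pick -x <merged-commit-sha>   # -x records the original commit in the message
git push origin backport-to-1.11
# then open a PR against release-1.11, or comment "please cherry-pick to 1.11" on the original PR
```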
A: This is one that Brian found — sorry — where if etcd-manager quarantines half the nodes, it gets very confused. So etcd-manager has this concept of quarantining an etcd node, where — because etcd doesn't support a read-only mode — what we do is launch it on a different port so that no one can talk to it, and it's effectively read-only, because no one can make any changes to it if no one can reach it.
A: We call that quarantine mode, and the problem was, I guess, something went wrong in your cluster and it got half quarantined and half not quarantined, and we didn't — etcd-manager didn't know what to do at that point. It was very confused. So there's a fix for that; we should definitely add that one in, okay.
A: Yes, I am working on that. I do think we should — I think it really ties into the next topic about what goes in there, because we have some big potential changes, and I guess it would be an alpha, so it doesn't really matter whether we introduce many more things. But yes, there was also some — I'm trying to get it building automatically, and there was some infra stuff that I had to get all lined up.
A: I think I messed up the PR for the infra to do that, so I'm just working on that. If there's nothing else on 1.11, we can go on to the next topic, which is, I think, a big topic — it may take us a lot of time. So, all right: we finally have — I've been doing a little work to make etcd-manager super robust. etcd-manager is the tool which will manage etcd going forwards.
A: It replaces the sort of built-in one that's built into protokube, which is a kops component, and it is technically a separate project. etcd-manager itself has a roadmap of merging into etcdadm, which will be a project under SIG Cluster Lifecycle, but that's taking longer than we want to wait. We have a hard deadline of etcd3 by Kubernetes 1.13, because Kubernetes 1.13 turns off etcd2 support, so etcd-manager does the upgrade and gets you from etcd2 to etcd3.
A: So it's actually pretty — it's better now, it's pretty good. I need to go back and keep hitting on it and see if I can find any more issues, but it certainly fixes a lot. I also introduced TLS, because that was the big remaining feature that was sort of not in etcd-manager at all, and I was very worried it was going to have consequences in terms of, you know — we don't want another migration later on.
A: The downsides are that we turn it on — as you move to etcd3 you turn on TLS — and that we prevent any external process from accessing etcd. That is good from a security point of view, but it is bad for two networking providers: Calico and Cilium both talk directly to etcd. I don't know if anyone knows of any other things that also talk directly to etcd, but as far as I know there are not, and that is a good thing. For Calico —
A: Networking systems that wanted to support other systems than Kubernetes didn't want to talk to the Kubernetes API — didn't want to tie themselves to the Kubernetes API — so they used etcd. That's less true for the ones that have primarily focused on Kubernetes; now it seems that using the Kubernetes API is a good thing to do, in particular.
A: So there's Calico, and there's some complexity there, but we'll come back to that. And there's also Cilium, where we actually have no option right now, and I need to talk to the Cilium folk. I don't know if anyone is using Cilium, but today's Cilium does not have a mode that does not talk to etcd, so before 1.13 we would have to either persuade Cilium to support CRDs, or spin up a separate etcd for Cilium — both of which are, well...
A: The CRD option is much better, and hopefully we can just persuade them to do that. The separate-etcd option is not bad: we do currently run two etcds — one main, a separate one for events — so it's not the end of the world to support a third, but I would rather they support CRDs. But we have an option there. I'm also not sure if anyone will use Cilium anyway.
A: Ryan has helped me a lot with understanding the Calico upgrade, and we also spoke to Casey from Tigera — Calico Inc — who helped us a lot with understanding it. Let me know if I'm misspeaking, Ryan, but we couldn't figure out a way to go from Calico 2 to Calico 3 that was a non-disruptive upgrade, and disruptive in this case means, for Calico, that your old nodes can't talk to your new nodes.
A: It's not a good thing to have for very long. So, like, your workloads should be fine, but heavily degraded during it — like, there will be a lack of connectivity. But yeah, that is something that can happen anyway — like, zones can be partitioned from each other — so it shouldn't introduce any workload failures, but it's certainly not something that you want to keep going for very long. So that would basically mean there are either — there are two Calico up—
A: There are two upgrades for Calico users, both of which are disruptive. There's the etcd-manager upgrade, which is disruptive in a way that I'll talk about in a minute, but only for masters, and there is the Calico upgrade, which is disruptive for effectively every machine. A Calico user can combine those two upgrades, or we can split them. I am proposing, as a straw man, that in kops 1.12 we move Calico users to etcd-manager and Calico CRDs, and we combine them into a single upgrade.
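For context, moving Calico to CRDs means switching calico-node from the etcd datastore to the Kubernetes API datastore. A rough fragment of what that amounts to, using the upstream Calico v3 setting (not the exact kops manifest):

```yaml
containers:
  - name: calico-node
    image: quay.io/calico/node:v3.x    # placeholder tag
    env:
      - name: DATASTORE_TYPE
        value: "kubernetes"            # Calico v2 setups pointed at etcd instead
      # etcd endpoint and TLS settings are no longer needed in this mode
```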
A: So yeah, maybe that's how I'll explain it to the Calico folks — we're actually keeping folk on Calico, not pushing people away from Calico, I guess. I don't know if anyone has any objection — although I don't know of any objections in principle to the idea. Basically, I think Ryan and I looked and we couldn't find any better — any less disruptive — option that was practical. I could always write something, but Calico Inc didn't write anything, so there wasn't something we could reuse for the full two-to-three transition.
A: I put up a work-in-progress PR, so if people have thoughts, feel free to comment on that PR. But we'll need, like, clear release notes about how to do the upgrade, and I haven't written those yet, because I want to figure out the exact process that we're going to tell people to do before I tell people what the process is.
C: So yeah, I fully support this, and I think you and I talked about this offline, but I think, you know, it's good for us to push this forward. I think that the TLS support — we should be embracing that, and we should be telling people that that's the right way to do things, in my opinion. I guess my couple of thoughts are: when we get the question of, like, "well, I don't want to do that" — are we prepared...
C: Well, you know, to say the team has decided this is the way forward, and this is what we recommend as kops — I think there's that point, that we need to make sure that we're ready to say that. But also I was thinking: so you said that you suggest doing the etcd-manager and the Calico upgrade at the same time. By doing this in 1.12, we're also doing a Kubernetes upgrade at the same time, and a kops upgrade at the same time, right? So...
A: Yeah, that's a point — I think it's a good point. I think it also makes it harder to downgrade: if you're making four changes at once and then you want to roll back, some of those work with 1.11 and some don't, right? That makes it harder. I think I can draw up docs on how to split it up, because I think it is possible.
A: I think it would involve multiple disruptive updates, but okay, yeah. The way I'd want to do it is, like: we can still do Calico with CRDs in 1.12, and you could explicitly not turn on etcd-manager in 1.12, right? So currently we turn it on automatically, but I think we can make it so that you can specify "legacy" — in other words, actively turn it off.
A: So we'll still have Calico using CRDs for everyone using kops 1.12, I think, but if you really want to keep your etcd on, like, version 2, or insecure, for whatever reason, then you can do that. I don't know why people would, but you can do that, I think. That's a good idea, right, right.
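A sketch of what that opt-out could look like in the cluster spec; the provider field with Manager/Legacy values is the shape being discussed for kops 1.12, so exact field names and defaults may differ in the released version:

```yaml
etcdClusters:
  - name: main
    provider: Manager      # etcd-manager; set to Legacy to actively turn it off
    version: 3.2.24        # illustrative etcd3 version
    etcdMembers:
      - name: a
        instanceGroup: master-us-east-1a
  - name: events
    provider: Manager
    version: 3.2.24
    etcdMembers:
      - name: a
        instanceGroup: master-us-east-1a
```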
E: The only thing I would say is I think we need to do more testing around the upgrade process, particularly moving to etcd-manager from the old system on version 2 and upgrading to etcd3 with etcd-manager at the same time. My experience has been that that is not a smooth process, trying to combine those two steps.
E: The issue we've seen is that once it does the first master, it won't validate the cluster at that point, because the API server is down on the other two — because they're still on etcd2 — and the only way through that is to, you know, terminate nodes manually, or, you know, forgo the validation, which is really a bad idea, I think.
A: Right — in a three-node cluster, the new node, which has etcd-manager, can't join: it can't find the other two etcd-managers and fails to join the cluster, because each API server only talks to its local etcd. And I have been doing the testing with, like, just cloud-only instance group rolls, master by master, with the master interval dropped to 2 seconds, right.
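For reference, a sketch of the kind of cloud-only, master-by-master roll being described (values are illustrative; --cloudonly skips cluster validation, which is exactly the gap being debated here):

```sh
kops rolling-update cluster \
  --instance-group master-us-east-1a \
  --cloudonly \
  --master-interval 2s \
  --force --yes
```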
A: No, there's no problem with doing that that I'm aware of, although it is confusing. I wonder if we could do some hash trick or something to try to get them to — trying to think why they actually — oh, because it's just a race, yeah. Trying to think whether we could do some trick to get them to — well, is there a trick we can do? How —
A: etcd2 and etcd3 are generally different storage, and the only thing they share — the only thing they share is, like, four letters; you know, etcdctl doesn't work across them. So what etcd-manager does — so, when you run etcd3, by default it runs both databases in a single process, and it listens for both protocols: when you talk etcd2 you see the v2 storage, and when you talk etcd3 you see the v3 storage.
A: So one database is serving, like, two different data stores in the same etcd process, which is just a little scary. In other words, imagine that we didn't pass the etcd-v3 flag to the API server — you would, like, roll back, so whatever you were on would be — that's a little — I think that's terrifying. So what we did in etcd-manager is, basically, when we run etcd3, we turn off etcd2 mode entirely.
E: If we split the upgrade — because the problem, once you do the upgrade in a split way, is that once you've done the etcd v3 upgrade, the second and third masters' API servers are still configured to talk to v2, and that's no longer available. So if we could have a way for that to, you know, poll and figure out what version of etcd the cluster was configured with...
A: That's a challenge — the challenge is we have to also configure the API server. So etcd-manager does something similar to what you're describing, right: in other words, if you go in and you say "etcd-manager, please upgrade to version 3", it will do that. The downside is we have to do it, as you say, at some stage when it notices that change.
A: I guess one thing we could do — so one option, sure, would be we don't worry about it; we say it's just going to be disruptive: in an HA cluster, on your first master, it's going to be disruptive. The other option is we do something weird, like we have kubelets register via the API load-balancer endpoint.
A: I have held off on the docs because it hasn't been like — we needed to have this discussion as a community and draw out what everyone agrees on. I think there's a new requirement in here, which is that if we want to support turning on etcd-manager in a less disruptive way, we have to make it so — we have to spell out the TLS configuration, so I'll look into doing that. Well, what it means is — here's what I suggest: I think we should —
A: We should make it so that you can install etcd-manager in 1.11 and it won't automatically turn on TLS until you go to 1.12 — I think; anyway, that's what I'll look at. Because maybe not — yeah, maybe not; I'm trying to think. etcd-manager is pretty smart about, like, rolling out TLS, so maybe we'll get away with it. We'll have to see — each node reports whether it supports TLS or not — so I will double-check.
C: As I said earlier, I think that, you know, if that's what we're saying, we stick with it. I was just trying to propose the alternative — I do support it. I think that we should kind of just do it. It's bigger than the last few upgrades we've had, but hey, if it's what we should do, we do it — I support it as well. So don't spend too much time on it.
A: Okay, I appreciate it. I think — I mean, if anyone has any particular reason why they need to talk to etcd not using TLS — other than, like, Calico talking to the local etcd — I think it'd be great to hear that. I think, you know, we have this idea of pushing kops towards the most secure configuration possible, and I think that does mean that we will have to make some more disruptive upgrades. I think this is definitely on that list.
A: I think that's probably right. I think we have reached that conclusion, and I think it's possible. I mean, I'm sort of proud of, like, some of the tests that are in the etcd-manager code base right now — there are some fairly brutal stress tests and integration tests — so I can...
A: There are others — but yes, there are other ones like that, and there are lots of ones where we take a lot longer than we would like to converge. So I think you will be happier with some of the other patches. But yeah, the reason why that matters is, if you are down for — if you're just sitting there for five minutes waiting for a DNS timeout, it's a little frustrating, because you don't see anything when you do kubectl operations.
A: So it's scary. I don't know if there are other things you want to talk about on this front. I think the action items — I think we're generally agreed on the approach. I think there are action items for me to look at breaking it up, and doing a release of etcd-manager that we can turn on in 1.11 that doesn't necessarily turn on TLS.
A: I think we're generally agreed that at some stage we'll do etcd-manager with forcing people to use TLS; generally agreed that the Calico upgrade will probably have to be disruptive; we'll have to figure out what Cilium is going to do; and we definitely agree that I need better and clearer docs, which I'll draw up. And then once we see those — this is not, like, the final say, I think; once we see those docs, we can say "actually, that's too disruptive, we need to split that process out".
E: Agreed. So, a somewhat related tangent, I guess: I know we've had a bunch of PRs in the past about improving the way we do our rolling updates across nodes and speeding that process up when there's a ton of nodes. Where did we land with that? I know at one point it got kicked back over the API machinery, and then it looked like there were some more PRs later. Like, are we willing to take some changes in for that, or...?
A: On the short-term roadmap are three things that I'm suggesting — that I'm basically working on — and one of them is taking those first steps on the cluster API. So the cluster API should be the standard way — like a lot of the other kops work, we're taking all the things that we've built and we're trying to promote them into reusable projects that other components, other tools in the ecosystem, can use.
A: If you want to write a kops competitor — or alternative, I should say, a kops alternative — you can use them, and also then the autoscaler, for example, or other things can build on it, so the autoscaler would be better integrated with kops. Right now we have, like, two brains, and this is a problem. It's a problem in, you know, GKE, and I suspect EKS and all these sorts of things as well — it's hard to deal with — and so we put it in a Kubernetes type and everyone can control the rolling-update logic consistently.
A: I personally have a patch which reduces the times that we wait — some of those are a little conservative — so we can certainly do that; that helps. The other thing is, there are some much bigger fixes, which are good fixes; the question is to what extent we want to do them when the cluster API is coming, and I guess...
A: My concern with them is that they've all been fairly large, right? If someone — if the authors of those PRs — wanted to create very focused PRs that just did smaller pieces, I'd be very open to it. But I feel like if we were to do the — reducing the node default, the node interval, is a good change, and...
E: There was a PR recently — I don't think it made it into 1.11; I think it was merged after, and I don't remember who did it — for allowing us to set the validation timeout. Oh yeah.
E: If we can merge that in, then I am all in support of dropping the timeouts on the other stuff. The biggest thing we hit is that we actually have those increased over the defaults, because in, like, Brazil and Sydney and some of these other regions, sometimes we hit instances that are slow to come up from the API calls in Amazon.
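A sketch of the knobs being discussed; the interval flags exist on kops rolling-update, while the validation-timeout flag is the one the PR mentioned above adds, so its exact name is an assumption here:

```sh
kops rolling-update cluster \
  --master-interval 4m \
  --node-interval 4m \
  --validation-timeout 15m \
  --yes
```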
A: Cool, yes. I think I actually changed — I raised the timeout as well; it's definitely important: once you lower the interval, you have to raise the timeout on the other one. One focused thing we could do — one small thing we could do — is we could enable you to pass a flag to run more than one node (or master, but really node is going to be the one you want) at the same time. It's not perfect — it's not what you really want to do.
A: What you really want to do is grow the number of nodes and then reduce them, but we could almost put that on the user and say, like, if you want this, please manually resize your node pool — your instance group, sorry — before doing the rolling update, and then put it back afterwards. That has sort of been a stumbling block with a lot of these PRs: it's hard to recover from some of those scenarios if we get interrupted in the middle.
A: But if we had a way to run nodes in parallel — someone may come up with a better option — like, you know, a parallel node count equals 5, and then you'd opt into that — I think that would be fine, because it would get us to sort of, like, the future, right. I feel like we're in this weird spot where the cluster API is coming, but it's not here, and it would be nice if it was here.
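A sketch of the manual "surge" workaround described above — grow the instance group, roll it, then shrink it back; names and sizes are illustrative:

```sh
kops edit ig nodes        # raise minSize/maxSize before the roll
kops update cluster --yes
kops rolling-update cluster --instance-group nodes --yes
kops edit ig nodes        # put the original sizes back afterwards
kops update cluster --yes
```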
A: When people do those sorts of things, do tell me, and we can try to, like, make them into something first-class. But yeah, I like that — per-IG parallel rolls, we can definitely do that. The other one is, if we enabled you to roll the masters with parallel counts — maybe we could look at that; maybe it could be a non-disruptive upgrade. It's sort of cheating — maybe our validate logic would work. In other words, that's worth — I might keep that in my back pocket.
A: If all else fails, that might work, because it's certainly not something you do normally. But yeah, I like that idea. In practice, where we want to start — so the cluster API is going to have the MachineDeployment, that's the name of this thing, and it will look a lot like a Deployment. It will have all the options we want; it will have, like, "do I want to surge", max unavailable, like, specifying the number of rolls either as a percentage or as an absolute number. These are all things we want.
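A rough sketch of those rollout knobs on a cluster-api MachineDeployment; field names follow later cluster-api releases (the project was still alpha at the time of this meeting), and the selector/template portions are omitted for brevity:

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: MachineDeployment
metadata:
  name: nodes
spec:
  replicas: 5
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 25%       # extra machines created during a roll, percent or absolute
      maxUnavailable: 1   # machines that may be unavailable at once
```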
A: I don't know how many of those it makes sense to code in — for, like, the percentage. Now that you've said the per-instance-group thing, though, I'm feeling like the percentage is an important one, but I want to keep it scoped — or not scoped, keep it sensible. Oh yeah, that feels reasonable. So I don't know if that answers your question, Ryan, and I don't know if it answers your use case. Yeah.
A: Okay, that's "cherry-pick the validation one" — noted. Okay, we have one last little item on the agenda, which is the thing I just alluded to, which is: after we finally get etcd-manager squared away, what am I personally thinking about doing? That switch to CRDs — there's a work-in-progress PR up for that, replacing the kops API server with CRDs, which will hopefully actually make the kops server work. Then first steps on the cluster API, which we just mentioned.
A: The cluster API is still very alpha, so we likely won't — I can't imagine we'll make it the default yet, but we can make it so that you can, like, have an instance group that is backed by a group of nodes that is backed by the cluster API, at least so that people can start to experiment. I think that will definitely have to be feature-flagged, because, one, it's alpha, and two, I suspect we won't have parity with all the features of kops.
A: Yeah, I think the answer is no — we're still working on it. I.e., we have a real problem — we have a challenge in kops — that today we embed our bundle, or specification of our add-ons, into the kops binary itself, and that's not the right way to do things, because it causes challenges: for example, every time there's a new version of add-on X, we have to release a new version of kops to get that new version.
A: That's a little suboptimal, and the bundle externalizes that into a YAML file — or a YAML-like file — that is referenced and can be updated separately from kops itself, or released separately from Kubernetes, for example. The addon operators then answer the question of how you apply those changes safely. So in theory, you know, it's somewhat like the etcd problem — well, pretty much like the problem we face where etcd has to upgrade and then the API server flag has to change.
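For reference, a sketch of the externalized addons file being described, following the format of the kops channels addon manifests (names illustrative):

```yaml
kind: Addons
metadata:
  name: example
spec:
  addons:
    - name: my-addon.addons.k8s.io
      version: 1.2.3
      selector:
        k8s-addon: my-addon.addons.k8s.io
      manifest: v1.2.3.yaml   # a new addon version can ship here without a new kops release
```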