From YouTube: kubernetes kops office hours 20190621
A: Hello everyone, it is Friday, June 21st. This is kOps office hours. I am your moderator and facilitator, Justin Santa Barbara; I work at Google. A reminder: this meeting is being recorded and what we say goes on the Internet, so please be mindful and abide by our code of conduct. I put a link to the agenda in the chat. There are a few things on it, but we should be able to get through it without too much trouble; please do put any other items on there so that we can be sure to get to them.
A: Yes, thank you. So there is a working group, k8s-infra, which should host these things. I would say it's not yet ready to do that, but I am trying to get the ducks in a row in preparation. The script for building new images can, I think, be run more or less unattended now, so once we sort out the mechanics, I think we could actually get there.

The big remaining piece is going to be figuring out where and how to securely store AWS credentials. But that is a problem the k8s-infra working group has to solve anyway for a bunch of other things, like storage of secrets, so that will happen eventually. I'm also going to try to get better about pushing images, maybe even getting onto a schedule, like a weekly kind of thing. Separately, though, there is another thing happening, which is a sub-project. I think we should try to align our work with that, though they're not tackling the actual publication of the image; they're just building a tool that adds files to an AMI. But in my mind this is a first step towards a bigger effort, so hopefully we can get kube-deploy integrated into that effort. And yes, no great progress yet, but lots of things happening, is how I would summarize it.
A: The first step would be that we do it every week, and then the next step would be to automate it. Yes. Oh yeah, I'm going to rearrange the agenda a little bit, Guy, if that's all right, to put your thing after the other one. Given we talked about releases, I'm going to move your CNI thing down to just after... I guess everything; oh well, I hope everyone is all right with the adjusted agenda.

Do you mind if I do it last, or after the release discussion? Can do, yeah? You know, I have a time conflict. All right, great; I mean, I'm leaving after, but yeah. Okay, we can... let's talk about it now, then: CNI scalability for 1.12. It looks like you have a good suggestion, Guy. Want to take it away?
C: For context, Skyscanner have sort of moved from what we're calling our artisanal clusters to kOps, and part of that was a pull request that Alex already raised around getting config for the BGP route reflection. We're now spinning up some more clusters that we'd quite like to be able to scale, plus our existing clusters, but they don't need the full weight of BGP route reflectors, and for that Calico heavily recommends using Typha to register on the API server. And I know, because of the way Calico switched over to recommending etcd as the datastore and then back to the API server over the course of about three release cycles, that's why Typha was previously there, then not there, and is now back, albeit set to zero replicas. So I was hoping to basically allow users the flexibility to set up Typha in front of their API server if they want to start scaling above 100 nodes.
C: The BGP route reflectors: there's a lot more complexity there. I think different people have different opinions about whether they should be used. The trouble is, you can configure each node to be a reflector through annotations and the CRD, and there are different opinions, from what I've seen, as to whether or not you should be doing that on your masters or somewhere else. So there's a lot more complexity there in terms of an opinionated way; there's not one accepted way of doing it.
A: This sounds great. I don't know whether we should automagically change it, but I think probably the easiest first step would be to expose the functionality, and then in a separate PR we could add the behavior, because we probably should change the default... I don't know, it's complicated. But yes, let's start with exposing the functionality, and then I don't see any reason not to get that in. Yeah, that sounds great.
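For readers following along, a minimal sketch of what exposing that functionality looks like from the user's side, assuming the knob is the `typhaReplicas` field under the Calico networking config in the kOps cluster spec (zero replicas keeps Typha disabled, matching the state described above); the cluster name and replica count are illustrative:

```bash
# Hedged sketch: enable Typha for Calico via the kOps cluster spec.
kops edit cluster my.cluster.example.com
#   spec:
#     networking:
#       calico:
#         typhaReplicas: 3   # illustrative; sized for clusters above ~100 nodes
kops update cluster my.cluster.example.com --yes
kops rolling-update cluster my.cluster.example.com --yes
```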
E: I just wanted to... I posted this issue yesterday; it's something we hit when you're using mixed instance policies with kOps, something we planned for. For anyone who's not familiar with mixed instance policies: you say, for example, that you want to run 40% on-demand and 60% spot, and AWS tries to keep that balance, which is great, except that Kubernetes doesn't really know about the changes that AWS makes. So, for instance, you scale down a couple of nodes and now you don't have the right mix.

AWS will then scale up and scale down a node out of the ASG on you, and because it happens through the autoscaling group, there's no drain on the node, so you end up with everything getting killed at once, which was really bad if CoreDNS happens to be on that node, which is how we actually noticed the issue. So there are two solutions. The first solution, and what we actually implemented, was to turn on automatic instance protection on the ASG.
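That first solution maps onto standard AWS CLI calls; a sketch with an illustrative group name (the PR discussed below automates this in kOps, so treat these commands as the manual equivalent):

```bash
# Protect newly launched instances from scale-in, so the ASG's rebalancing
# cannot terminate a node that Kubernetes has not drained.
aws autoscaling update-auto-scaling-group \
  --auto-scaling-group-name nodes.my.cluster.example.com \
  --new-instances-protected-from-scale-in

# Instances that are already running need to be protected explicitly.
aws autoscaling set-instance-protection \
  --auto-scaling-group-name nodes.my.cluster.example.com \
  --instance-ids i-0123456789abcdef0 \
  --protected-from-scale-in
```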
E: So that's the downside there, but it's a simple change and it just works. And we vetted both of these solutions with AWS support as well. The other solution would be to implement lifecycle hooks on the ASG and then use something like... there's a project already that does interception of those lifecycle hooks and drains the nodes. Either one of those solutions would be better than where we are today. It looks like, per my comment down there, someone is going to work on the PR for the first solution, which, from my look at it, is what we deployed, so that's good for that. But I just wanted to bring this back up.
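The second solution, sketched as the underlying AWS CLI calls; the hook name and timeout are illustrative, and the draining component (a small daemon or Lambda) is left out:

```bash
# Fire a lifecycle hook on termination so a handler gets a window to drain.
aws autoscaling put-lifecycle-hook \
  --lifecycle-hook-name drain-before-terminate \
  --auto-scaling-group-name nodes.my.cluster.example.com \
  --lifecycle-transition autoscaling:EC2_INSTANCE_TERMINATING \
  --heartbeat-timeout 300

# Once the handler has drained the node, it lets termination proceed.
aws autoscaling complete-lifecycle-action \
  --lifecycle-hook-name drain-before-terminate \
  --auto-scaling-group-name nodes.my.cluster.example.com \
  --instance-id i-0123456789abcdef0 \
  --lifecycle-action-result CONTINUE
```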
F: A comment on this: this is the same thing we see with regular ASGs, right? Because the AZRebalance process also does cross-zone rebalancing, so if it terminates something it will surprise the cluster autoscaler; so you just turn off the sub-process that does the rebalancing. This is something similar; we do this internally, and I think kOps has that. Okay, yeah.
D
I
actually
think
for
me,
yeah
I've
done
that
be
the
suspend
process
on
an
instance
group.
So
that's
one
way
you
can
fix
that
aspect,
but
that's
just
suspend
like
we
do
reap
suspend
rebalance
if
you're
across
multiple
regions.
That
way
it
doesn't
try
to
spin
up
new
things.
But
this
is
this.
The
I
first
I
was
like
this
is
an
interesting
edge
case
and,
frankly,
I
didn't
know.
The
autoscaler
had
added
extensions
group
support
which
that
was
great
to
see.
So
now
we
can
move
to
that,
but
I
it
does
I.
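What's being referred to here: kOps instance groups can suspend ASG scaling processes directly. A sketch, assuming the `suspendProcesses` field on the InstanceGroup spec (which kOps exposes) and an illustrative group name:

```bash
kops edit ig nodes --name my.cluster.example.com
#   spec:
#     suspendProcesses:
#     - AZRebalance   # stop AWS from terminating nodes to rebalance zones
kops update cluster my.cluster.example.com --yes
```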
B: For the same case, I can also recommend: AWS, in the awslabs repositories... they wrote a script that just watches the instance metadata and waits for the termination notice in order to do the drain, or for you to be able to react to it. This is what we used with ECS: it just automatically, before the node gets terminated, drains it before it actually gets shut down. So if you use spot instances, that's potentially something for kOps as well: to just have the AWS API tell the Kubernetes API, hey...
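A minimal sketch of that watch-the-metadata pattern; the 169.254.169.254 spot termination endpoint is real, while the node-name discovery and drain flags are illustrative:

```bash
#!/bin/bash
# Poll the EC2 instance metadata for the two-minute spot termination notice,
# then drain this node before AWS reclaims it.
NODE_NAME=$(hostname -f)   # assumes node names match the instance hostname
while true; do
  if curl -sf http://169.254.169.254/latest/meta-data/spot/termination-time >/dev/null; then
    kubectl drain "$NODE_NAME" --ignore-daemonsets --delete-local-data --force
    break
  fi
  sleep 5
done
```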
A: Sounds good. If we at least could do that... what do we need to be mindful of, like whether it's considered a breaking change or a significant change that we would probably want to do in a .0 release? You can definitely get it into 1.13.0 and we'd write a release note. If we're talking about putting it into the next 1.12 release, we should be more careful, but if it's bad enough, we can do that, I would say.
A: The other trick, as you say, is to just do that when creating a new cluster, and to change the value with which we create new instance groups, I guess, in this case. And we also could put support into kops upgrade to recommend these things. I don't know whether anyone actually runs kops upgrade once they get to a certain level, but maybe we could start encouraging people to run kops upgrade without actually applying it, so you can sort of see those things. Yeah.
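For reference, kops upgrade already works this way: without `--yes` it is a dry run that only prints the recommended changes (cluster name illustrative):

```bash
kops upgrade cluster my.cluster.example.com         # preview recommendations only
kops upgrade cluster my.cluster.example.com --yes   # apply them to the cluster spec
```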
D: I have it on the way, almost complete, and I was just going to leave it the normal way that we contribute things like this, where it's off by default, and then we can enable it either in a separate PR, or we can have a discussion and I can, you know, patch the default in the PR once we have it. So why don't we... does anyone disagree with going ahead with that for now? Cool, all right.
A: Consider me poked. Yes, I just want to think about it. Yes, it looks great; I would just need to really think about it very carefully, but yeah, it looks good. For people who don't know: this is support for changes that are non-version changes, making sure that they are applied correctly.
A: So, for example, if in Calico we have a field to enable, let's say, some feature, and that changes the manifest but not the version of the manifest, Ryan's fix will basically hash the manifest and say: this is a different manifest. Even though it has the same version, because it's still Calico at whatever version, it is a different configuration. So that will actually be a good fix, because otherwise those sorts of changes are a little bit harder to apply.
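The idea behind that fix, reduced to a sketch (not the actual kOps implementation): key addon re-application off a content hash of the manifest instead of its version string, so a config-only change at the same version is still detected.

```bash
# If only the version string is compared, a config-only change is missed;
# a content hash catches it. Paths are illustrative.
deployed_hash=$(sha256sum deployed/calico.yaml | cut -d' ' -f1)
desired_hash=$(sha256sum desired/calico.yaml | cut -d' ' -f1)
if [ "$deployed_hash" != "$desired_hash" ]; then
  echo "manifest changed (even at the same version); re-apply addon"
fi
```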
D: That's pretty much it. You know, we're a couple of betas in, and Justin cut another one last night. I mean, once we have the beta, then we should wait another week or so, but I'm not aware of any major blockers. I was more just curious whether other people are, and especially since 1.15 was cut, you know, we should push 1.13 sooner.
E: We've got a half-an-hour deploy system to get around this; I mean, we just stripped annotations off to make it work for the upgrade, so we can certainly live with it another release. I'd have a bigger concern with getting in the PR Mike's working on, or the PR Mike said he would do, for instance profiles, versus the one you put in. That seems like a bigger issue.
A: Or beat me to it; okay, that's great. But yeah, yup, right, so, okay: let's try to do 1.13.0 sort of middle of next week. Do I think we should get... actually, the image didn't change, so it's not a big deal, but we should get the TCP SACK one out of the alpha channel, and then, yeah, let's do that. And then, at the same time, I will also cut the 1.15.0-alpha.1, and we can bump 1.14.0 to beta. Look at us, we're almost caught up; we're getting there.
A: There are binary artifacts which go to S3 buckets, and those are a work in progress in the working group, wg-k8s-infra, to do; we actually have a PR, and we might want to get that working. And then there are binary artifacts which go to GitHub, which I think we want to keep, and we don't really have a strong plan for how to do that yet. I think, honestly, if we got a tag build going in prow, so that we could do a tag and out would come the golden binaries, that would be a great first step. We'd still promote them manually; we're still going to support them manually regardless. But the process will change so that we don't worry as much about things like how we know the tag is valid.
A: We will do another pull request stating the dns-controller, the new image: so dns-controller 1.16.0 has this SHA, and we basically send a pull request to a YAML file, and the committing of that will trigger the image promoter to act. And we can do the same thing for the binaries as well. Okay, let me start with the tag release; that'll be great. Yes, I mean, you know.
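A hedged sketch of what such a promotion PR contains, following the image promoter's manifest format; the file path and digest are placeholders:

```bash
# Committing a change like this to the promoter's manifest repo is what
# triggers the promotion from the staging registry to prod.
cat <<'EOF' >> images/k8s-staging-kops/images.yaml
- name: dns-controller
  dmap:
    "sha256:<digest-of-the-1.16.0-build>": ["1.16.0"]
EOF
```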
B: It's Google Cloud Build, okay; I've been in the branch management... it's like four thousand lines of code, and you are unable to run this by yourself. I spent almost a day trying to build the binaries by myself, on my own Google Cloud account, with the same tooling; it is impossible.
A: That's reassuring. The big advantage of Google Cloud Build over prow is that there are always concerns about the security of prow, so we don't want to do the actual binary promotion from prow; but I mean, we can probably figure it out. Anyway, that's pretty much why they're using Google Cloud Build today. But yes, then the secrets thing is a whole other thing.
G: I'm trying to understand what the goal is. Is it to allow kOps to store state in CRs, or is there, you know, some sort of kOps controller coming, or is that an idea? And, I guess, a maybe related question: what is the kOps API server, and is it usable today? Or... yeah, I don't know, sorry, it's kind of a...
A: The project kicked off, I guess, years ago now. At the time, aggregated API servers were the thing. Aggregated API servers are no longer the cool thing; the writing is on the wall, and CRDs are now the new cool thing, the new hotness, and they are a much easier interface for users to use. You don't need a separate API server, you don't need any of this; you just register your CRDs. kOps already moved to real API types a long time ago, like we did even before the kOps API server, but they are real API types. I think we changed the API group so that they can be registered as CRDs; today you can. And so yes, we will support controllers from there, and there are two goals. The short-term goal is to enable cluster API. I don't think it's all merged, but if all the PRs go in, we should be able to, given an existing kOps cluster, create a new InstanceGroup CRD instance in that cluster (only on GCP today), and it will basically spawn nodes from there without having to run the kOps CLI. So it's introducing cluster API into kOps, not for masters, not for new cluster bring-up, but just for that. And then the sort of longer roadmap is full management of kOps clusters, optionally, from that direction as well; you can still use the CLI tool.
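A sketch of what that short-term goal looks like from the user's side, assuming the kOps API types registered as CRDs under `kops.k8s.io/v1alpha2`; since the PRs weren't all merged at the time, the names and field values here are illustrative:

```bash
cat <<'EOF' | kubectl apply -f -
apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
  name: nodes-extra
  labels:
    kops.k8s.io/cluster: my.cluster.example.com
spec:
  role: Node
  machineType: n1-standard-2   # GCE, matching the GCP-only caveat above
  minSize: 1
  maxSize: 3
EOF
```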
A: The dumbest one possible would be to effectively watch those objects and trigger the same logic; the really dumb one would be to exec kops, right. So, presumably, there is a separation of the CLI wrapper from the commands that it runs, so you could, in your controller, start to get smarter, and I think that's what we've started to do in... I'll see if it's on the PR; I know there's more, but I'll see if it's on the PR where we do that.
A: So you would still, if you wanted to do it from scratch, have to generate keys to the S3 bucket, and this is where the cluster base, or VFS cluster base, comes in; I think there's a field there which is supposed to enable that to be split, but you're at the edge of what might or might not work at this point. Okay.
A: Is there... I thought there was a separation already, so the CLI commands already call into a package, and there is more logic than there should be in those CLI commands, but you can already do most of that. But yes, I think the problem with that PR was: I was very happy to see that PR, except that because it moved everything at once, it immediately went into rebase hell and we could never get out of rebase hell.
D: I was going to say, commenting on this further, the CRD and all that stuff: we did talk about this at KubeCon EU, so if you guys haven't seen it, I'll put our slides in the Zoom channel. If you want to take a look, we went through some of the CRD stuff and how that fits in with cluster API, yeah.
A
And
it
looks
like
a
quick
follow
up
or
quicker
follow-up
and
Daniel
about
what
happens
when
cops
crashes
in
the
middle
of
writing
state
or
plant
industry
sources,
uh-huh
yeah,
the
answer
is,
you
can
be
left
in
a
partial
state.
You
shouldn't
neither
s3
nor
GCS,
nor
the
coop
API
server
will
let
you
write
like
half
a
file
but
and
we're
sort
of
careful
not
to
be
too
weird.
But,
yes,
you
certainly
will
be
in
an
odd
state
and
you
should
run
you
should
wrap
your
computer
run.
You
should
reconcile
again.
A: It is idempotent, so you can run it twice. It's just that if you get halfway through, if we're doing a sequence of updates and we get halfway through, honestly, it's not clear where you are, and I guess it depends on the nature of the change you're making and how disruptive it is. For normal changes it's okay, but yeah, let me think of an example where it would not be good.
A
So
if
we,
depending
on
the
order,
we
do
that
like
until
both
are
done,
it's
not
gonna
work,
but
at
the
same
time,
until
both
are
done,
it's
not
gonna.
It's
not
gonna
work.
Your
nodes
are
it'll
talk
to
each
other
until
that's
fully
rolled
out,
you're
gonna
be
weird
to
say
anyway,
it's
it's
better
to
so
it's
better
to
finish
it,
and
then
we
also
have
the
rolling
update
command,
which
will
like.
A
We
currently
only
changed
the
the
specification
on
the
a
diverse,
auto
scaling
group,
and
you
have
to
do
a
rolling
update
to
get
it
actually
apply
to
the
instances
and
same
story
there.
It
should
be
idempotent
if
it
doesn't
complete,
you
should
trigger
it
again
to
make
sure
everyone
reconciles,
but
that
should
actually
fit
pretty
well
I
think
into
the
controller
loop.
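In practice the recovery story is simply to run the same commands again; both are idempotent (cluster name illustrative):

```bash
kops update cluster my.cluster.example.com --yes          # reconcile cloud resources
kops rolling-update cluster my.cluster.example.com --yes  # roll pending changes onto instances
```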
A: Sorry, did someone say something? Sorry if I spoke over someone. No? Okay. Oh cool. Mike, it looks like you put the CRDs into cluster API; you can talk, or link it in, if there was anything you wanted to add there. "No, I was just adding that on." Cool, thank you for doing that. The last item on our agenda: Ryan, the etcd-manager one... yeah.
F: ...Because I want a way to be able to rapidly remediate an issue, and right now I'd have to go and build that binary and find out where it is and keep the same version and whatnot. So I've been meaning to put up a PR; I just haven't gotten around to it. But I wanted to check in with you before I do. So, yeah.
A
I
think
that
would
be
wonderful
to
do.
We
we
talked
about
putting
it
into
the
EDD
manager,
the
container
image
we
did
about
having
it
as
a
github
download
as
well.
Oh
good,
I,
don't
know
if
there's
any
value
in
having
it
separate.
I
can't
imagine
there's
that
much
value,
because
anyway,
I
think,
though,
was
one
one
or
either
both
of
those
as
a
start
would
be
good.
A
The
if
you
want
together.
That's
wonderful!
Thank
you.
Otherwise
it's
definitely
on
my
list
but
yeah.
If
you
do
it,
that's
wonderful
what
I
created
on
faster
and
then
you
can
run
it
from
your
local
machine
and
point
at
the
the
same
@cd
state.
So
I
guess
we'll
call
it,
which
is
an
s3
bucket
yep,
and
it
shouldn't
be
that
important
to
match
the
exact
versions.
But
it's
pretty
good
idea.
Yeah.
A: Yes, so... by lifecycle you mean like cluster API? Yeah, cluster API, sorry; yeah, okay. Yes, I do hope we get there, but yes, as you say, it's not imminent; it's not, like, happening in 1.15, as it were. In 1.15 I think we did change the timings so that there are fewer big synthetic delays; you know, that may come to 1.13 as well. The big change, I think, would be to support parallel rolling updates.
A
I
think
if
we
did
I
think
if
we
did
that,
maybe
we
could
like
scope
down
the
change
a
little
bit
to
try
to
do
some
of
some
of
these
things,
because
if
I
recall
correctly
that
the
the
reason
why
the
PRF
and
gamble
became
so
large
was
around
the
edge
cases
around
what
happens
when
you
are
interrupted
when
I
was
in,
you
are
interrupted
effectively.
Mm-Hmm.
A
Beer
for
that
so
I
think
I
think
if
we,
my
money
gut,
feels
if
we,
if
we,
if
we
did,
if
we
terminated
more
than
one
instance
in
parallel,
mm-hm
or
actually
that
the
first
thing
to
do
is
for
people
to
have
more
than
one
instance
group
to
run
those
instance
groups
in
parallel
and
then
the
second
one
would
be
to
to
do
more
than
one
node
in
our
instance
group.
At
the
same
time,
I
think.
G: A question; this might not be relevant any more, I don't know, probably everybody's already on v3, but I'm just curious: for the v2-to-v3 migration, did you stop all writes and wait for the raft indices to converge before starting the migration? There is an issue, I guess, with keys that have leases. Yes; if you don't do it, there's some corruption that...
A: So we avoided that. Basically, the long and short of it is that the etcd2-to-etcd3 migration does not work if you have HA, if you have more than one node; it just doesn't work, and we did not use it. This approach also has problems, the ones described in this issue, and so we did a different approach where, effectively, we do stop all writes, and we do that by moving etcd onto a different port, because etcd itself doesn't support stopping writes.
A
We
then
moving
the
whole
cluster
onto
a
different
port.
So
basically
no
one
knows
where
to
reach
it,
because
they're
all
configured
to
talk
to
different
port
and
then
we
dump
the
backup
and
we
restore
the
backup
when
it
is
at
cd32
XE
d3
when
it
is
compatible.
When
is
snapshot
and
restore
compatible.
We
do
that,
but
when
it
is
not
ie
be
at
cv-22,
sp3
migration
will
restore
it
into
a
temporary
at
C,
D,
D,
D
to
cluster
and
then
load
every
single
key
and
restore
it.
A
I
can
write
them
into
ed,
CD
3
manually
and
which
is
basically
what
that
migrate
stripped
is
doing,
and
then
we
restart
the
API
servers
and
then
we
also
bounce
all
the
clients
bounce
as
well.
So
that
should
be
how
we
get
around
this
I
think,
but
it
also
helps
that
we
do
this
as
part
of
an
update.
So
we
also
typically
bounce
all
the
nodes
shortly
after
so
I
think
that's
how
we
own
the
other
thing
is
it's
not
because
it's
not
a
live
upgrade.
A
We
we
blew
all
the
API
servers
have
a
hard
disconnect.
So
that's
how
I
think
we
get
around
this,
but
we
can
talk
more
about
this
I
guess
somewhere.
If
you
want
to
to
worry
about
it,
but
you
know
my
plan
for
stdm
convergence
is
that
we
try
not
to
do
this
in
at
CD
ADM.
We
don't.
We
just
never
encourage
any
form
of
Ed
CD
migration.
That
is
not
that
is
of
this
nature.
Ever
again,
I,
don't
know
you
feel
about
that.
Yeah
yeah.
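A greatly simplified sketch of the copy-as-a-client step described above (etcd-manager implements this internally; the endpoints and key prefix are illustrative, and a real implementation must handle binary values and directory keys, which this shell loop does not):

```bash
SRC=http://127.0.0.1:4001    # temporary etcd2 cluster restored from the backup
DST=http://127.0.0.1:4002    # new etcd3 cluster, on the moved port
# Walk every key in the v2 store and write it into the v3 store as a client.
etcdctl --endpoints "$SRC" ls --recursive /registry | while read -r key; do
  value=$(etcdctl --endpoints "$SRC" get "$key")
  ETCDCTL_API=3 etcdctl --endpoints "$DST" put "$key" "$value"
done
```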
A: Yes, and so the good news is that because it's a lease, everyone loses their lease anyway during the upgrade, so, great. But yeah, I think that's basically why we get around it. The whole thing is not the best, but in terms of how we do it to get to a consistent state: it looked like they tried to do it on... no, they suppress it anyway. The way we do it is we basically bring the old data into the new cluster.
A
The
new
cluster
is
already
a
che
or
like
multi-member,
and
then
we
write
data
into
it.
So,
if
we
believe
xv-3
is
stable,
then
nothing
and
nothing
bad
can
happen
to
the
sed
three
data,
because
we
are
just
interacting
with
as
a
client
got
it.
Thank
you.
Thank
you.
There
was
a
little
if
you
believe
it
to
be
stable
thing,
but
yes,
all
right.
A
Ryan
you
asked:
should
we
cherry
pick
those
faster
rolling
updates,
we're
gonna
call
time
a
couple
ribs?
Should
we
cherry
pick
those
faster
League
updates
to
113
and
114
I?
Don't
know
I
will
say
I
did
that
I
seems
to
be
working
fine
for
me.
I
got
some
pushback
on
the
PR
when
it
was
first
merged
around
like
how
do
we
know
this
is
safe,
so
I
also
we've.
A: Well, why don't we... I can't say yes, because I felt like I got it in by saying we're not going to cherry-pick it. But I think for 1.13 the answer is probably no, because we want to get that release out; if people want to cherry-pick it to 1.14, I would be supportive of that. But it's not like it's...
A: Then, mm-hmm, excuse me; then I wish everyone a very happy weekend, and I will see everybody in two weeks.