From YouTube: Kubernetes kops office hours 20200925
Description
Recording of the kops office hours meeting held on 20200925
A
Hey everyone, this is the kops office hours meeting from the 25th of September. I am Cyprian, your host for today. Please be nice to each other and bring up interesting topics. So, yes, let's start.
A
There
was
a
bit
of
a
mix
up
with
the
artifacts.
I
really
thought
that
it
would
work
without
copying
to
s3
and
the
artifacts.
I
o
and
well
bumped
into
let's
say
a
bug
but
ended
up
being
bug,
slash
feature,
so
we
will
discuss
about
it
a
bit
later.
A
But anyway, it works. We guided the people that couldn't use it to use the KOPS_BASE_URL, and everything went okay. I guess, being an alpha, it wasn't such a big issue.
C
I'm going to share my screen so everyone can see this. So here are the Doodle poll results.
C
This is in US Eastern time. There's one time slot with four votes and a few others with three votes. So we go with the four votes: Tuesday at nine a.m. Eastern.
A
I guess it depends a lot on Justin's schedule. Well...
B
I
mean
9am
works
for
me,
it
might
not
work
for
people
who
might
not
work
terrible
people
on
the
west
coast,
but
I
think
I
think
we
should
try
it
and
actually
this
is.
I
was
wondering
whether
this
we
could
relate
this
to
hacktoberfest,
and
if
we
choose
to
do
that,
one
of
the
things
we
could
do
is
we
could
hold
a
like.
One
of
the
suggestions
on
hacktoberfest
is
to
hold
a
effectively
office
hours
for
contributors
type
thing,
and
so
we
could
try.
B
We
could
try
a
couple
of
these
time
slots
and
see
actually
see
how
they
feel
see,
who
shows
up
see
where
they
get
new
people
showing
up
in
them.
So,
like
it's
super
helpful
to
see
like
the
four,
the
couple
of
threes
so
like
we
can
try,
we
can
actually
experiment
with
a
bunch
of
these.
If
we
want
I'll,
I'm
not
going
to
do
8am.
Sorry.
B
Adam
is
a
little
tough
for
me
actually,
but
yeah
it
might
work.
I
have
to
do
school
drop
off
for
for
8
am.
A
Let's
step
a
bit
back,
so
one
of
the
topics
was:
do
we
want
once
a
week,
or
at
least
for
oktoberfest
time,
or
do
we
want
to
move
it
permanently?
To
a
different,
I
mean:
do
we
want
to
keep
the
current
one
and
add
another
one,
or
do
we
want
to
just
move
the
current
one.
B
I
suggest
we
make
we
we
add
one.
We
add,
we
add
one
to
start
with
and
we
see
like
you
know
like.
Actually,
it's
nine
am
too
early.
Who
shows
up
doing
people
show
up.
Does
everyone?
It's
nine
am
too
early
for
everyone.
That
sort
of
thing
like
is,
is
tuesday
a
bad
day
like
what
happens
in
practice.
We
add
one.
We
we
have
a
natural.
B
If
we
choose
to
do
oktoberfest,
we
have
sort
of
a
natural
additional
thing
that
won't
cause
too
much
conflict
with,
like
you
know
what
happens
in
each
meeting
type
thing,
so
we
can
use
that
more
for,
like
digging
into
the
code
or
like
that,
it's
like
more
in
more
in-depth
technical
discussions.
B
I
guess,
and
this
one
more
for
steering
guidance
which
is
sort
of
how
it's
traditionally
been
and
sort
of
evaluate
from
there,
but
I
propose
we
start
by
adding
one
or
possibly
rotate
through
a
couple
of
these
slots
and
see
how
we
feel,
but
not
not
touching
our
current
time
slot
until
we
are
comfortable
with
the
new
time
slots,
and
then
we
can
make
a
more
informed
decision
about
raising
the
frequency
changing
the
time
whatever
we
want
to
do.
B
So hopefully my kids will never watch this. So, yes, we could do it on the... we could also do it just a little bit before the kickoff of that, but...
A
Yeah, so the next one would be Thursday.
A
And I think it would be pretty close to this one and people would not get to see... I like that. Okay, so even if we want to do it, let's say, more regularly, it's a bit awkward anyway where you put it: if you put it after, it's just three or four days after; if you put it before, it's three or four days before the regular one. So Thursday seems a bit more spaced in between.
A
Okay, I guess not. So, moving on: 1.19 branch planning. I think the reported bugs kind of slowed down for 1.18.
D
So, I'm talking about 9953 later; I think we need to get an answer on that. That might be a blocker.
B
It would also be great to have the effort branched if we want to participate and encourage people to start contributing, so we don't have to say: actually, you can't contribute right now, because we're stabilizing the branch. It'd be good to have master more open. Okay.
A
So, branch as soon as possible, and then we see what we merge into 1.19. Any thoughts about beta 1? Should we wait a week or two, or release?
B
Right, yes. I'd like to reach a decision on this issue I found this morning, around the kubeconfig and inference of the cluster. I think I'm probably the biggest objector, and I've probably satisfied myself that it's okay not to preempt that discussion, so I don't think it's going to be a big deal. I think we should just decide that it is okay before we... okay.
A
So let's keep that for the end of the S3 discussion, and we can move on to the next few things. The next one is mine. It's related to the kops artifact hashes. Until now they were only picked up from the S3 mirror, considering it was the safest, let's say, and I added a PR to use all the mirrors to read those hashes, so in case one of them is down we have a backup.
A
I had a discussion with Justin over email, and he said that this was also a security feature, because at least GitHub can probably be modified manually, yes, or, I don't know, it's less secure.
B
It's more that... what it lets us do is have untrusted mirrors in a list. I'm not sure whether I consider GitHub untrusted or not, but if we wanted to have mirrors that we really didn't have to trust, we can do that today by sort of having the two lists: anything we download from a mirror is sha-checked, as long as we got that sha from something we trust more.
B
We can sort of add almost any mirror to the wide list, but we need to keep the list of sources of the shas more tightly controlled, to ones which we feel we can trust.
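A rough sketch of the two-list scheme being described here, with hypothetical names rather than actual kops code: the sha256 is read from a tightly controlled trusted source, so the mirrors serving the artifact itself never have to be trusted.

```go
package artifacts

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
)

func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	if resp.StatusCode != http.StatusOK {
		return nil, fmt.Errorf("GET %s: %s", url, resp.Status)
	}
	return io.ReadAll(resp.Body)
}

// downloadVerified tries each mirror in turn. A mirror only has to serve the
// right bytes; it never has to be trusted, because the expected hash came
// from the tightly controlled list of sha sources.
func downloadVerified(mirrors []string, trustedSHA256 string) ([]byte, error) {
	for _, url := range mirrors {
		data, err := fetch(url)
		if err != nil {
			continue // mirror down: fall through to the next one
		}
		sum := sha256.Sum256(data)
		if hex.EncodeToString(sum[:]) == trustedSHA256 {
			return data, nil
		}
		// Hash mismatch: treat as a bad mirror and keep trying.
	}
	return nil, fmt.Errorf("no mirror served an artifact matching the trusted hash")
}
```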
B
I don't think ordering is necessarily safe, because what if they all happen to be down? But I think we could have two lists: a fully trusted one and a wider list. I don't know, actually, where we would put GitHub. Would we consider it fully trusted? Probably, but I don't know.
B
Yes, that's a good point, an excellent point, and I think, yeah, we need to decide right now. GitHub is definitely in the critical trusted path. We could say, look, always download from artifacts.k8s.io, and then it's no longer in the critical trust path. But yes, I think it's fine. I think your change is great. I don't think we have any mirrors today that we consider untrusted.
B
I
do
think
it
would
be
great
in
future
to
be
able
to
have
untrusted
mirrors,
so
I
it's
not
necessarily
a
blocker
for
any
releases,
but
it
would
be
a
nice
feature
to
be
able
to
have
these
two
mirror
lists,
one
of
which
could
be
a
broad
list
of
mirrors
that
aren't
particularly
trusted
or
don't
have
we
don't
have
to
worry
about
trusting
them
like
we
just
don't
have
to
question
whether
we
need
to
because
if
it's
there
and
it
matches
great,
if
it
doesn't
doesn't
matter.
A
Okay,
but
I
think,
unless
we
actually
find
some
mirrors,
that
we
don't
trust
this
is
I
don't
know
an
optimization
that
we
shouldn't
do
right
now.
A
I
think
the
order
that
I
put
is
artifacts
github
and
then
s3,
I'm
not
sure
if
that's
the
greatest
or
if
you
want
to
considering
that
s3
is
legacy
or
do
we.
I
like
that.
I,
like
that
notion.
I
think.
B
We
can't
rely
on
the
ordering
to
be
a
a
security
guarantee
because
someone
could
always
block
like
it's
fairly
easy
to
attack
and
like
block
a
particular
access
to
a
particular
site,
but
I
think
that
ordering
makes
sense
long
term.
I
agree.
We
want
artifacts,
like
case
studio,
to
be
the
canonical
source
so
that
we
get
onto
that
sort
of
trusted,
release,
process
and
now
we're
just
quibbling
about,
like
the
potential
existence
of
future
other
mirrors
and
whether
we,
where
we
put
github
in
the
in
the
trust
hierarchy.
A
If we find some mirrors that we want to use, then it would be pretty easy to add trusted and untrusted later. Okay, anyone, any other thoughts on this?
A
Okay,
guess
not
so
next
one
again
mean
default
docker
version,
so
I
bumped
docker
to
1903
13
for
let's
say
119.
A
There is also the precedent that we did it for the CVE: we bumped everything to 19.03.11. And we also do it for pretty much anything else, like CNIs and so on. So I don't think we should keep the container runtime at a fixed version. I think it should be fixed to the minor version, but the patch versions should be rolling based on the kops release. So if there is something new, then people should be upgraded to something new.
B
That's
probably
more
consistent
with
the
way
we
do
things
otherwise
and
as
long
as
we
do
it
on
the
cups,
chaos
minor
version
upgrades,
I
think
that's
fine
like
I
don't
want
to
start
doing
on
a
on
like
a
patch
release.
B
Oh
I
don't
know
I
don't
unless
it's
a
security
issue,
I
don't
do
it
on
a
patch
release
about
that
yeah.
So
I
I
agree
with
that.
Was
there
anything
but
out
of
interest
in
the
difference
between
11
and
13,
that
motivated
this
or
was
it
just
hydrene.
A
I
think
I
saw
something
a
bit
interesting
in
there.
One
of
the
things
was
that
they
changed
to
a
newer
container
d,
so
they
consider
that
container
d
one
they
moved
from
the
one
to
branch
to
the
one
tree
branch,
which
is
a
pretty
important
change
from
many
points
of
view,
and
also
there
were
some
well.
I
think
there
were
some
windows
things,
but
these
are
not
really
important
to
us.
I
don't
really
remember
all
the
patches.
B
Okay,
the
the
container
d
moving
to
a
bigger
minor
version,
is
sort
of
interesting
in
terms
of
our
rules.
Right,
like
that's
a
that's,
not
a
patch
change
anymore.
But
yes,
I
s
yeah.
Let's.
A
Question
if
I
managed
to
make
to
add
all
the
hashes
for
all
the
docker
versions,
would
it
be
okay?
So
if
anyone
wants
to
use
like-
I
don't
know,
1903
12,
because
I
will
move
to
tgz,
we
are
already
on
tgz.
So
basically
means
a
list
of
hashes
for
all
the
releases.
Would
it
be
okay
to
put
them,
so
anyone
can
use
any
docker
version.
B
Yeah, I think that's a purely additive thing and just a nicer user experience. I think we should also... I don't know whether you previously talked about being able to allow users to specify the hash alongside a version; I don't know whether you've done that.
A
That, and I think this week I will have the PR. And there is this other thing, to specify the version: I will allow specifying the version, but you will need to specify a hash. So if you want to provide a newer version for which we don't have a hash, provide the hash for it, and also provide the URL. So URL and hash.
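A minimal sketch of that proposal as described, with hypothetical names (this is not the actual PR): versions kops knows about resolve to hashes pinned at release time, and any other version is accepted only if the user supplies both a URL and a hash.

```go
package docker

import "fmt"

type asset struct {
	URL    string
	SHA256 string
}

// Pinned at kops release time, one entry per supported upstream release.
var knownDocker = map[string]asset{
	"19.03.13": {
		URL:    "https://download.docker.com/linux/static/stable/x86_64/docker-19.03.13.tgz",
		SHA256: "<pinned hash>",
	},
	// ...and so on for the other releases...
}

func resolve(version, userURL, userSHA256 string) (asset, error) {
	if a, ok := knownDocker[version]; ok {
		return a, nil
	}
	if userURL != "" && userSHA256 != "" {
		return asset{URL: userURL, SHA256: userSHA256}, nil
	}
	return asset{}, fmt.Errorf("docker %s is not pinned by kops: specify both url and hash", version)
}
```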
B
That
sounds
good
and
being
able
to
specify
1903.11
if
there's
some
incompatibility
with
13,
like
that
sounds
like
a
good,
reasonable
user
experience
in
the
event
that
something
goes
wrong.
So
that
sounds
good
and
like
yeah.
Similarly,
with
12
like
it
feels
like
a
good
thing
to
have
like
yes,
I
like
that,
don't
how
others.
A
Feel,
okay,
no
objections
and
the
next
one
is
again
mine.
Arm64
support
just
try
to
do
a
rebase
after
the
latest
changes.
Unfortunately,
I
can't
do
much
because
the
new
image
is
amd
64.
Only
then
oh,
no,
I
thought
the
whole
point.
B
Really
yeah:
okay,
that's
a
whoops!
Okay!
I
thought
we
moved
to
that
image
because
it
supported
arm64,
but
I
will
so
okay.
B
Okay,
I
mean
we
can
build
all
right.
I
will.
I
will
look
at
that
again,
I'm
sorry.
I
thought
I
was
actually
gonna
address
the
issue,
but
I
apologize
it
does
not.
I
will
one
thing
we
can
do.
Obviously
is
we
can?
I
can
see
whether
that's
a
google
maintained
image?
I
can
see
whether
google
can
build
that
for
arm
64..
I
can
see
whether
we
want
to
take
ownership
of
it.
B
I
don't
think
we
do,
but
that
it's
not
the
end
of
the
world,
but
and
we
can
look
again
for
other
images
that
might
be
available
on
all
architectures
and
support
it.
B
Yes,
I
mean
the
one
we
switched
to
the
marketplace.
Cloud
marketplace
is
not
in
the
kubernetes
repos.
It's
under
a
google,
some
google
github,
but
the
scripts
and
procedure
to
generate
them
is,
is
out
there.
So
I
felt
reasonably.
B
Yes, so I filed this issue basically to... I think we should reach a decision on what we want to do with this. One of the things we've historically done is infer the cluster name from the current kubeconfig, which is useful primarily for users interacting with the CLI directly, not running it in Jenkins or any other wrapper, for example. But it is a nice sort of...
B
I
guess
I
call
it
like
optimization
or
ux
accommodation
for,
for
that
makes
it
a
little
nicer
to
run
a
sequence
of
chaos
commands
the
we've
lost
some
of
that
today,
because
we
made
a
good
security
change
to
not
export
coupe
config
by
default,
and
so
what
that
means
is
there
is
no
coupe
config
from
which
to
infer
the
cluster
name,
so
you
have
to
the
so.
Currently
a
user
would
have
to
type
the
cluster
name
repeatedly.
B
There
is
a
workaround
which
we
can
pass,
I
think
admin
or
something
or
export.
I
think
it's
admin
as
a
flag
to
update
cluster
and
it
will
create
that
cube
conflict
for
you.
There
is
a
potential
longer
term
or
there's
a
there's
another
work
around
where
we
can
still
export
kube
config
without
exporting
a
forever
credential
by
making
use
of
some
currently
experimental
functionality
to
generate
a
credential
short-lived
credential
on
the
fly
and
renew
it.
B
When
it's
required
the
that
was,
I
wrote
that
it
wasn't
intended
to
ship
immediately,
it's
a
little
bit
more
involved.
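For context, one standard way such on-the-fly credentials can be wired into a kubeconfig is kubectl's exec credential plugin mechanism. A minimal sketch of that idea, not the kops implementation, with the certificate issuance left as a placeholder:

```go
package main

import (
	"encoding/json"
	"os"
	"time"
)

// ExecCredential is the JSON shape kubectl expects on the plugin's stdout
// (client.authentication.k8s.io/v1beta1).
type ExecCredential struct {
	APIVersion string `json:"apiVersion"`
	Kind       string `json:"kind"`
	Status     struct {
		ExpirationTimestamp   string `json:"expirationTimestamp"`
		ClientCertificateData string `json:"clientCertificateData"`
		ClientKeyData         string `json:"clientKeyData"`
	} `json:"status"`
}

// issueShortLivedClientCert is a placeholder: in practice this would sign a
// client certificate against the cluster CA kept in the kops state store.
func issueShortLivedClientCert() (certPEM, keyPEM string) {
	return "<PEM cert>", "<PEM key>"
}

func main() {
	certPEM, keyPEM := issueShortLivedClientCert()

	var cred ExecCredential
	cred.APIVersion = "client.authentication.k8s.io/v1beta1"
	cred.Kind = "ExecCredential"
	cred.Status.ExpirationTimestamp = time.Now().Add(15 * time.Minute).UTC().Format(time.RFC3339)
	cred.Status.ClientCertificateData = certPEM
	cred.Status.ClientKeyData = keyPEM

	// kubectl re-invokes the plugin once the credential expires, so nothing
	// long-lived is ever written to the kubeconfig.
	json.NewEncoder(os.Stdout).Encode(cred)
}
```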
B
So
those
are
our
sort
of
options.
I
would
say
I
wanted
to
get
agreement
that
we
are
okay
with
sort
of
the
current
way
it
currently
works,
and
we
might
need
to
tweak
some
of
the
either
we
need
to
like
continue
to
export
some
variety
of
coop
config
or
we
need
to
say
that
the
new
user
experience
for
119,
possibly
not
for
120
if
we
get
in
the
dynamic
generation
but
for
119
at
least
we'll-
require
you
to
specify
the
cluster
name
every
time
or
for
us
to
recommend
the
admin
flag.
B
So this does suggest an option, which is that we do this and we say: yes, we can export it with no credentials. You can pass an option to say use the experimental generation; you can say, I don't know how you activate OIDC, but that sort of behavior; and you can activate the legacy mode. And the nice thing about this is that then the behavior doesn't change too much between the versions, like 1.18, or at least 1.17.
B
So we might need another command, which actually is not a bad thing. When I think about the sort of discovery flow, we might need to add another command temporarily to say: export your kubeconfig credentials using this command. And then we can have a nice discussion of: here we'll do a static credential in this guide, which isn't the most secure; you might want to consider OIDC, link; and you might want to consider dynamic, short-lived credentials, link; right, experimental, maybe beta in 1.20, or something like that.
B
I,
like
that
approach.
Okay,
if
other
people
agree
with
that
idea,
I
can
implement
the
always
export,
but
not
always
export
credentials
so
that
we
have
that
behavior.
We
don't
change
the
behavior
in
terms
of
inferring.
The
clustering.
D
Yeah, so in 1.19 I changed the node bring-up, our bootstrap, to use kops-controller as the way of getting the certificates.
D
The
problem
that
we
found
is
that,
if,
when
you're
upgrading
from
an
earlier
cops
to
cops
119
as
soon
as
you
start
as
soon
as
you
apply
your
cluster
or
as
soon
as
that
new
cops
controller,
manifest
gets
applied,
it
will
then
rolling
update
that
deployment
or
sorry
that
daemon
set
and
all
of
the
cops
controllers
on
your
old
control
plane
nodes
will
fail
to
come
up
because
they
don't
have
access
to
files
that
the
new
version
of
nodeup
provisions,
and
so
your
control
plane
notes,
fail.
D
Cluster
validation,
so
rolling
update
will
not
proceed
so
now
on
non-aws
there
was
a
workaround
where
you
could.
You
know
sort
of
make
the
directory,
so
it
could
not
directory,
but
on
aws
you're
going
to
actually
need
secrets.
Provisioned
for
that
for
cops
controllers
come
up.
D
So
I'm
not
quite
sure
what
the
solution
is.
We
might
have
to
release
note
that
you
have
to
do
a
cloud-only,
rolling,
update
of
your
control,
plane
or-
and
I
don't
know
if
we
can
put
the
daemon
set
in
if
we
put
the
damon's
head
in
delete,
update
mode
or
on
delete,
whether
that
would
take
effect
early
enough
to
not
break
the
old.
B
And so this manifests... I'm sorry, which nodes are we... I need to read this issue in more detail.
D
So
the
issue
is
the
cops
controller
is
deployed
using
a
daemon
set,
so
the
daemon
set
gets
updated
first,
which
then
causes
it
to
try
to
provision
the
new
cops
controller
pods
on
the
old
control
control,
plane
nodes,
and
that
fails
because
it's
trying
to
host
mount
uos
mount
volumes
for
files
that
aren't
provisioned
because
you
have
it.
B
Two
things
come:
two
approaches
come
to
mind,
one
of
which
is
to
tolerate
this,
which
isn't
great
tolerate
this
in
the
in
the
validation
logic
I
say.
Actually
this
is
not
really
a
problem
that
seems
like
it
might
mask
something,
but
it's
an
option.
The
other
one
is
to
use
a
label
selector.
So
we've
done
this
elsewhere.
B
I
think,
like
the
fluenty
had
a
ds,
ready
and
a
node
label
so
that
you
we
would
effectively
label
the
nodes.
I
guess
with
some
form
of
label
such
that
we
wouldn't
only
target
the
newer
controller
to
the
newer
nodes
that
does
require
multiple
daemon
sets.
It's
a
bit
of
a
pain,
but
it's
a
pattern.
We
can
then
reuse
when
we
do
more
of
this,
as
I'm
sure
we
will.
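A sketch of that label-selector pattern, using the Kubernetes Go API types; the label name here is hypothetical, and kops's real manifests are templated rather than built like this. The idea is that the new nodeup applies the label to the control plane nodes it provisions, so the new daemonset never lands on old nodes that lack the new PKI files.

```go
package manifests

import (
	appsv1 "k8s.io/api/apps/v1"
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func kopsControllerDaemonSet() *appsv1.DaemonSet {
	podLabels := map[string]string{"k8s-app": "kops-controller"}
	return &appsv1.DaemonSet{
		ObjectMeta: metav1.ObjectMeta{Name: "kops-controller", Namespace: "kube-system"},
		Spec: appsv1.DaemonSetSpec{
			Selector: &metav1.LabelSelector{MatchLabels: podLabels},
			Template: corev1.PodTemplateSpec{
				ObjectMeta: metav1.ObjectMeta{Labels: podLabels},
				Spec: corev1.PodSpec{
					NodeSelector: map[string]string{
						// Hypothetical label, applied only by the new nodeup,
						// so old control plane nodes are skipped entirely.
						"kops.k8s.io/kops-controller-pki": "",
						"node-role.kubernetes.io/master":  "",
					},
					// ...containers and hostPath volumes for the new PKI files...
				},
			},
		},
	}
}
```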
B
Actually, I would... could we use the same strategy, where we just change the label... yes, include the version in some way? The version's not ideal, but something like that: basically have a new label and a new node selector, and use the same trick.
D
So probably, yeah, probably more of a capability-type label. I don't know, I'm not sure, yeah.
B
Fluentd used ds-ready, which is fine, but yeah... I also don't like that it's sort of there now forever; it'd be nice if it rolled off, but yeah, I don't know.
B
If
it
works,
I
think
that's
reasonable,
I'm
I
I
I
need
to
think
a
little
about.
B
I'll definitely take a look at this issue, particularly as it blocks... I agree it should block, and it is marked as blocking. And yeah, I think the node label feels like the most...
B
Most
likely
to
be
yes,
it's
most
likely
to
be
reasonable,
particularly
because
the
other
thing
I
suggested,
the
ignoring
is
not
going
to
be
any
more
reliable.
It's
just
going
to
be
that
we're
we're
ignoring.
A
Cool, sounds even better. Okay, next: Peter, NLB support.
C
So this was... we decided...
C
This
was
our
most
likely
work
around
for
the
ca
acm
certificate
issue
that
stemmed
as
a
result
of
us,
disabling
basic
auth,
and
so
that's
now
totally
gone
in
119,
and
so
in
order
for
anyone
with
a
acm
cert
on
their
api
load
balancer
to
be
able
to
authenticate
with
cops
credentials,
we
need
some
way
of
providing
a
tcp
listener
if
we're
using
client
certificate
authentication
for
the
cops
comps
credentials
keep
config
that
it
generates,
so
the
nlb
seemed
to
be
the
most
reasonable
path
forward.
C
For
that
this
is
a
pretty
big.
Mr
pull
request,
I'll
I'll
admit.
I
haven't
had
a
chance
to
review
it
recently,
but
I'm
thinking
that
this
will
need
to
go
into
119,
which
is
a
bit
unfortunate
for
how
big
it
is
and
how
significant
of
a
change
it'll
be.
So
if
anyone
else
can
provide
input
on
it,
that
would
be
much
appreciated.
C
I
guess
my
concerns
are
specifically
the
migration
path
and
if
we
can
avoid
downtime,
which
would
likely
involve
needing
to
be
running
both
an
elb
and
nlb
at
the
same
time
and
what
that
would
look
like
from
the
api
design
or
other
you
know,
cops
update,
commands
that
would
need
to
be
involved
with
doing
that.
So
if
anyone
else
has
any
other
concerns
or
anything,
it'd
be
nice
to
help
move
this
forward.
D
The concern is he's having a problem with the health checks, because, well, it doesn't support HTTPS health checks.
F
Right, and then cleanup is usually done first, so perhaps the order could be changed so that cleanup happens... I mean, there's a reason to do cleanup first, I suppose, but we could do cleanup after the NLB is ready as well.
A
One
other
idea
would
be
to
not
make
it
default,
so
I
agree
with
peter
that
is
needed
for
the
use
case
of
I
don't
know,
custom
search,
but
generally
people
won't
really
care
if
it's
nlb
or
classic
elb
like
it
is
now,
so
we
can
have
it
as
an
option
and
document
the
downsides.
F
The fact that the master nodes need to be rolled to pick up the new cert, and then the other issue is... at least that's what I remember it being; I'll double-check, perhaps this weekend, one more time. But I remember I had to roll the masters last time, and then also... sorry about that.
F
Also
there
was
a
there
was
an
issue
with
the
so
yes
rolling
rolling.
The
masters,
I
believe,
is
one
other
downside
to
it,
and
I
guess
that's
the
only
thing
I
think
of
right.
This
second.
B
We could... I wonder if we could say, like, in the "both" case...
F
With the value of both, I guess you would go to whichever, which, I mean, if they both work, then it's arbitrary. But if you want no downtime, then I think the decision's made to... yeah, I don't know what the negative is to point from one to the other, but we could...
F
I
would
have
to
see
if,
if
I
point
to
the,
if
I
create
an
nlb
and
then
there's
gonna
be
downtime,
then
we'll
keep
it
on
the
elb.
If
it's
arbitrary,
then
obviously,
if
they're
going
from
they're
gonna
go
to
both
and
I
how
do
you
decide?
Yeah,
that's
a
good
question.
B
We
could
have
a
comma
separated,
it
might
be
interesting
to
try
a
comma
separated
list
right
or
or
a
slice.
I
don't,
but
and
then
that
allows
for
the
user
to
express
it,
and
we
would
document
the
procedure
for
a
zero
downtime
switch,
I'm
just
it.
It
might
be
overkill
right.
I
don't
know
like.
C
So, I made a lot of good progress on this. I'm currently blocked by the fact that the user data for Bottlerocket needs access to the CA certificate, the cluster's CA, and so this is kind of a new dependency for the user data, because previously it didn't need to know the CA. I'm trying to follow the model that Justin followed with the OIDC provider now needing to know the CA, but I'm still having issues with how the fi tasks, all the rendering and the dependency tree, are built.
C
So
I'm
I'm
getting
close
to
having
support,
but
that's
currently
my
blocker.
So
if
anyone
can
comment
on
it,
that
would
be
great.
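For background on that dependency tree: kops's fi task framework derives task ordering by reflecting over task struct fields, so a task gains a dependency simply by holding a reference to another task. A much-simplified, self-contained sketch of that pattern (hypothetical types, not kops code):

```go
package main

import (
	"fmt"
	"reflect"
)

type task interface{ Name() string }

type caKeypair struct{ name string }

func (t *caKeypair) Name() string { return t.name }

type userData struct {
	name string
	CA   *caKeypair // referencing another task creates an edge in the graph
}

func (t *userData) Name() string { return t.name }

// dependencies walks a task's exported fields and collects any that are
// themselves tasks, mirroring how the dependency tree gets built.
func dependencies(t task) []task {
	var deps []task
	v := reflect.ValueOf(t).Elem()
	for i := 0; i < v.NumField(); i++ {
		f := v.Field(i)
		if f.Kind() == reflect.Ptr && !f.IsNil() {
			if dep, ok := f.Interface().(task); ok {
				deps = append(deps, dep)
			}
		}
	}
	return deps
}

func main() {
	ca := &caKeypair{name: "ca"}
	ud := &userData{name: "bottlerocket-user-data", CA: ca}
	for _, dep := range dependencies(ud) {
		fmt.Printf("%s depends on %s\n", ud.Name(), dep.Name())
	}
}
```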
B
I'll definitely take a look, yes, I will. Oh, the... yeah, okay, the dry-run problem: we added "is ready" or something like that; we had an interface for the dry-run problem. I thought it was generic, but it might not be generic; there should be an approach there. But it sounds like there's a deeper problem, not just dry run, right? Or is it just dry run?
B
Yes, but if I recall correctly, it doesn't print the full data on a create. It sort of works out: the second time through it will print a nice diff, but the first time, I think, it just doesn't print it, because it would be too big anyway, so it just doesn't output it.
D
Okay, well, on Bottlerocket, I mean, I do have a concern: it completely cuts out nodeup, so it's a very different animal from anything else.
B
And one of the things I really like about the NLB PR, by the way, is that we sort of buried the lede: it adds support for NLB, right? It's not just a workaround for this; anyone that wants to use an NLB can use an NLB, which is really good. That's awesome. AWS is pushing NLB pretty hard and trying to get rid of ELB, and yeah, so that is a big, a big win.
B
So
getting
it,
in
my
view,
is
between
those
I
don't
put.
I
don't
put
nine
nine
nine
zero
in
the
same
bucket,
the
coupe
completely
in
the
same
bucket,
but
between
those
two
I
sounds
like
we
will.
We
are
unlikely
to
do
a
beta
in
the
next
two
weeks,
but
I
I
say
we
put
nine
nine
sorry,
nine
zero
one,
the
nlb
support
in
it
or
john's
typing
faster
than
I
can
do
so
put
it
in
as
like
at
least
a
yeah,
a
blocker
or
a
consider
treat
as
a
blocker
and
evaluate.
A
Okay, and also John's issue, 9953.
B
Yes, 9953, yeah. I mean, 9953 and 9901 are fairly big things, significant challenges, that are going to benefit from the extra two weeks. 9990, hopefully, won't be too much work.
A
Okay, sorry, I have just two things, small ones. If you can look at the chat... since a month or two ago, I don't know who made some changes around it, I keep seeing the load balancer for the API server go from false to true on any update. So I tried to look into it, but...
B
I think it's in the code; it's spurious updates. I thought I'd seen this. If you open an issue, it's an easy fix. We just need to, when we do the Find method, make sure that the "for API server" in the... I don't...
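The usual shape of that fix, sketched generically with made-up types rather than the actual kops task: when Find reads the actual state back from the cloud, fields the provider defaulted need to be normalized against the expected spec, otherwise the diff engine reports a change on every update.

```go
package tasks

type loadBalancer struct {
	Name            string
	UseForAPIServer *bool // nil means "use the provider's default"
}

// findActual simulates a task's Find: it takes the state read from the cloud
// and mirrors defaulted fields from the expected spec, so an unset-vs-default
// difference doesn't show up as a spurious change on every update.
func findActual(expected, fromCloud *loadBalancer) *loadBalancer {
	actual := *fromCloud
	if actual.UseForAPIServer == nil && expected.UseForAPIServer != nil {
		actual.UseForAPIServer = expected.UseForAPIServer
	}
	return &actual
}
```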
A
And
another
one,
I
noticed
two
or
three
cases
where
people
had
problems:
upgrading
atcd
to
newer
version,
so
they
were
coming
from,
let's
say:
3.2
or
something
or
3.2.
What
doesn't
matter
and
the
upgrade
to
3
4
sum
343
was
stuck
one
of
the
notes.
Not
the
leader
was
three
four
three
and
the
other
two
didn't
want
to
upgrade.
B
...the etcd-manager repo, I think, and then... but you can put it in the blocker list referencing that one. I think that's fine, because there's also another one in etcd-manager I need to check, where someone was saying... I think you were saying... but yeah, there's a couple of etcd-manager issues I need to dig down and dig through.
A
Okay,
I
think
that's
pretty
much
it
for
this
week.
I
guess
we
will
meet
again
on
thursday.
B
Yes,
thursday
at
9
00
a.m.
Eastern
for
anyone
that
wants
to
attend.
We
will
try
to
do
some.
Oh,
I
guess
we
didn't
talk
about
oktoberfest,
but
I
want
to
sort
of
at
least
participate
in
that
if
anyone
else
wants
to
they're
very
welcome
to,
but
I
was
gonna
kick
that
off
and
see
if
people
wanted
to
show
up
and
please
publicize
it
in
whatever,
whatever
forms
you
want
to.