From YouTube: kubeadm backlog grooming 2019-07-18
1.15: we don't have anything open. Okay, 1.16. There are a lot of items in 1.16, so let's see what was added since last time. This is something that Revell... what is it? Basically, the action item is for him to decide if he wants to modify upgrade apply. So do you want to discuss this here, Fabrizio, or should we move to the next item?
One of the nodes, like one of the control-plane nodes, performs upgrade apply. That is what it is doing right now, and then on the next iteration, on another of the control-plane nodes, it may actually try to rerun upgrade apply instead of running upgrade node. Yeah, so we have to be careful to avoid these kinds of bugs here.
We are handling 500 statuses, which is kind of... We were planning to handle more statuses, but first of all, Alexander (Sasha) doesn't have much time to work on this, so maybe we should just take the current state of the PR. Last time I checked, it simply removes the body check, and I think we should just merge it to solve the problem. Eventually, if Sasha has the time, he can send a separate PR for gateways and stuff.
Yeah, so Tim gave me sort of a semi-approval to go with the client version early. If the user provides a label, this is an indication that they want to use a version from the internet. I'm probably not going to have the time for this, but maybe, like, Fabrizio... We heard a bit today that we have one more month for features, so maybe I can just send a PR for this already and get rid of the defaults.
Well, the gateway in China apparently returns a 502 and some unnamed body in the response, which is actually a sane thing to do when you're operating a web browser. But in our case we actually tried to parse the 502 body, found out that there is no version inside it, and then dumped an error, when we actually should have fallen back to the kubeadm version.
Also, if something along the line between you and, you know, the images that are at Google deviates and returns something strange, maybe returns a body with an error in it... I mean, anything could happen. Our logic can get very complicated because of these special handlings we are doing right now. I think that fetching from the internet should only be on demand. kubeadm is the first software I see that defaults to remote artifacts like that. It's very strange anyway.
We also have to start kicking some stuff out, because we don't have time for everything. So, security: we discussed this last time. Also rotating the controller-manager, we discussed it as well. Non-root: this is actually ending up very complicated, much more complicated than expected. I posted a summary, and the TL;DR is that we cannot use fsGroup with hostPath, and the way to solve it is to have an init container, and it's a mess.
So, signing the kubelet serving certs: this one, honestly, I don't think we're going to find a solution for, but Tim also started bringing this up internally, so internally we have a ticket for this, I think. [inaudible]
There are a couple of solutions, and basically Nadir... I'm not sure he's going to have the time, but he promised to write docs, and this is basically the tracking issue for the docs. Internally also someone reported it; that was my point, a lot of people are reporting it. Basically, Nadir promised to write docs for this, because one of the ways he is proposing I don't understand exactly. I only understand one of the workarounds, which is not exactly pretty: you have to manually sign the kubelet serving certificate with the root CA.
But unfortunately the label is kind of misleading in Kubernetes. Okay, so this is the same... This is the docs, basically, for a feature gate to enable a DNS cache. Tim started pushing for this; I'm not exactly sure why, because we haven't seen that many user requests, we occasionally see somebody. Maybe I should investigate locally how to enable a DNS cache without any modifications in the code of kubeadm and basically treat it as an add-on. All you have to do is deploy a daemon set with the cache, and maybe it is going to work.
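The add-on approach described here (deploy a caching DNS daemon set yourself, with no kubeadm changes) might look roughly like the following. This is a minimal sketch, not a tested manifest; the names, image tag, and link-local listen address are all placeholder assumptions:

```yaml
# Hypothetical node-local DNS cache deployed as a plain add-on.
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-local-dns
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: node-local-dns
  template:
    metadata:
      labels:
        k8s-app: node-local-dns
    spec:
      hostNetwork: true
      containers:
      - name: cache
        image: k8s.gcr.io/k8s-dns-node-cache:1.15.4   # placeholder tag
        args: ["-localip", "169.254.20.10"]            # placeholder listen address
```

Applied with kubectl apply -f, something like this would run a cache pod on every node without touching kubeadm itself, which is the point being made.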
Yes, I am definitely leaning towards this being an add-on, not enabled by default. I still have to ask Tim why we are pushing for this to be enabled by default. I know that [inaudible] by default, but I don't see a reason to enable this for all users, in my opinion. It's not essential.
Yes, so basically, at least in my opinion, we should enable tweaking the settings of the DNS. So it's basically tweaking a few of the things that we already deploy inside of the DNS add-on, and enabling the user to actually use kubectl apply to deploy an additional CoreDNS instance, like a daemon set of CoreDNS.
Tim basically talked about a feature gate for this, which is a pretty core implementation. Instead of a feature gate, I think the user can already, like, deploy the DaemonSet object, so I'm going to test it locally. If it works without modifications to kubeadm, I'm going to propose to write docs instead of this and treat it as an add-on.
It has to be in a shared location. It cannot be in k/k, because if you move kubeadm out of k/k... But, like, can we even get rid of it on our side? I don't think we can at this point. So basically that's the system validators package. Let me show it... I don't think it's possible to get rid of it; basically our system preflight checks depend on this.
Here one question: is the Docker validator the only thing that is used by node e2e? No, no, actually they use everything, like we do. I discussed this as well when I opened a PR on utils: Tim Hockin suggested having a kernel package under that repo. So maybe, if we rework the API a bit to be agnostic and low level, we can move these out, like have the Docker one in a shared location and all the stuff that is related to the kernel under utils. The problem is that they use a common interface that is called the validator, which means that the validator has to be in a shared repository, and then we have to have another couple of repositories, one for Docker, one for kernel. We are starting to split things a lot, and if we can get rid of this in kubeadm, this is going to be amazing; this is like the ultimate solution. Yeah.
For the validators? No, no, like, he suggested on a PR to have a kernel package in utils, when I tried to move some of the IPVS stuff. So it's more of a suggestion from me: if it's related to the kernel and it does kernel-related checks, utils might be a good location for it, since we're trying to have a package there. Aha.
So, is utils a good location for all these validators? Because they are definitely not all kernel-related; there are some for Docker. So yes, yeah, Docker is not suitable for there. I think that Dims, [inaudible], and me, when we spoke about it in the Testing Commons meeting, all agreed on this location. Now, if the SIG is not happy with this, or, like, folks like Tim Hockin...
But, like, I'm not against moving this to this repo; I would still like to first rework the code internally, and then decide where to move it. Yeah.
When we released 1.15 we had a release note that said, hey, by the way, we now support concurrent join, but at the same time we don't have a test for it. So it's really a question whether we should push this as a higher priority this cycle, or we can close our eyes and say: hey, we're going to have a test for that next cycle, maybe.
Yes, I actually sent a PR for kind to enable concurrent control-plane node join, and I saw a flake that I don't have the time to debug. Also, something else that I saw is that I don't see performance improvements, which is super strange, so I was thinking that maybe kind has synchronization points hidden somewhere in the logic.
[inaudible]? Yes, that is true. I'm talking about, like, the secondary control-plane nodes: they're supposed to join concurrently, but when the kind create command exits, something strange happens: all the pods are already up, which shouldn't be the case. Something is waiting somewhere, yeah.
And that's why I'm not seeing the performance improvements. But anyway, we can leave it for the next cycle; I mean, it's kinda important, but not super important. So, this is a hairy topic that I really want to skip, because we only have, like, 15 minutes: it's about removing the [inaudible] from the kubeadm config, and I don't think we're going to have the time for this cycle. So let's move it to the next one.
So basically, this is something that Tim requested for us to work on, because he did not like, like, the retry logic in kubeadm. You know, we have a couple of camps of people here. I am in the camp of Jordan, where Jordan is saying, basically, that one-time executions like kubeadm should potentially always have retry logic. Tim, on the other hand, is saying that maybe we have problems in the API server, so we should not retry; with retries we are always trying to paper over problems. So this is...
I'm just wondering... Unfortunately, our test infrastructure is not really sophisticated, and this is part of the reason for this problem. Let me give you an example: in production, typically, before upgrading an API server, they remove it from the load balancer, then they do the upgrade, and then, when the upgrade is finished, they put it back again in the load balancer.
No, they are not modifying anything else, only the load balancer, okay. And this is a good practice, and we are not doing it. That means that we are hitting the API server while it is upgrading, which is not ideal. So I agree with Tim that the API server is not answering properly in that specific time frame, and typically [inaudible]. Yeah.
It is slightly different; it is easy: I have a cluster with three control planes under a load balancer, and I started thinking I should do the upgrade in a more sophisticated way. For instance, before upgrading a control plane, I should remove it from the load balancer, do the upgrade, and then put it back under the load balancer. So why don't we have this in the docs? Because in the official docs we don't say anything about that for HA setups, but also we don't [inaudible].