From YouTube: SIG Cluster Lifecycle - kubeadm office hours 2021-11-10
A
Yes, so I suspect this particular failure, the one that is really related to kube-proxy, comes from... I think this first PR.
A
Let me check. Yes, this is the other one. No, actually this is the same one. This is the kube-proxy service account.
A
This is the potential change in constants that is resulting in this failure: the kube-proxy, the node proxy, is broken. The other failure that I'm seeing is the component config.
A
And this I should fix, but other than that I don't see anything else failing, so I'm going to send up a couple of PRs, like the one for kube-proxy. I don't understand what's happening here. Something is wrong, which probably means that we don't have sufficient unit tests to capture these problems.
A
I think you're talking about the issue. Where is this... someone?
B
So I think our kubeadm office hours has been rescheduled to one hour later or something. It was earlier, like 9:30 p.m. for me, and now it starts at 10:30, so it's maybe daylight saving or something.
B
Not an issue, I was just wondering what the reason was. Yeah, you know, the time was the same.
A
Yeah, you don't have to join that late. I mean, it's not super important. So, I already replied to this issue. I think that's...
A
The TL;DR is that I think the change is contentious. It makes sense to have it, definitely, but I think it's complicated, and, as one of the people that are active in maintaining the code base, I don't want to have this, because the stacked etcd setup is very fragile at this point. We don't want to touch it. We actually write the endpoint inside the API server pod so that the API server can connect to the local etcd.
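For context, a minimal sketch of what this looks like in a stacked-etcd setup: kubeadm renders the local etcd client endpoint directly into the kube-apiserver static pod manifest via the `--etcd-servers` flag. The exact image tag and the omitted flags are assumptions; only the fragment relevant to the discussion is shown.

```yaml
# Sketch of /etc/kubernetes/manifests/kube-apiserver.yaml (abridged).
# kubeadm hard-codes the local etcd client URL here, which is why the
# endpoint and port are hard to make configurable after the fact.
apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  containers:
  - name: kube-apiserver
    image: k8s.gcr.io/kube-apiserver:v1.23.0   # tag is illustrative
    command:
    - kube-apiserver
    - --etcd-servers=https://127.0.0.1:2379    # local stacked etcd
    # ...many other flags omitted...
```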
A
If we support this, we have to take the value and pass it to this annotation inside the pod, and we also have to modify the preflight checks, because the preflight checks are also tied to a particular port, which means it's a pretty big change. And honestly, if the purpose is to allow multiple etcd instances to run on the same host, I think it's a really bad idea.
A
Yeah, I agree, because, as I explained, you have to separate the concerns of the data directory and the certificates if you're running multiple etcds. And yeah, if this particular host is slow, etcd will start failing. Actually, it will stall reads and writes if somebody has put a lot of pressure on the cluster.
A
Yeah, I mean, if this particular host is very fast it doesn't matter, but I think in general we don't want to support this scenario, and because of that, I think... that's my judgment. Of course, I didn't talk to anyone, I just closed the issue, but I think that's the thing to do at this point.
A
But it is very, very messy. I don't understand how they are managing high availability with this. If they want a single etcd instance, that's fine. But if you have three, you have to manually restart them one at a time to upgrade, and if you have multiple etcd instances running on the same host... on all those hosts...
A
I mean, we can have a look at the backlog, but I'm going to pretty much move most of these items to the next milestone at this point, because code freeze is in like five days.
A
We have some help-wanted items, but you know, if nobody is taking action items, we, the existing kubeadm roster, will do our best to take as much as we can.
A
And, I mean, that's pretty much it. Do you have any specific issue from here that you want to discuss?
A
Yeah, that's pretty much the summary of what we're going to do in this cycle. I see Scott... Scott, do you have any topics for today?
A
All
right,
so
we
just
discussed
the
some
failing
entrances
that
I'm
going
to
try
to
fix
later
today
and
rohit
wanted
to
discuss
a
particular
issue
that
we
call
sterling
about
customs
reports,
but
yeah,
that's
pretty
much
it
so
unless
we
have
anything
else
last
minute
we
should
probably
end
the
meeting.
A
All
right,
if
you
have
any
topics,
please
wish
to
be
offline,
we
can
continue
the
discussion
on
swagger.