From YouTube: Kubernetes SIG Node CI 20230830
Description
SIG Node CI weekly meeting. Agenda and notes: https://docs.google.com/document/d/1fb-ugvgdSVIkkuJ388_nhp2pBTy_4HEVg5848Xy7n5U/edit#heading=h.2v8vzknys4nk
GMT20230830-170457_Recording_2560x1440.mp4
A
Recording now. Hello everyone, this is the SIG Node CI meeting; today is August 30, 2023. Let's start with the agenda. First... okay, looks like we don't have any agenda items today, so I'll go ahead. Before we start, does anybody have anything they want to talk about? If not, we can jump into the triage.
C
Yeah, I was just wondering if anyone has done this kind of testing. Okay, so I'll give you the scenario we observed. We have a certain workload for performance testing, and it is a moderate workload: one PostgreSQL pod and some clients communicating with each other. What we are observing is: if you do stress testing and create the pods in the same namespace, you will see more CPU usage per node than if you have...
C
So let's say you have a workload like this: there is a pod with a PostgreSQL server and there's a client. It's a test workload, and the job keeps creating pod after pod until you reach capacity. If you take these pods and deploy them all in one single namespace, we observe that each node uses a larger amount of CPU versus if you, let's say, take these same pods, distribute them among three or four namespaces, and run the job.
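
(A minimal sketch of how this comparison could be reproduced with the official Kubernetes Python client; the pod spec, namespace names, and pod counts are illustrative assumptions, not the reporter's actual workload.)

```python
# Hypothetical reproduction sketch: create the same number of pods either in
# one namespace or spread round-robin across several, then compare per-node
# kubepods.slice CPU usage externally (e.g. via cgroup stats).
from kubernetes import client, config

def create_pods(total: int, namespaces: list[str]) -> None:
    config.load_kube_config()  # assumes a reachable test cluster
    v1 = client.CoreV1Api()
    for i in range(total):
        ns = namespaces[i % len(namespaces)]  # round-robin over namespaces
        pod = client.V1Pod(
            metadata=client.V1ObjectMeta(name=f"stress-{i}", namespace=ns),
            spec=client.V1PodSpec(containers=[
                client.V1Container(name="c", image="registry.k8s.io/pause:3.9"),
            ]),
        )
        v1.create_namespaced_pod(namespace=ns, body=pod)

# Same pod count, different namespace spread (namespaces assumed to exist):
# create_pods(2000, ["stress-a"])
# create_pods(2000, ["stress-a", "stress-b", "stress-c", "stress-d"])
```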
C
So the conclusion we are drawing is that if you take some set of pods and club them together in a single namespace, the per-node CPU usage is higher. The pods are distributed across different nodes, by the way. But somehow I'm not able to pinpoint what could be taking more CPU. The CPU usage comes from the kubelet's kubepods slice; the spike comes from there. So I'm wondering if there's any known issue here that I'm missing. I actually have something I can show you.
C
Let me share my screen. "Screen share disabled; host disabled attendee screen sharing." Can you make me host or something?
C
But afterwards we tried to use multiple namespaces, and the CPU usage per node was significantly lower here, and in the next test as well. So I'm trying to debug why it takes more CPU here as part of the CI job. The only difference between this one and this one is that here all the pods are in a single namespace, and here there are multiple namespaces.
A
The kubelet CPU... the spike comes from, for example, this kubepods slice under the kubelet. Most of the spike is coming from there; other parts, like the system slices, are normal. In this case we used CRI-O, and CRI-O wasn't taking more CPU; it's just this kubepods slice, which means the pods themselves were taking more CPU.
C
Nothing, it's simple. We have something we call node density: just a simple pod with one PostgreSQL server, and there's a corresponding client, and this client makes a simple request, and it keeps going like that.
E
Do you know if... so this test is churning through pods, so it creates and removes them? I just saw this spike, a very tiny spike, and I wonder if it will continue. In the first test it spiked in the middle but then went down quite fast, whereas in later executions it goes lower but lasts longer, at least from observation.
C
So you start... it's a teardown. We define the maximum number of pods we want to run, and it probably reached that, and then during testing someone just started tearing down because it reached that value; in the other cases they might have just kept it running for some time. So you start the pods...
C
You
say:
I
want
to
run,
say
2000
parts
per
node
and
you
start
the
test
cases
and
it
starts
spawning
the
parts
everywhere
you
wait
when
the
number
is
reached,
and
then
you
start
a
manually
tearing
down
and
the
so
someone
manually
tear
down
it's,
not
it's
not
a
characteristic
of
a
pod
usage.
B
Okay, sorry, I had to step away earlier, but yeah, I think we talked about this some time ago. The idea is kind of straightforward: we define a grid of tests.
B
Let's say we want to run a scenario like our classic conformance scenario, the one we run on pull requests and every day, but against, let's say, two CRIs, four operating systems, and two architectures. Today somebody writes all those jobs out by hand, which is very tedious, and if you want to introduce a new operating system it gets a bit long. So I wrote a generator in Python; there's some templating, and it spits out the tests.
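
(A minimal sketch of the matrix expansion such a generator could perform; the dimension values and job-name scheme are illustrative assumptions, not the actual generator from the PR.)

```python
# Hypothetical test-grid generator: expand a matrix of CRI x OS x
# architecture into one CI job name per combination.
import itertools

MATRIX = {
    "cri": ["containerd", "crio"],
    "os": ["cos", "ubuntu", "ubuntu-gke", "al2023"],
    "arch": ["amd64", "arm64"],
}

def generate_jobs(scenario: str = "node-conformance"):
    keys = list(MATRIX)
    for combo in itertools.product(*(MATRIX[k] for k in keys)):
        params = dict(zip(keys, combo))
        name = f"ci-{scenario}-" + "-".join(params[k] for k in keys)
        yield name, params

for name, params in generate_jobs():
    print(name)  # the real generator would render a full job config here
```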
B
So it's ready to be merged. There are a few things left in the wrapper, but other than that the tests are functional. If you go to "Files changed" and look at the grid, you will see some jobs in there that are very familiar, and the intent is that these go green straight away. I'm planning on deprecating the other jobs that overlap with this one. It works on both AWS and GCP, and all these jobs are using kubetest2, by the way.
A
Okay, this is really nice. So your script generates combinations for all of EC2 and GKE and...
B
The images that we use right now are COS, Ubuntu GKE, AL2023, and then vanilla Ubuntu.
B
We test against containerd, but I want to expand that to CRI-O as well. That's going to come at a later date.
B
One more thing: if we scroll to the top a little bit, we can see the matrix definition.
B
Now to the build_jobs.py file... okay... scroll down a little bit more, okay.
A
We might... that's another option. We can also try to run... we've also got our own images; we're going to do that there as well.
B
One thing I'm not a big fan of is that image config thing; it's a bit annoying.
B
Yeah, if you scroll down a little bit... go back to build_jobs.py a second, okay. So this is not strictly related to the generator, but in general: when I created the kubetest2 support for the EC2 stuff, it got support for image config definitions, where you define the image that you want.
B
The machine type and the user data you're going to use. But one thing I did do was allow someone to pass all those flags, so you'd create one job per OS with the flag for the machine type you want and the user data to launch the instance with. The benefit of that is that you'd have one job per OS, which is really good, because right now our conformance tests actually run, I guess, two or three different OSes inside one job, which makes it a bit tricky to see what's going on, and when it goes red, it goes red.
A
Yeah, this is amazing. Let me make myself a reviewer. I think [inaudible] — you might also be interested in reviewing this.
E
Yeah, I mean, it's great. I think the one piece of feedback I have is that we need to make sure we're testing the right permutations. We don't want to test every test in every environment; the idea was that upstream tests need to cover upstream functionality, so we need to make sure we're covering enough permutations in that sense. We have this document that Alex and Dixie wrote about permutations, where we wanted to split the test grids into...
B
Right, the grid up there assumes that cgroup v1 is a special scenario and all test cases run by default on v2.
E
Oh, that's great, okay. And yeah, we also may need to do this extra mapping for previous releases, like what Mike started, where every previous release has a separate image config file, so we lock down the image version that we tested when it was released. We don't want to introduce new problems just because the image got updated for previous releases of Kubernetes.
E
So that may be an interesting adjustment to these tests.
B
I think, yeah, we can do that. Let's say, for example, we released Kubernetes 1.25 with Ubuntu 20.04; during the life cycle of 1.25's testing we're not going to try to run 22.04, for example, because we didn't test against that when it was released. That OS has, what, five years of support ahead of it, which is much longer than the release window you need to support 1.25 for, yeah.
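
(A toy illustration of the per-release pinning being discussed; the release-to-image mapping values are assumptions for the example, not the project's actual configuration.)

```python
# Hypothetical per-release image pinning: each Kubernetes release keeps the
# OS images it was originally tested against for its whole support window.
RELEASE_IMAGES = {
    "1.25": {"ubuntu": "ubuntu-2004-lts"},  # pinned when 1.25 was cut
    "1.26": {"ubuntu": "ubuntu-2204-lts"},  # newer releases pick up newer images
}

def image_for(release: str, os_name: str) -> str:
    # Look up only the pinned image; never fall through to a newer one,
    # so an OS update cannot introduce new failures on an old release.
    return RELEASE_IMAGES[release][os_name]

assert image_for("1.25", "ubuntu") == "ubuntu-2004-lts"
```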
E
Yeah, because Kubernetes customers really appreciate having a stable environment where they don't need to chase things: if they set up a specific Kubernetes version and specific OS versions, they can be guaranteed it will run for some time and that we keep testing that permutation, and they don't have to update the OS just because a new OS was released and we started testing with it, I think.
B
Oh, I agree. Once the release is cut, whatever OS it was tested against is the OS it stays on for the rest of the testing life cycle. Okay, so we're on the same page. It's not documented, but I should probably add that to the file that we wrote.
A
I think there's only one exception, and this is going to happen for COS for 1.28: right now we're using COS beta, but we plan to move it to stable. It's going to be the same version; it just changes the name, yeah.
E
But I think that's a timing problem that can be resolved.
B
Thank you. Unrelated: does the project have a defined set, like, of what it consists of? For instance, let's tier the operating systems, right? Like, hey, these are public operating systems, vanilla and modified by their vendors, whoever released them, whether that's Debian or somebody else. The...
B
...tests X version against a particular version of Kubernetes, and then we have all these other operating systems that we also test on, because the vendors have been very helpful to the project. You'd use those core OSes as your baseline, in the sense that, for example, you'd use Ubuntu and Rocky Linux as your base conformance OSes, and then you'd have extra operating systems to test on top of that, things like COS and AL2023, for example.
E
Yeah, I think I was told there was some history around trying to do vendor-neutral OSes: some distros were having problems and nobody was geared up to look at them. The problem was that distro owners don't have any desire to keep them updated for Kubernetes, and Kubernetes maintainers have different affiliations with different companies, and they would...
E
I mean, they have more levers and incentives to validate Kubernetes on those distros. So that's... yeah.
B
That's the validation we do on the current set of OSes that we test; it's just that we consider certain operating systems our baseline, like, for example, which OS conformance should pass on. I'd be very surprised if I created a Rocky Linux instance today, loaded one of these standard cloud-init files that we use, and for...
B
...some reason the conformance tests were failing. Like, how did that happen? Was there something funny going on? I know it could be somewhere.
E
Yeah, the conformance tests might be fine, but things like the graceful termination tests, where we rely on, what is it, the inhibitors that we set up — some OSes have slightly different behavior, and it was hard. I think there was a bug in some distros that we discovered, and it got stuck in this state of "the bug exists and nobody..." Okay, and some tooling is missing sometimes, so that also... I remember another example from graceful termination: there was tooling to send this inhibit signal, and I think on one distro it was missing, and tests started failing. So there are cases like that. I agree about conformance, but beyond conformance some tests are maybe a little bit too specific, and I'm not sure how much energy we have as a community to adjust.
E
Yep, that's maybe done. Okay, cool, yeah.
B
Yeah, that's fine! Okay.
E
Yeah, and for this specific PR, I want to review it. I want to merge it fast, so to land it faster you may need to limit the scope a little bit. It really depends on how...
B
...many you could imagine right now, because this is a clone of what we have today, just with different job names.
B
The plan is to make them green and then have conversations about other things. Perfect, thank you. Right now most of the jobs look similar to the other jobs except for running kubetest2, and they're testing the same scenarios.
B
Yeah, but it'll probably be like what Dims did for the pile of EC2 tests: they introduced things that were a bit red because we forgot some marks, and it never amounted to anything.
E
Okay, and also, given the amount of resources we spend: if you generate tests for both Google and AWS, you may need to adjust the cadence, so we run less frequently on both platforms rather than more frequently on one. So that may be...
F
And yeah, how are we deciding on the frequency with which the tests should run for each provider?
A
Yeah, I didn't see any new failures so far, so I guess it's just the usual failures.
A
It was great; it's in review, right?
A
Okay, this is this one... Okay, it got merged.
A
Yeah, let's go now. Okay, you have all this clean now — great. Actually, do you want to cover the bug triage? We have 20 minutes, more or less; that should be good enough. Thanks.
F
So we have around 12 bugs right now. The first one: "kubectl exec will wait for the command's background job to finish when exited," reported on containerd.
F
What happened here was: when running kubectl exec with the containerd runtime, the command waits until it has completed before returning the result to the client. For example, kubectl exec <pod name> -c <container name> followed by the bash command here will wait for 10 seconds before returning anything to the client terminal.
F
Yeah, just looking at it, this also happened when upgrading from an EKS version; it's waiting for SIG CLI to give an opinion. If you remove the ampersand or the -i flag, everything goes well. The ampersand means "run as a background job." I think this is expected... then, yes, I agree; the one way we can set the expectation for scripts is what you just explained.
F
If
you
exit
Linux
will
close
all
the
process
started
by
this
session,
so
I
think.
If
this
runs
in
the
background,
then
the
command
would
return
instantly.
But
if
it,
if
that's
not
the
case,
it
will
wait
for
the
command
to
finish.
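
(A small sketch of how the reported difference could be measured; the pod and container names are placeholders, and the ~10-second timing reflects the behavior reported for containerd, not a guaranteed contract.)

```python
# Hypothetical reproduction: with containerd, `kubectl exec` was reported to
# block until the backgrounded `sleep` finished, instead of returning as soon
# as the shell itself exits.
import subprocess
import time

def timed_exec(pod: str, container: str, command: str) -> float:
    start = time.monotonic()
    subprocess.run(
        ["kubectl", "exec", pod, "-c", container, "--", "bash", "-c", command],
        check=True,
    )
    return time.monotonic() - start

# Placeholder names; point these at a real pod to try it.
print(timed_exec("my-pod", "my-container", "sleep 10 &"))  # reported: ~10s on containerd
print(timed_exec("my-pod", "my-container", "true"))        # returns almost immediately
```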
F
But the problem is it worked differently before and it works differently now: it worked one way on Docker and works another way on containerd.
F
Mm-hmm. So we might want to just try it with CRI-O as well, but should we move it to another dashboard?
E
Can we remove this? Is this a bug, or what kind of feature is it?
E
Yeah, perhaps you can document the behavior — document the fact that the behavior might differ — in the documentation as well.
E
It would be best to document this behavior, so since you mentioned SIG CLI, tell them we want to update the documentation on that.
E
A
new
line
like
I
mean
write.
A
commands
should
be
on
the
same
separate
line
so
yeah.
F
This one: pods are stuck in NotReady status after the related node becomes NotReady for a few seconds and then is healthy again. From the kube-controller-manager logs, the pods won't be evicted and are still running. For one pod, for example: "update ready status of pod to false"; "event occurred: node VM is healthy again, removing all the taints."
E
Yeah, you can triage this as needs-information.
F
The annotation container.apparmor.security.beta.kubernetes.io is not documented: Kubernetes uses the annotation, but it is not registered or documented. We do already document an annotation for seccomp. We should document this annotation on the well-known labels, annotations and taints page; we need to register it, and if it goes out of use, that's accepted.
E
Yeah, if anybody's interested, take a look — that would be great. So it's AppArmor, right? I'd really like to cross-reference the enhancement issue for this kind of issue, so whenever somebody works on the enhancement, they will notice that this issue needs to be addressed as well.
E
I
can
try
to
look
up
enhancement.
Read
it
with
me.
F
Okay, I think this is the one that is already being worked on: app containers are started before init containers complete, on the node repo.
E
It's being worked on because it's a regression, as I mentioned, yeah; so it goes in the right column.
F
DisableAcceleratorUsageMetrics has been removed: the DisableAcceleratorUsageMetrics feature gate disables metrics collected by the kubelet, with a timeline for enabling this feature by default. According to the documentation, the feature gate has already been removed, so the section needs to be updated. Okay.
F
Next one: a pod evicted due to an emptyDir violation returns as Completed. If a pod is evicted because it violates the emptyDir storage limits, the pod will be evicted but marked as Completed. The expectation is the pod should be marked terminated or Failed rather than Completed; for a Job, the controller will take the pod to have succeeded even though it was evicted.
F
Okay, do you know what it is?
F
So, I think there is a call being made to unmount the volume, but the expectation is that there should never be a call to that particular thing. Let me see; I'm not completely sure.
F
Race window between the scheduler, a device plugin restart, and the kubelet. We restart a device plugin frequently as part of our design. When the device plugin goes down, the kubelet marks the devices it was managing as unhealthy, removing them from the allocatable table and preventing pods from scheduling. However, it keeps the resource in capacity for a grace period, in the hope that the device plugin will come back and already-scheduled pods can keep running with no disruption.
F
The
cube,
scheduler
reads:
node
State
and
chooses
nodes
to
schedule
the
bottom
waste
and
based
on
allocatable
resources.
If
the
device
plugin
restarts
between
Cube
scheduler
decision
and
the
tubular
tries
to
start
allocating
resources
for
the
Newport,
then
the
Pod
is
rejected
and
enters
the
phase
trade.
The
expectation
is,
poured
portrait
either
successfully
schedule
or
stay
in
pending
until
the
device
plugin
restarts.
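
(A toy model of the race being described, not kubelet's actual code; it just shows why a pod can be rejected even though the scheduler saw the device as allocatable.)

```python
# Toy model: during the grace period the node still advertises the device as
# allocatable, so the scheduler places a pod that admission then rejects
# because the plugin is not actually running.
from dataclasses import dataclass

@dataclass
class Node:
    advertised_devices: int  # what the scheduler sees (kept during grace period)
    plugin_up: bool          # whether the device plugin is actually running

def scheduler_places_pod(node: Node) -> bool:
    return node.advertised_devices > 0  # scheduler only sees advertised capacity

def kubelet_admits_pod(node: Node) -> bool:
    return node.plugin_up  # admission needs a live plugin to allocate the device

node = Node(advertised_devices=1, plugin_up=True)
node.plugin_up = False  # plugin restarts; capacity stays advertised for a grace period

if scheduler_places_pod(node) and not kubelet_admits_pod(node):
    print("pod rejected -> Failed, instead of staying Pending")
```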
F
So is this part of scheduling — SIG Scheduling or SIG Node? I think what's happening here is that when you restart the device plugin, the pods should stay in the Pending state.
E
Similarly to, like, if you try to schedule something and there is no CPU right at this moment, even if the kubelet is about to free some CPU, we would still reject the pod. The same thing, I think, happens with devices. I think Francesco dropped from the call; he was here before, and he may know more.
E
It
feels
like
Behavior
by
Design,
because
this
is
how
things
are
Paris
right
now.
So
if
you
fail
for
admissions
and
you
you'll
be
in
failed
state
I
think
it's
a
feature
request
more
than
a
bug.
J
So I think they're saying that when the device plugin goes down, the kubelet keeps the resources — keeps reporting them as allocatable — for a short period of time. I was just looking at that first link they had. Then the kube-scheduler schedules a new pod, because the node says it has the resource, but the plugin is still actually not there yet, and then it fails.
E
So they want pod admission to wait until the plugin has restarted completely.
J
Yeah, it's talking about... once this grace period expires, we'll actually delete the resources. So the plugin goes down, the grace period expires, and then the resource stops getting reported as allocatable. But in that intermediate time a new pod gets scheduled to the node, because it meets the scheduling requirements, but the resource is actually not there, and then it fails.
E
So what they want is for the kubelet to be able to recognize that the device plugin is down, so pod admission needs to wait a little bit. Yeah, I think it's a feature request then. Yeah, Dixie, did you get that? Can you write it up in a sentence — that pod admission is not designed to wait?
F
The kubelet gets ConfigMaps from the kube-apiserver and etcd every time; the QPS of ConfigMap GET requests is too high on the kube-apiserver. When there are many ConfigMaps in the cluster mounted by running pods, the QPS of ConfigMap GET requests will be high, and the requests also need to fetch the ConfigMap resource from etcd instead of being served from the kube-apiserver cache.
F
So
what
would
you
expect
to
happen?
Do
not
get
config
map
from
hcd
get
it
from
cache
of
cube
server.
Cube
API
server
set
this
to
set
cubelet
config
map
and
secret
chain
reduction
strategy
to
cash
set.
This
to
this
create
lots
of
config
map.
A
I think it was more a question of why it sometimes goes directly to the kube-apiserver instead of reading from memory. But honestly, I don't recall that much.
C
To understand this: they want to reduce the number of requests the kubelet makes to the API server, and use the cached value of the ConfigMap instead of whatever latest value it gets from the API server. Is that what they're saying? Scroll down to this comment — there's a comment there which says...
F
So... can someone explain? Because I couldn't follow.
F
Yeah, Jordan's comments, basically.
E
Yeah
we
intentionally
like
there
is
a
cache
of
config
Maps,
but
when
we
register
Port,
we
try
to
re-query
this
config
map,
even
though
it
may
already
be
in
Cache,
and
we
do
it
and
actually
to
get
a
fresh
version.
The
apparently
there
is
a
bug
that
we
call
this
register
report
on
every
resync,
and
since
we
do
that,
then-
and
we
have
this
logic
of
obtaining
a
fresh
config
map,
we
query
this
flash
config
map
a
little
bit
too
often.
So
you
want
to
minimize
number
of
the
stuff.
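
(A toy sketch of the pattern just described, not kubelet's actual code: deliberately bypassing the cache on pod registration is cheap once per pod, but becomes expensive if registration is re-triggered on every resync.)

```python
# Toy model of the described bug: fetching "fresh" on every registration is
# fine once per pod, but re-registering on each resync multiplies API GETs.
class ConfigMapStore:
    def __init__(self) -> None:
        self.cache: dict[str, str] = {}
        self.api_gets = 0

    def get(self, name: str, fresh: bool = False) -> str:
        if fresh or name not in self.cache:
            self.api_gets += 1  # hits the API server (and possibly etcd)
            self.cache[name] = f"data-for-{name}"
        return self.cache[name]

store = ConfigMapStore()

def register_pod(pod_configmaps: list[str]) -> None:
    for cm in pod_configmaps:
        store.get(cm, fresh=True)  # intentional: always fetch a fresh copy

# The reported bug: registration re-runs on every resync.
for _ in range(100):
    register_pod(["app-config"])

print(store.api_gets)  # 100 GETs for a single pod and a single ConfigMap
```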