From YouTube: SIG - Performance and scale 2022-11-10
Description
Meeting Notes:
https://docs.google.com/document/d/1d_b2o05FfBG37VwlC2Z1ZArnT9-_AEJoQTe7iKaQZ6I/edit#heading=h.tybh
A: Okay, welcome to SIG Scale. It's 11/10/22, November 10th, 2022. Meeting notes are linked above. I think so, Brian, maybe we can start with you and talk through what you found in this channel.
B: Yeah, hey. So yeah, it turns out that for the regular jobs running on the prior workloads cluster, we had to increase the memory on the test nodes by a gigabyte. It looks to be due to a bump in the memory usage of some of the KubeVirt components, so I've opened an issue on KubeVirt hoping to get an investigation into it. At least I think I've identified the PR that probably introduced the break. Yeah, this PR exactly: it just bumps the memory requests for a couple of the components, which basically prevented the job from completing, because it left a couple of VMIs in the Scheduling state. So the jobs that are running on the prior workloads cluster should be okay now.
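The scheduling failure described above (VMIs left in the Scheduling state after a PR bumped component memory requests) comes down to the scheduler's basic resource check: the sum of pod memory requests on a node may not exceed the node's allocatable memory. A minimal sketch of that check follows; all numbers are hypothetical, chosen only to illustrate how a small request bump can push a tightly packed test node over the line, not the actual figures from the CI cluster.

```go
package main

import "fmt"

// fitsOnNode reports whether a set of pod memory requests (in MiB) can all
// be scheduled onto a node with the given allocatable memory (in MiB).
// This mirrors the scheduler's basic fit check: sum(requests) <= allocatable.
func fitsOnNode(allocatableMiB int, requestsMiB []int) bool {
	total := 0
	for _, r := range requestsMiB {
		total += r
	}
	return total <= allocatableMiB
}

func main() {
	// Hypothetical test node with 3 GiB allocatable memory.
	// "before" sums to 2820 MiB and fits; after a request bump,
	// "after" sums to 3080 MiB and no longer fits, so the last
	// pod (or a VMI) stays Pending in the Scheduling state.
	before := []int{270, 350, 400, 1800}
	after := []int{330, 450, 500, 1800}
	fmt.Println(fitsOnNode(3072, before)) // true
	fmt.Println(fitsOnNode(3072, after))  // false
}
```

This is also why the one-gigabyte-per-node increase mentioned later in the call unblocks the tests without leaving much headroom: it only restores the margin the request bump consumed.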
B: There's a separate issue on the performance cluster, and I was hoping to get an idea of who owns that cluster at the moment, who's paying for it. Because when I was looking at it, it looks like it's on CentOS 8, and the issue that we're seeing may require an upgrade of containerd, which would be difficult on CentOS 8 because the repositories are no longer active.
A: IBM was the one who owns this, and Marcelo was the one who brought this in. The person that was going to take Marcelo's place, his name is Lee; I can give you the full email, leeworks, Red Hat. They've been the ones doing it, so ever since Marcelo hasn't been around in KubeVirt, there haven't been many changes to this cluster.
B: I can do some digging on my end as well. I've been asking a few other people in the community just to see if they know what the story of that cluster is, because even just maintaining that cluster now... it's on, I think, Kubernetes 1.21, and it's on that old version of containerd. Basically, what we're seeing (I think I sent it to you on Slack after the meeting last week) is an issue just with the virt-operator starting up. It needs permission to carry out a certain operation, and that fails. And from asking a couple of other people in the community as well, it looks like we would have to upgrade containerd. I think we're still doing some investigation there, but it's really down to who owns the cluster now and what we can do with it, basically, because it's on CentOS 8, which is no longer supported.
A: Yeah, okay, all right, thanks for the note. I sent you the person I know. So for next steps, I guess, let's see what they say. I don't know the person, so maybe let's see what they say. Maybe we can, I don't know, move to CentOS Stream and then move to 1.24, 1.25, and then, yeah.
A: There we go, that sounds good. Okay, all right. I did want to talk about the issue you opened, because... let me know your opinions on this, people, because I think we've run into this a few times, right? We've seen this, I don't know, I think it's like the fourth time in the last six months where we've had to bump the memory.
B: We've had to respond to memory bumps a few times; it was just consistently happening. So I was hoping for some kind of investigation. Even the release notes on that PR say that there should be an investigation carried out. So that's my kind of thinking behind opening that issue: just to have an issue there that may force an investigation into the memory usage.
B: I would agree, but I'm guessing in bigger deployments people wouldn't be running so close on memory. I guess in sig-performance we're kind of... since the increase I did, you know, I only increased the memory by one gigabyte per node, just to get the tests unblocked, so we're still very close to the edge with regards to our memory limits on the nodes. So I don't know if we'd see this in bigger deployments, but obviously it could have an impact, yeah.
A: Yeah, I'm thinking in terms of... I mean, we're talking megabytes here, but I know, even internally, we try to squeeze as much as we can onto nodes. Like, you know, we look at virt-handler, right, the 270 megabytes, versus the workload that we want. virt-handler is the control plane, right? It's not the... we want to prioritize the workload, and so this going up isn't good, right? I mean, it's fine, but we just want to be aware of it, because perhaps we've made assumptions about our workload based on our memory size, CPU, whatever usage; we've adjusted and tuned according to those things, and now this is changing.
A
So
it's
it's
that
kind
of
thing,
right
that
and
then
like
that's
what
that's
why
I
think,
like
you
know
when
we're
like
I
think
CI
is
a
perfect
example
like
CI
is,
is
almost
like
a
vendor.
It's
like
the
sort
of
just
the
Upstream
vendor
right
and
it's
breaking
us.
You
know
and
we're
like
you're
saying
we're
up
against
the
limit.
You
know
it's
it's
sort
of
like
it's
sort
of
like
that.
I
I
just
wanted
to
make
sure
like
if
we,
if
people
are
doing
this,
like
change,
I,
understand
the
reason
behind
them.
A
I
just
wanna
make
people
aware
that
this
is
going
to
affect
a
lot
of
people
and
I.
Don't
want
to
I
want
to
just
make
sure
we're
like
we're
shouting
as
loud
as
we
can
about
these
changes
and
I
don't
know.
Maybe
we
need
to
raise
something
in
the
community
college
just
so
that
people
are
aware.
Like
that
doing
this
kind
of
change
is
is
important,
but
we
just
want
to
you
know,
be
careful.
A: I agree. I think a release note, or, like, out of this... Andrew's talked on the mailing list, and there's a PR, actually, about changing how we deal with these notes. You know, maybe, as a part of SIG Scale, we have a single release note where we say: SIG Scale, memory is changing from this, from the past release. Just to cover all the cases where this happened in a single release, or something like that.

A: Maybe that's something we can do. I don't know, that's just an idea for how we can communicate this or something.
A: Okay, so we have this issue (oops) to better investigate this. So, all right, we'll continue to track this and see where we can fit it in upcoming calls. Okay, I think so. So is this in the periodics now, let's see, or the pre-submits? So should this... I don't know when this change merged. Did this change just go in? Yeah.
A: Pretty soon, okay, all right. So then we don't have anything to review here. Okay, that's fine. All right, I think that's all I had for topics for today. I just wanted to get an update and get some opinions on how we actually communicate this stuff. And then I think next time maybe we can aim to try and learn a bit about the testing, and about the performance cluster and how we can change some things there.
A: Okay, like the title. Okay, is this... what? Oh, something? Okay.
D: I would like to know who I can talk to, to find out what the plans are. Okay.
A: I'm just reading, just a second. So: separation of client-go and the API.
A: This looks like... oh, okay, so this is what's happening, there we go. So KubeVirt is breaking out its API into its own repo, just like the way Kubernetes did with apimachinery, so that it can be easily vendored. That's what this change looks like. And then, I guess, maybe client-go is... yeah, client-go.
A: So it looks like... oh, so what's the path? Yeah, it should be... okay. So instead of vendoring in kubevirt, KubeVirt's API, I think it is, you would go to kubevirt.io/api now. I think that's what it is. Does it show the vendoring in here, as an example? Let me see.
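The repo split being discussed mirrors what Kubernetes did with apimachinery: the API types move into a small module that consumers can vendor on its own, instead of pulling in the whole main repository. A rough sketch of what the import change looks like follows; the module paths are my best recollection of the kubevirt.io layout, so confirm the exact packages against the mailing-list thread before migrating.

```go
// Old approach (illustrative): vendor the whole kubevirt.io/kubevirt
// module just to get the API types, dragging in its full dependency tree.
//
// After the split, the types and the client live in dedicated modules:
import (
	kubevirtv1 "kubevirt.io/api/core/v1" // API types only, cheap to vendor
	"kubevirt.io/client-go/kubecli"      // generated client for those types
)
```

Consuming only kubevirt.io/api keeps a controller's go.mod free of KubeVirt's internal dependencies, which is the dependency-management benefit raised at the end of the call.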
A
It
looks
like
Roman
and
well
when
Mike
Hendricks
were
discussing
it
on
the
Mindless
I
think
I
mean
Andre.
I
would
just
respond
to
the
thread.
This
is
so
I
mean
your
question
is
that
what's
the
what's
the
New
Path
looks
like
yeah,
you
know
I
would
ask
on
the
millions
and.
A: Yeah, so you should, I think. Let me just see if they're both supported now in KubeVirt, because...
A: Well, I would report some bugs. I would report the bugs. I think this is the future; this looks like the future. So I would report the bugs, and, I mean, if you want to migrate early, I think this is...
D: Yes, and talk to... yeah.
A: Respond to this thread and ask. Oh, this is from 2021? Oh wow. So, yes, that's why... I didn't realize.
A
Okay,
no
I
I
mean
I,
think
I
mean
it
looks
like
they're.
Both
gonna
be
remain
supportive.
This
has
been
going
on
since
2021.
C
Could
it
be
possible
that
this
happened
in
stages,
because
I
know
that
the
the
API
was
put
into
its
own
directory
a
while
back
like
six
months
or
eight
months
back,
and
then
the
next
step
would
be
to
break
it
out
in
the
separate
depository
foreign.
A: I didn't realize; I didn't see the date. So it was all done in 2021, and it looks like, I mean, this looks supported. I think it might just be that you're running into some bugs in some of the dependencies. I would just report the bugs, Andre. I think this is the path to go with, client-go and the API. I think this is the right way you want to consume this; it's just less dependency hell. What version are you running, by the way? That would help.