From YouTube: 2021-05-13 Kubernetes SIG Scalability Meeting
A: All right, welcome everyone to our SIG Scalability community meeting. We already have an entry for today; let me fill in the attendees. I think you can start, so if you could quickly summarize how we can help you and what you would like to know about Kubemark and its potential.
B: First of all, thank you for letting me present. This is Fabrizio Pandini; I work on SIG Cluster Lifecycle and Cluster API. If you let me share, I have a few slides to try to keep this short and sweet, and then we can have a discussion.
A: Go ahead. Do I need to give you some permission to share, or can you just... you probably have to.
B: Fine, so let me present... yeah, I will present the desktop. So, okay: why Cluster API on Kubemark?
So, we started to look at Kubemark because we want to stress Cluster API itself, and this means creating a lot of nodes, and machines hosting nodes, and also we want to easily test the cluster autoscaler. So this is why we basically started looking at Kubemark, and... yeah, after looking at this.
B: So we want to gather some feedback. We want to see if you are interested in somehow collaborating on developing this idea, which is really at an early stage, and also we want to find out if there is someone who can help us or validate our approach. So this is more or less the context.
B: I'll give you a five-second introduction to Cluster API; probably everyone knows Cluster API already, but I don't know. Cluster API basically allows you to use a declarative approach to manage your clusters, so you can do kubectl create for your cluster, given a spec, or you can scale your machines using kubectl scale. It's fully declarative in terms of architecture.
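
A minimal sketch of the declarative flow being described, using the v1alpha3 API that was current around the time of this meeting; the names and the Docker infrastructure provider are illustrative:

    # Everything is a Kubernetes object: create the cluster by applying specs,
    # and scale workers with kubectl (names here are hypothetical):
    #   kubectl apply -f my-cluster.yaml
    #   kubectl scale machinedeployment my-cluster-md-0 --replicas=10
    apiVersion: cluster.x-k8s.io/v1alpha3
    kind: Cluster
    metadata:
      name: my-cluster
    spec:
      clusterNetwork:
        pods:
          cidrBlocks: ["192.168.0.0/16"]
      controlPlaneRef:
        apiVersion: controlplane.cluster.x-k8s.io/v1alpha3
        kind: KubeadmControlPlane
        name: my-cluster-control-plane
      infrastructureRef:
        apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
        kind: DockerCluster
        name: my-cluster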
B: So I've tried to create a demo that could be interesting for you; I looked at the Kubemark tests in kubernetes. Basically, for what I'm going to demo, I have a window where you will see three separate tabs. The first one will be the management cluster, which is a local management cluster in kind, and in this management cluster I basically applied the specs for two clusters. The first one is my external cluster.
B: Basically, this is a standard Cluster API cluster, where I want the hollow pods to run, and the second one is the actual test cluster, which is a special kind of cluster, and this is basically where I want all the hollow nodes to join. Does it make sense as a setup? Because, you know, I tried to derive this from the scripts. Is it okay?
A: Yeah, it makes sense. In the Kubemark tests that we run, we also have two clusters. There is always one cluster which is just the control plane, so just the master machines, and another one, we sometimes call it the root cluster, where we basically schedule the hollow node pods. So it's a similar setup.
B: But then I will have all the worker nodes deployed using a new kind of machine that I call a hollow machine, and you can guess why: because it's basically backed by the hollow pod, the hollow node, from Kubemark. And this is very simple to achieve in Cluster API, because basically when we define the machine deployment, the machine deployment that defines the nodes, you can specify the infrastructure ref, which means the type, or the template, for your machine.
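
A sketch of what such a machine deployment could look like; the HollowMachineTemplate kind, its API group, and all names are hypothetical stand-ins for the experimental "hollow machine" type in the demo:

    apiVersion: cluster.x-k8s.io/v1alpha3
    kind: MachineDeployment
    metadata:
      name: kubemark-md-0
    spec:
      clusterName: kubemark        # the test cluster the hollow nodes join
      replicas: 0                  # scaled up during the demo
      selector:
        matchLabels: {}
      template:
        spec:
          clusterName: kubemark
          version: v1.21.0
          bootstrap:
            configRef:
              apiVersion: bootstrap.cluster.x-k8s.io/v1alpha3
              kind: KubeadmConfigTemplate
              name: kubemark-md-0
          infrastructureRef:
            # Hypothetical kind: each machine is backed by a Kubemark hollow
            # pod scheduled on the external cluster, not by a VM.
            apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
            kind: HollowMachineTemplate
            name: kubemark-md-0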
B: So basically this is a kind of wrapper on top of the pod that I'm going to generate. And so, trying to go really fast: here is my management cluster, where I have, let me say, the Cluster API view of the world. So I can do kubectl get clusters, and I have two clusters, both of them provisioning.
B: Let's see, I have some machines in this cluster. The first cluster, which I called the external one, currently has two machines: the control plane and a worker node. And if I go into this cluster I can see, let me say, the Kubernetes view of the world. So I can do kubectl get nodes, and it has two nodes: one is a control plane and the other is a worker, just to establish a baseline.
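
The two views being flipped between, as commands (cluster and kubeconfig names illustrative):

    # Cluster API view, against the management cluster:
    kubectl get clusters
    kubectl get machines
    # Kubernetes view, against the workload ("external") cluster itself:
    kubectl --kubeconfig external.kubeconfig get nodes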
B
Currently,
there
are
no
pods
running
in
this
cluster
and
yeah,
and
the
second
cluster
is
the
cluster.
Basically,
where
I
I'm
going
to
run
my
stress
test,
and
currently
it
has
only
a
control,
plane
and
yeah.
We
can
see
also
here,
kuber
cattle
get
the
nodes,
so
it
is
a
cluster
with
only
a
control,
plane
and
yeah
that
definitely
enough.
Nothing
running
here,
get
bot
okay.
B: So in this cluster I already have the machine deployment that I just showed you, and basically now I'm going to scale it up to one replica to show what happens.
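
The scale-up uses the ordinary scale subresource, the same as for a Deployment (the machine deployment name is hypothetical):

    # Against the management cluster; each added replica becomes a hollow pod
    # on the external cluster that registers as a node in the test cluster.
    kubectl scale machinedeployment kubemark-md-0 --replicas=1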
B: Two things get created here. One is basically the kube-proxy secret, which is shared across all the pods, and the other one is the kubelet kubeconfig; I created a kubeconfig for each node, because I saw that you are interested in testing the node authorizer and things like that.
B: If we look at this cluster, we see that the only pod that is running here is registered as a node into the test cluster, and, just to prove that we are not polluting this one, I don't have pods running here. So basically, this is all. I can of course scale up, and, let me also say, the Cluster API view of the world will show that machines are provisioning here.
A: So let me start by saying this is simply amazing to me. I love it. I think it's actually the way we should probably go, if we can make it work. I assume it's possible, right, because Cluster API already supports the GCE provider? So can we create a GCE cluster with Cluster API?
A: I assume the answer is yes. If so, then we can probably use this approach to set up our Kubemark clusters, because this seems so much better. Basically, what we currently have to set up Kubemark is a bunch of bash scripts, and they're really hard to manage. Also, it's really hard to explain to anyone else how to create a Kubemark cluster using a different provider, so with Cluster API that gets so much easier.
B: Yeah, basically what you need is two YAMLs: one for creating this cluster and the other for creating the other one. But yeah, first of all, thank you for the feedback; I'm happy that you find this interesting.
B: I think it makes sense to basically pair together: us, the people from the CAPG provider, Carlos Panato, and maybe also your teams, and try this. If you are interested, for me and for the Cluster API community this is a huge win, because basically we will get a direct connection with the people maintaining Kubernetes, and a new set of use cases and feedback for Cluster API, which is just amazing.
A: Yeah, you will also get something out of it: if we migrate our test setup to this, then you get some scale tests, right? Because we are creating Kubemark clusters of 5,000 nodes, so that is some stress that you will probably be interested in. Obviously you will probably need more use cases, like testing adding and deleting machines; we don't do that in our tests, but we could. So I believe we are open.
A: Probably. So my only question is, I want to understand: is it currently possible with Cluster API to create a cluster using the GCE provider, so basically setting up the VMs on GCP?
B: Yes, it is possible. We already have something there, because there is the Cluster API provider for Google, and there are already tests running in Prow doing so.
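
For reference, a management cluster can be initialized with the Google provider via clusterctl; a sketch, assuming the standard CAPG setup of that era:

    # Requires GCP credentials exported per the CAPG book,
    # e.g. GCP_B64ENCODED_CREDENTIALS, before initialization.
    clusterctl init --infrastructure gcp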
B: But there is still room for improvement, because we are basically building the GCE images to use with the latest Kubernetes version, so the setup time is a little bit slow. But let's maybe open an issue and try to sort out what you need, and then we will eventually get there.
B: There is already something there: if you look at how I set up the control plane, I basically used kubeadm as a provider, and there is this object, the cluster configuration, where you can go and set API server flags. Eventually you can also set kubelet flags for the nodes joining; I don't know if this is required, or if we do this via the pod, but it's there.
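
The object being referred to is the kubeadm configuration embedded in the KubeadmControlPlane; a minimal sketch in the v1alpha3 layout, with illustrative names and flag values:

    apiVersion: controlplane.cluster.x-k8s.io/v1alpha3
    kind: KubeadmControlPlane
    metadata:
      name: kubemark-control-plane
    spec:
      replicas: 1
      version: v1.21.0
      infrastructureTemplate:
        apiVersion: infrastructure.cluster.x-k8s.io/v1alpha3
        kind: DockerMachineTemplate      # illustrative infrastructure provider
        name: kubemark-control-plane
      kubeadmConfigSpec:
        clusterConfiguration:
          apiServer:
            extraArgs:
              max-requests-inflight: "800"   # any API server flag can go here
        joinConfiguration:
          nodeRegistration:
            kubeletExtraArgs:
              max-pods: "200"                # kubelet flags for joining nodes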
A: We can check that later, but that seems promising. All right, that was my feedback. What do you guys think?
C: Yeah, I just wanted to say that I see here a really big opportunity for deprecating our current Kubemark scripts, so this is more or less what was already said. I was making some modifications to those scripts, and basically they're completely ugly, so I think if we manage to deprecate that and migrate to some cleaner solution, that would be good.
C: This approach seems to me like it would definitely be a good thing. Another thing I was trying to think about: is it possible here to configure a Kubemark cluster with more than one control plane machine? Because with the old scripts it's not possible, but maybe with this approach it will be easier to do, and we'll be able to test HA masters.
C: Yeah, one more thing: in fact, we can also try to think about whether we can use this approach for creating regular clusters. I'm not even thinking about Kubemark, but also our regular tests, because currently we are using kube-up, and there is no way to create multiple masters there, or it's kind of hard to do so in the Prow environment. So maybe we can think about migrating to that, yeah.
A: I actually had this question; I was wondering why this hasn't come up when we were discussing kubeadm, when we decided kubeadm alone is not enough, because we need something more than kubeadm to basically set up the VMs, and it looks like Cluster API is doing exactly that, right? So I wonder why we just haven't thought of that. But that's promising.
A: Also interesting with the control plane scaling. The caveat I see is that to scale the control plane you need to actually reconfigure etcd, right, basically join the replicas to the cluster, so it might be more complicated. But if it already works, great; if not, that's also something we can figure out. I definitely like this approach better than doing that in a bash script.
B: I can give you a little bit of background about etcd. kubeadm by default runs etcd in stacked mode; that means etcd is basically deployed on the control plane machine and runs as a static pod. But there is work going on upstream that basically introduces an external etcd manager, so you can provide, you know, your own etcd and tweak everything. It is not there yet, but we also have interesting things on the roadmap for how we deploy etcd.

A: That's cool, that's cool.
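
For comparison, kubeadm's ClusterConfiguration already models the external (non-stacked) topology; a fragment with placeholder endpoints:

    # kubeadm ClusterConfiguration fragment: point the control plane at an
    # externally managed etcd instead of the default stacked static pod.
    etcd:
      external:
        endpoints:
          - https://10.0.0.10:2379   # placeholder endpoint
        caFile: /etc/kubernetes/pki/etcd/ca.crt
        certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
        keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key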
D: Yeah, I mean, I'll mostly repeat what was already said. Basically, every single startup script that we are using, we don't want to maintain; we want to use things that are basically standard and supported, ideally by SIG Cluster Lifecycle or whoever else will be doing so. So the more of those scripts that we can deprecate and eventually remove, the better.
D: So if we can make it work, especially given all the benefits here that we get, like being able to run HA and so on and so on.
B: Let me say that the person in charge of the Google provider is helping, but he is also, you know, everywhere, and these people can unblock a lot of things. So let's open a thread, or an issue, loop in these people, and try to figure out what comes next. Okay.
A: I believe a reasonable first step is basically trying to build a proof of concept, like some small Kubemark test, 100 nodes or something like that, set up using Cluster API, right? So we can open an issue and work together on this.
B: Yeah, this could be, let me say, the goal of the first step.
A: Then we can figure out what to do from there, probably migrating all the tests eventually, and even going further: as was said, we can even think of using Cluster API for our other scalability tests, the non-Kubemark ones. That sounds really promising.
B: Okay, so I will start a thread, looping in the teams and Carlos. The goal is to have a first Prow job that tests 100 machines on Google as a provider, basically. Okay.
A: Thank you for all the good feedback. All right, thank you so much; great demo, and that's super promising, so let's work on this. All right, we have two more things on the agenda, but they are not critical. Maybe, on perf-dash, we can discuss a bit what's the status and whether there's any work there. Do we have anything else to discuss?
A: We have three more minutes, so if there's anything... okay, I assume there is nothing. What is the status of perf-dash? Currently, I know you reverted it.
C: Yeah, so I tried... in fact, I merged the change enabling the node auto-provisioner in our cluster, and we have more resources there, but unfortunately the new perf-dash version requires a lot of resources in terms of memory, so basically even with 32 gigabytes it's still constantly crashing.
C: So I decided to roll it back to the old version, and the current thinking about resolving that is to take the difference between the old version and the new version and enable it conditionally, using some flag: in the open source deployment we'll be disabling it, and in our internal perf-dash deployment we'll be enabling that option. But that hasn't happened yet, so I guess we need some volunteer to implement that flag.
A: It was my change; I can find it. In the parsers we have defined in perf-dash, we look at the name of the file, which is the name of the test. So, as a result, if you name a test differently than "load" or "density", you don't have any results in perf-dash. I added this catch-all parser that basically ignores the prefix, but as a result some of the test results are now duplicated: we have them both under, for example, "density" and under the catch-all one.
A: I did that because we had some tests that weren't named "density" or "load", so that was reasonable, but I wasn't aware how much memory this would require in our perf-dash deployment as well. All right, so can we open an issue for this? And I think, marking it as a good first issue, we will probably be able to find someone from the community.
A: Okay, and I believe we have just one more minute. Oh, I'm not presenting, but anyway, we can discuss this later, at the next meeting. All right, so thank you, everyone! I need to find...
A: Where is this Zoom button? I never know. Here, yeah. So thank you for joining us today; thanks again for a great demo, and see you in two weeks. Bye. Thank you. Thank you.