A: Let me introduce you again, as always. For the next talk, it's a pleasure for me to introduce Marcelo again, who will talk about extending kube-burner to support KubeVirt CRDs. I'm looking forward to hearing more about that, and feel free to ask any questions in the chat. Marcelo, all yours.
B: Thank you. Thank you, Roman, and thank you, everyone, for attending this presentation. As Roman mentioned, I will present the extension to kube-burner to support KubeVirt objects being created in both Kubernetes and OpenShift clusters, in order to test performance, most specifically the performance of the control plane: the CRDs and also the control-plane controllers.
B: Okay, before the introduction: the motivation for this work is the increasing popularity of VMs running on Kubernetes, which is what KubeVirt provides. Because of that, there is also increasing interest from the KubeVirt community in the performance and scalability of the KubeVirt control plane, and so it is useful to have a tool that helps measure and test those things.
B: It's well known that Kubernetes can safely scale to 4,000 nodes, for example, while creating a lot of objects. However, as expected, Kubernetes does not provide any guarantees for third-party CRDs, for example KubeVirt's, and because of that, due to the lack of CRD support,
B: typical Kubernetes benchmark suites do not support them; they only support plain Kubernetes resources, again as expected. So the idea is to extend one of these benchmark tools to be able to test KubeVirt objects. The best-known benchmark projects for testing Kubernetes resources are, first, Kubernetes itself with ClusterLoader2, the big tool that Kubernetes uses for its performance tests. It's a very complex tool that actually also creates the clusters, and it's specific to Kubernetes, so we cannot easily submit PRs to extend it. Then we also have VMware's K-Bench, which likewise creates plain Kubernetes resources and tests them, and OpenShift also has kube-burner. In this talk we are going to focus on kube-burner.
B: Okay, so, regarding the motivation: KubeVirt control-plane performance analysis is challenging, especially because of how we configure representative workloads, for example how we easily configure different kinds of VMIs with different configurations. As we have been seeing in all the presentations at the KubeVirt Summit, there are very different configurations that we can apply to VMs, and how do they impact performance?
B: So a tool should help with that, with measuring the performance: what do we measure, which timestamps can we get from, for example, VM creation (I'm going to talk about that in the next slides), and also which kinds of metrics are interesting to collect to measure the performance. The benchmark should help with all of this.
B: So, therefore, we have extended the kube-burner benchmark to support CRDs, to create KubeVirt objects, and to collect detailed latency information for deep performance evaluation.
B: It can, for example, create thousands of pods, delete them, and get detailed latency information, which I will describe later. Kube-burner is designed to be compatible with plain Kubernetes; it also supports some custom OpenShift CRDs, but it can be used with any Kubernetes cluster, it is not OpenShift-specific. Kube-burner is written in Go, and we can easily configure the throughput of object creation through the client-go library that talks to the Kubernetes API server; I will also talk about that later.
B: So kube-burner has a configuration file that defines a test, for example a burst test, a density test that creates many objects. This is the default kind of test that kube-burner runs. The global configuration can have writeToFile, which means it dumps the metrics to a file, and those metrics can later be indexed into some other database, for example Elasticsearch, and then the data can be visualized with a Grafana dashboard; or you can just keep the data locally and parse it to do your performance analysis. The other very interesting thing is the measurements it has, for example the pod latency measurement. It collects detailed pod creation latency: it observes the timestamps of the different pod conditions, so for each object we get the latency of the phase and condition transitions, and we can show all of these metrics. The measurements can also have thresholds.
B: It can define thresholds, for example on when the pod reaches the Ready condition: you take the average over all the pods reaching the Ready condition, you put a threshold on it, and then the execution of the test will succeed or fail depending on that threshold. This is also good for CI/CD systems, where we want to run tests that fail depending on some specific condition.
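As a sketch, a global section with metric dumping, optional Elasticsearch indexing, and a pod-latency threshold might look like the following; the field names follow kube-burner's configuration format as I understand it, and the server URL and index names are placeholders.

```yaml
global:
  writeToFile: true              # dump collected metrics to local files
  metricsDirectory: collected-metrics
  indexerConfig:                 # optional: index metrics into Elasticsearch
    enabled: true
    type: elastic
    esServers: ["https://elastic.example.com:9200"]  # placeholder
    defaultIndex: kube-burner
  measurements:
    - name: podLatency           # watch pod conditions and report latencies
      thresholds:
        - conditionType: Ready   # fail the run if the average Ready
          metric: avg            # latency exceeds this value
          threshold: 5000ms
```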
B: Okay, and kube-burner also has the definition of jobs. A job can create pods, for example. When we run a test, the job has a number of iterations, so it can run many times. Then, as I was saying before, it has the object creation rate: the QPS, queries per second, and the burst, which are basically the Kubernetes client-go configuration for the request rate, so we can control the rate at which we create objects. There is also namespacedIterations: each iteration can get its own namespace, so instead of putting all the objects in one namespace, we can run more iterations with namespacedIterations set to true, and then each iteration will have a different namespace. It's well known that Kubernetes scales less well when there are more namespaces, so that's something interesting to test. The other interesting option is waitFor: it waits on a resource, for example waitFor the Pod objects, and this waits for the Ready condition, so it waits for all the pods to be in the Ready condition. There is also a specific podWait parameter that means the same thing for pods. Then the objects themselves are defined with templates.
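A job definition along these lines might look like the following sketch; the job name, counts, and template path are illustrative.

```yaml
jobs:
  - name: pod-density            # hypothetical job name
    jobType: create
    jobIterations: 100           # run the object list 100 times
    qps: 20                      # client-go queries per second
    burst: 40                    # client-go burst size
    namespacedIterations: true   # one namespace per iteration
    namespace: kube-burner-test
    podWait: false
    waitWhenFinished: true       # wait for objects to reach the desired state
    objects:
      - objectTemplate: templates/pod.yml
        replicas: 1
```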
B: So it's very flexible: you can have a pod template, a number of replicas, which is the number of objects that will be created from that template, and inputVars, which are user-defined variables. The template can have many internal variables, and those variables can be defined there. It's also flexible in that you can use environment variables, bash environment variables, inside the template when rendering the templates.
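For illustration, an object entry with a user-defined variable and the template that consumes it might look like this; the variable name and image are hypothetical.

```yaml
# In the job definition:
objects:
  - objectTemplate: templates/pod.yml
    replicas: 1
    inputVars:
      containerImage: quay.io/example/sleep:latest  # hypothetical image

# templates/pod.yml:
apiVersion: v1
kind: Pod
metadata:
  name: sleep-{{.Iteration}}-{{.Replica}}   # built-in template variables
spec:
  containers:
    - name: sleep
      image: "{{.containerImage}}"          # user-defined via inputVars
      command: ["sleep", "inf"]
```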
B: Kube-burner also collects detailed latency information. It uses list/watch informers that watch for events, so it sees the create and update events for all the objects, keeps them in a map, and then matches up all the latencies related to each object. For example, from these events, when a pod is created it records the timestamp. Note that the field is not called something like "pod created"; it's just "timestamp", because this field is used later to index the metrics in Elasticsearch.
B: That timestamp is actually the pod creation time. However, there are others with more meaningful names, like the scheduling latency, which is the time at which the pod enters the scheduling phase, that is, the timestamp of the scheduling condition minus the pod creation time; that gives us the scheduling latency of the pod. Then, across all the pods created, we have a lot of samples, and it also reports quantiles, P99 and P95, that can be used later for analysis.
B: Okay, so KubeVirt is a Kubernetes add-on for creating virtual machines. The KubeVirt project extends Kubernetes to create virtual machines by using custom resource definitions, CRDs, with which you can describe and create different objects, and then you also have the controllers that manage those objects, and so on.
B: KubeVirt defines, basically, three main sets of CRDs so far. The basic, fundamental one is the VirtualMachineInstance, the VMI, which defines the properties of the virtual machine. KubeVirt also has the VirtualMachine CRD, which actually sits on top of the VMI: it creates the VMI, and it behaves in a way that lets a user operate a VM as they would in a public cloud or on OpenStack, meaning you can start, pause, and shut down VMs using this object. So a VM can create a VMI object. And then we have the replica set, the VirtualMachineInstanceReplicaSet, which is analogous to the Kubernetes ReplicaSet but for VMIs, so it can create many identical VMI objects from only one object, the replica set. We have extended kube-burner to understand all of these objects.
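For reference, a minimal VirtualMachineInstance manifest looks roughly like this; the image and memory size are illustrative.

```yaml
apiVersion: kubevirt.io/v1alpha3
kind: VirtualMachineInstance
metadata:
  name: vmi-example
spec:
  domain:
    resources:
      requests:
        memory: 64Mi             # illustrative sizing
    devices:
      disks:
        - name: containerdisk
          disk:
            bus: virtio
  volumes:
    - name: containerdisk
      containerDisk:             # ephemeral disk shipped as a container image
        image: quay.io/kubevirt/cirros-container-disk-demo
```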
B: For all of these CRDs, the key point is waiting for the objects to be in the Ready state, so kube-burner needs some logic inside that waits for the desired state, for example the Ready state, though it's configurable and can be other states as well. So the first extension is to handle these objects when running the test, and the second extension is to watch the objects' phases and conditions, to get the timestamps of those
B: transitions and report detailed latency, as it was already doing for pods, but now for the VMs and VMIs. For example, we still have the timestamp variable here, but the timestamp can now mean the VM creation or the VMI creation. If the template we are using for the test is a VM that creates a VMI, then the baseline timestamp will be the VM creation time. However, if the template creates a VMI directly, not a VM, then the baseline timestamp will be the VMI creation time. Kube-burner figures that out while it's watching the resources, and then it collects all the transitions: when the VMI is scheduled, the virt-controller creates a pod, and we get all the timestamps, the latencies from the VM creation and the VMI creation, all the VMI phases and the pod conditions. I say "pod conditions" and "VMI phases" because phase and condition are slightly different things in the CRDs, but for the VMI we collect both phases and conditions for the timestamps.
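Putting the extension together, a kube-burner test for VMIs could be configured along these lines; the measurement name, waitFor value, and template path reflect my reading of the extension described here, so treat them as illustrative.

```yaml
global:
  measurements:
    - name: vmiLatency           # report VM/VMI phase and pod condition latencies
jobs:
  - name: kubevirt-density       # hypothetical job name
    jobIterations: 100
    qps: 20
    burst: 40
    waitWhenFinished: true
    waitFor: ["VirtualMachineInstance"]  # wait for the VMIs to reach Ready
    objects:
      - objectTemplate: templates/vmi.yml
        replicas: 1
```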
B: So kube-burner has a metrics-collection mechanism which, as I mentioned before, is optional. It can be used, depending on the test you are doing, to collect Prometheus metrics, and it can dump the metrics to a file or index them directly into an external database, for example Elasticsearch.
B: That can be very helpful when you are running multiple tests and want to keep that data isolated in another database, but again, it's optional. You can also keep all the data in Prometheus, analyze your experiments there, and use kube-burner just to generate the load and to get the detailed latency that it collects from the client's perspective.
B: Okay, so from the kube-burner perspective, the third extension is a metrics profile that defines some KubeVirt control-plane metrics, plus many other relevant metrics for the Kubernetes control plane, to be collected and analyzed.
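A kube-burner metrics profile is a list of Prometheus queries with names for indexing. The queries below are illustrative examples of the kind of control-plane metrics mentioned, not the exact profile from the talk.

```yaml
# metrics-profile.yml (illustrative queries)
- query: irate(apiserver_request_total{verb=~"POST|PUT|PATCH|DELETE"}[2m])
  metricName: APIServerWriteRequestRate      # API server write-request rate
- query: sum(rate(rest_client_requests_total{pod=~"virt-controller.*"}[2m])) by (method)
  metricName: kubevirtRestClientRequestRate  # KubeVirt REST client request rate
```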
B: Here we have the latency breakdown. In the first figure, on the top left, we can see the VMI creation and where the biggest latency is in this experiment. I don't remember how many VMs I was creating, I can double-check that later; I think it was 100, maybe 100, sorry, I don't know exactly how many VMIs were being created here.
B: But it shows the worst-case scenario, the P99, and we can see that the pod scheduling took only 32 seconds, and the pod creation likewise; then comes the VMI Scheduling phase, when KubeVirt sees that the pod was scheduled; and then the pod initialization, which covers the virt-launcher pod and libvirt being initialized, creating the VMI domain and so on, takes five minutes in this case. The pod gets Ready a few seconds after that: the pod Initialized condition means the init containers have completed, then ContainersReady means all the containers have started and are running, and then we have the VMI reaching the Scheduled state and finally the VMI Ready.
B: So from these detailed timestamps we can understand where the latency is, and with that we can analyze the performance in more depth. Apart from that, we also have the Prometheus data I mentioned before, for example the KubeVirt REST client read-request count and write-request count, and we can analyze that information as well.
B: Okay, so, final considerations, just to summarize: we have extended kube-burner to support KubeVirt CRDs for performance analysis, in a way that it can wait for the ready conditions of the KubeVirt CRDs.
B: It can collect detailed latency information from the CRDs' phases and conditions, and it can also collect well-defined Prometheus metrics; there is a defined set of metrics that is very relevant for performance evaluation, and it's also possible to visualize those metrics, once they are indexed in Elasticsearch, using a Grafana dashboard. And just to add why it's interesting to use kube-burner: by using templates to define the objects that will be created, users can configure representative workloads with different configurations, for example creating VMIs with ephemeral disks, PVCs, or Multus, or, as we saw in the previous presentation, with network functions virtualization and so on. So with multiple templates we can easily define the workload and do the performance evaluation. As a spoiler alert for future work: the way kube-burner waits for the resources is a work in progress, and I will submit a PR for it. Kube-burner is already using watchers to track the latency, so we can avoid doing gets and instead just use the information from the watchers to wait for the condition of the object.
B: However, that's something we are still discussing. Beyond this first burst analysis, there is also something that Kubernetes itself does, namely the steady-state test, which keeps a constant number of operations per second, create, update, delete, what Kubernetes calls churn, cycling through these operations. Both burst tests and steady-state tests have their own goals and test different things, so they are both interesting, and the next step is to implement steady-state tests for KubeVirt. Okay, questions?
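A steady-state ("churn") job of the kind described might be sketched like this; the churn field names are hypothetical for this sketch and are not claimed to exist in kube-burner as of this talk.

```yaml
jobs:
  - name: steady-state
    jobIterations: 100
    churn: true          # keep creating/deleting after the initial burst
    churnCycles: 10      # number of create/delete cycles (hypothetical field)
    churnPercent: 10     # fraction of objects replaced per cycle (hypothetical)
    churnDuration: 1h    # total time to keep churning (hypothetical)
    churnDelay: 2m       # pause between cycles (hypothetical)
    objects:
      - objectTemplate: templates/vmi.yml
        replicas: 1
```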
B: Yes. So, actually, I didn't run that test myself, but someone else is running it, and, for example, when we run with PVCs there are more kube-apiserver requests, more requests to the KubeVirt API, introducing much more load on the KubeVirt control plane, and this impacts the performance. PVCs also add extra latency; for example, the VMI has conditions where it is waiting for the PVC, so when we have PVCs it...