From YouTube: SIG - Performance and scale 2023-04-27
Description
Meeting Notes:
https://docs.google.com/document/d/1d_b2o05FfBG37VwlC2Z1ZArnT9-_AEJoQTe7iKaQZ6I/edit#heading=h.tybh
A
Okay, we need... so we've got our performance periodic results. I mean, I haven't checked this, but I haven't been able to make a change recently, so I'm assuming it's still nothing.
A
So no change here, only... okay. And then for the other things, I think this is just follow-ups from last time, right?
C
Yeah, so I was able to make a bunch of progress on the other things. This PR is out, but I think there is some race in the logic where I don't see all three artifact directories; I just see only one, which is the last one. So I need to rework the tests here.
C
That's one side of things; this is the easier bit. If you go back to the doc, the harder part, I mean for me, because I'm not aware of it, is this next point. So we have the density test as well as the periodic performance jobs, right? The periodic performance job is configured in such a way that there is an explicit artifact directory here.
C
That's the harder part I was talking about. So it looks like there is an explicit artifact variable that Prow sends out; I'm not sure how it is meant to be consumed.
A
Okay, let's ask Lubo, since you know the most about this. Even for this one, I think we could maybe ask his opinion as well.
C
Yeah, I think once we have these two immediate tasks of exporting the audit output into separate directories done, then we can, you know, take on other things.
A
Okay, I was hoping we were gonna have Lubo tonight. Okay, so, sorry... [unclear] ...we'll get his attention there.
C
Yeah, so one thing, an update: the very first chart was missing in the last call; I added that metric, and it looks like over an eight-week period we are quite stable on VMI creation-to-running. Nothing surprising there; periodically you'll see one or two spikes every other day, but that's just an outlier. The one I was talking about is... if you can search for "update".
C
Yeah, that one. So I know it's stable, but it looks like we are making 10 update calls per virtual machine instance. It looks like this might be due to reconciling the status conditions or something like that. So: 10 update calls per VMI lifecycle, and two patch calls on the same resource per VMI lifecycle. That was something surprising for me. I don't know if you have consistently observed this in the past, or... I don't know what the threshold is. Yeah, I forget.
A
So I wanted to look at the thresholds, because...
A
It wouldn't be this one, right? It would be... yeah.
A
Exceeding the threshold that I thought we should be at... wait a second, so hold on. Three values... one is 944.
A
This is... so we set this actually to be high; like, we didn't think we'd actually run into this. We set them to be very high, at least until we could get more data to narrow it down. And this is exceeding it. So which test is this?
B
This is the expensive test, right: a hundred running VMs using a single instance type.
You
can
see
this
so
here's
here's
the
view.
Here's
the
VMI
crates
we're
at
600,
800..
Well,
I,
don't
know
why
I'm
looking
at
this,
you
have
it
graphed
so,
but
this
must
be
from
this.
Is
this:
this
must
be
from
the
instance
types
I'm
trying
to
understand
the
data
like
so
I
can.
So
maybe
this
is
what
so,
maybe
these
these
lower
dots,
yeah
844.
Okay,
maybe
that's
what
it
is.
That's
that's
really
what
yeah
I'm
confused.
C
On the VM... so if you click the second link in the Google Doc.
C
Can you check what date that periodic job is from, the one you are looking at with the 600 value? Yeah.
C
No, it should be, because I have... yeah, I've created like the last eight weeks of things. Yeah, is it? It should be right there in the far corner, yeah. Can you also check for... yeah? No, all right.
C
Yeah, let me... so if you can...
C
Yeah, so... is that... I remember I saw two failures in data processing. All right, if you don't find it here, then maybe it's good to check whether the job you're seeing is one of the failures.
C
Yeah, I don't think it's... definitely... okay, so the two values that I'm seeing in my JSON are 994 and 1166.
C
The latest is 624; after that we... I don't...
C
Sorry, okay, hold on... this one. It's the last... so I do, but I have eight-two, seven-two, so...
C
Stay on the green graph, yeah. So the first one is the overall test name, and then within this there are three Ginkgo tests.
C
I mean one job ID, yeah. That's why, if you click any one of them, it will be the same job ID; it's just shown separately here.
C
No, so those were 392; 624 is...
A
It's after the name of... it's after the thing, the date. This is the 100 VMI density test.
A
Yeah, it's after... oh, are you grepping? Oh, you're looking at the name and then grepping... you're not getting this correct.
A
Yeah, so, like, you can see right here: here's test two's name, and...
C
Hold on, that doesn't make sense. So is there any audit log output for the primer that happens at the very beginning?
C
Yeah, let's see, for this one, right? You can scroll a little bit down... yeah, on line 2230.
C
So some tests started there, so I just ignored this whole part. Is that for the VMs?
A
Yeah, this is... no, this is for the VMIs, this one. This whole test is the VMIs; it doesn't even do any VMs. Wait, I thought the other ones were all VMs.
C
See, okay, then I think what I misunderstood is that the first data point is for the primer, then the second one is for the VMI and the third one is for the VM, but...
A
Yeah, the primer is buried somewhere; I just forget if it's... so maybe it's here. That might be it, where it just gets run outside of Ginkgo. I forget... I remember... but yeah, this is the first one.
C
Yeah, I think we need to fix this bug as a next step.
A
Okay, yeah, that would be cool. Okay, let's see what that does then. So, instead of going through all of these, I think let's see what you find, unless there are other conclusions you want to talk about with these. So this one was weird: the two patch calls per VMI.
C
Yeah, so I think the conclusion after today's findings is that for both VMs through instance types and normal VMs, there are about 10 update calls and two patch calls made to the VMI.
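As a sketch, those per-object counts could be double-checked from the exported audit output along these lines. This assumes the standard Kubernetes audit JSON-lines event layout (`verb`, `objectRef.resource`, `objectRef.name`); the resource name and sample events are illustrative, not taken from the actual jobs.

```python
import json
from collections import Counter

def write_calls_per_object(audit_lines, resource="virtualmachineinstances"):
    """Tally update/patch calls per object name from Kubernetes audit
    events supplied as JSON lines (one event per line)."""
    counts = Counter()
    for line in audit_lines:
        event = json.loads(line)
        ref = event.get("objectRef") or {}
        if ref.get("resource") == resource and event.get("verb") in ("update", "patch"):
            counts[(ref.get("name"), event["verb"])] += 1
    return counts

# Two synthetic audit events for the same VMI:
events = [
    '{"verb": "update", "objectRef": {"resource": "virtualmachineinstances", "name": "vmi-1"}}',
    '{"verb": "patch", "objectRef": {"resource": "virtualmachineinstances", "name": "vmi-1"}}',
]
counts = write_calls_per_object(events)  # one update and one patch for "vmi-1"
```

Run over a real audit log, a per-name total of about 12 would match the 10-update, 2-patch observation above.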
A
Yeah, I think that's been the mystery about this API call. This actually might explain it: that the instance type and the VM are doing a lot more updates than just the plain VMI. Okay, that'd be interesting to model. That's kind of neat to see; that's pretty interesting. That would be a good cross-API relationship, and then it gets into the instance type as well.
C
Yeah, and then the next one I had... I think you already saw it in the grid: two of the jobs were failing. For one job it just looked like... yeah, for this one I think something went wrong.
C
Yeah, what was weird is that the first test ran, and from the second one it started failing. So yeah, halfway through, my test just broke on this. And then the second one, it looks like it started and then just never got to running.
C
Okay, yeah, I don't think there is anything abnormal, since it is not consistent, just some flakes. It's actually good to correlate it in this graph. So...
A
Okay, that sounds good. Okay, cool, this is really cool. All right, thanks, Eli. Yeah, this is good; I'm glad we got to talk through that. So let's see what shows up after we get this scraped in. This is actually really good; this will give us some real, I guess even cooler, data now. This makes a little more sense, especially with this one.
A
We should be able to cross all of the APIs and look at their impact on the API server, yeah. We can put a weight on each of them; that's interesting. I mean, like, you can see: hey, you're light, you're heavier, you're heaviest, whatever. You know, you can put a weight on each of them and how they...
A
...how they could affect scale. That's good, that's a really good step, because in the world of scale, right, when people want to ask us how many pods you can create, it's sort of like: okay, how many VMIs can you create, versus how many VMs can you create, versus how many instance types can you create? And from what you can tell, the number is going to be different.
C
Yeah, I mean, the way it will be different is that at some point each one of these APIs will be tough enough on the API server that the API server breaks. So yeah, I think you've worded it nicely: for each API we don't know that breaking point, and it could be different.
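The "weight per API" idea above could start as a back-of-the-envelope model like this. The VMI weight is just the call counts mentioned in this meeting; the extra per-VM writes are a placeholder assumption, not a measurement.

```python
# Write calls hitting the API server per object lifecycle.
# 12 = 10 updates + 2 patches per VMI, per the audit-log discussion;
# the extra VM-level writes are a placeholder until measured.
WRITE_WEIGHT = {
    "vmi": 12,
    "vm": 12 + 4,  # placeholder: VM object writes on top of its VMI's
}

def total_write_calls(objects):
    """Estimate total API-server write calls for a mix of objects,
    e.g. {"vmi": 100, "vm": 50}."""
    return sum(count * WRITE_WEIGHT[kind] for kind, count in objects.items())

print(total_write_calls({"vmi": 100}))            # 1200
print(total_write_calls({"vmi": 100, "vm": 50}))  # 2000
```

Refining those weights per API kind would make "how many can you create" answerable per resource, which is the point made above: each API has its own breaking point.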
A
Okay, cool, all right, that's good; let's continue on that path. That's some nice progress! Okay, so for the last topic: this is from KubeCon last week. I talked with the folks who run the Kubernetes SIG Scalability, and I told them about what we've been doing, and there was a lot of interest in having further discussions. I mentioned there that it would be nice to talk more about it in their SIG.
A
So I offered to take some of our discoveries and bring them to them and have a discussion: present on, you know, what we've done, talk about some of the things we've worked on, and, I think, bring some of our ideas to them. And particularly...
A
But, you know, this is something that has sort of been on our radar for a while, and I think we have ideas for how to do it. So talking about our ideas, and about how we can get some of those things through, would be really awesome. So the plan is: on May 11th we're going to present at their meeting.
A
Excuse me, the meetings are not very long, they're like 30 minutes, so we really need to cram the right amount of content into the time we have, leaving room for questions and other stuff. So we're gonna need to brainstorm a little bit on what we want to present. I was thinking something similar to what we did...
A
If we do something similar to what we did at KubeVirt Summit, where we talked about our tools, how we measure, what we have at our disposal for resources, and the important metrics that we use, I think those are the things that would at least give them an idea of how we're doing things, yeah.
C
Yeah, I was thinking about what would be a good integration point, right? So if we think about what is missing from Kubernetes that would be helpful to predict more of our scaling behavior... I think what you are getting at is how we are plotting these metrics across releases. If we can get some additional data points on the internal metrics, like how is the kubelet behaving, how are the API servers behaving, etc., etc. ...
C
...across releases, with similar tools; I mean, the tool does not matter, but if we can get data points on how these metrics are evolving across releases in Kubernetes, then, okay, I think what that will help us do is correlate those metrics with our observations.
A
Yeah, that's a good one. I think there's so much that we could do. The reason, though, because the meeting is short, is that we'll try to focus on maybe one or two things at most, and we can continue to attend the meeting (I think it's every two weeks or something like that) and continue to bring our ideas. This would just be the plan for the introduction on May 11th.
C
You know what that reminds me of, actually: there is a metric called the kubelet end-to-end pod start latency.
C
We could, as part of this density test, also collect that metric and have it in our graphs.
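Assuming the metric being referred to is the upstream kubelet's pod-start SLI histogram (the exact metric name below comes from upstream kubelet instrumentation and is not confirmed in the call), a PromQL query to put it on the same graphs might look like:

```promql
# 99th percentile of kubelet end-to-end pod start duration over 5m windows
histogram_quantile(0.99,
  sum by (le) (rate(kubelet_pod_start_sli_duration_seconds_bucket[5m]))
)
```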
C
The reason why I think it will make more sense to do it in SIG Scalability, for Kubernetes, is that they can be much closer to the changes that are going in and can make sense of it, right? If we do it here, we would be able to get some additional data points, but we would not be able to act on them, like: okay, what does this mean? Not the way we are able to act on the KubeVirt metrics. It's...
C
...it would be out of scope here. So, I'm just restating what you are saying, but two main thoughts. One: if it is helpful to expose those metrics in order to prove the point that this is really helpful, we could do that. And second: we can talk about all of these things and challenges there.
A
Yeah, I think... I mean, I think we keep talking about it. I think we'll have... you know, we can make one or two points, and, yeah, like I said, if there's lots of discussion, which I think there will be, we can always attend a few of their meetings and continue to discuss.
A
I think the best thing is that they're excited to hear about this, and I think for us it's gonna be a good learning opportunity. I also think there are things we can teach. I do think what we're doing is really interesting and unique in a lot of ways, just because we've had a very focused approach to a specific part, as a component.
A
That's, you know, like owning a slice of Kubernetes, where their focus has been on nodes and pods and stuff. So it'll be kind of interesting when we sort of clash these two worlds, and I think they're gonna learn some things and we're gonna learn some things. So yeah, that's what I think the goal would be when we collaborate on these things.
A
You know, we're a little farther down, or up the stack, I guess. Hearing our feedback, and then also getting theirs, I think is just going to make both solutions better. So yeah, overall really positive; I think we're going to get a lot out of this, so looking forward to it. So May 11th is the plan; we can brainstorm some more and put together what we want to do by that date.
C
Sounds good, yeah. Thanks for, you know, starting this; I think this will be a good direction.
A
You know, you mentioned for me to take a look at... I was asking him about how to measure, like, how can we get all the details out of how long it takes to get a pod started? How can we get more out of it? What are the metrics that you have in Prometheus? We had some gaps and we needed to develop some of our own, and what else is there out there? And so you sent this over.
A
So we can check this out. I mean, we don't have it right now, because we are still early, on 1.25 for our job, but it's something we can look at adding soon.
C
Yeah, there is an interesting choice in the name of this metric. Do you know what the "SLI" in that metric stands for?
A
He talked about it in his talk and I forgot.
A
I'll need to find a link to that talk. He went through all the details in his talk about what they're doing here with...
A
All right, that's all I had for today. Does...
B
Anyway, this isn't... where's the Prometheus part of this? Okay.
A
There's also, if you want, they convert...
A
Yeah, here we go, so it's got it here. So, if you want, you can also put this into Grafana, and you can even see...
C
I've also put up the metrics-client link, the one we were showing earlier, so that these metrics are being gathered.
C
I think so. During KubeCon we might have seen those fake VMIs and the work on that being picked up now, so, cool. Yeah, that's another update I wanted to give you. I don't have any concrete plans, but it's just something...
A
Yeah, that'll be another topic where we're gonna find interesting collaboration with the Kubernetes SIG Scalability, because they have Kubemark and they've got all that stuff, and I think they have a lot of opinions as to what's good and bad about it. So it would be interesting to see; I'm sure they can give us all the secrets as to what they like and don't like about it, and sort of, well, you know, what the treachery is in going on this journey, and... yeah, so.
C
We did some experiments last week, and we had presented this in the other call. What we found out is that the fake objects really just beef up the number of objects; they do not create all the watch calls or the list calls that a regular kubelet would create, right? And not only a regular kubelet: along with the kubelet there are a lot of daemon sets running on the node itself, so all of those load generators are missing. So in...
C
Yes, so we have to be very selective about how we use that tool. That's one thing as of now, and the second thing is...
That's
one
thing
as
of
now
and
the
second
thing
is:
we
can
do
some
sort
of
approximation
on
what
is
the
number
of
like
what
is
a
rough
amount
of
load
generated
by
a
running
cubelet.
That
includes
all
of
the
running
services
and
running
cubelet
on
top
right
and
include
those
load,
generators
in
the
the
Quark
utility.
C
So that will get us a little bit closer to the actual state of the...
A
...world. You know, now I'm thinking about this project. I like the idea of Kubernetes with no kubelet, but there's the problem, like you just said here with the fan-out: we need the kubelet. What we don't need is the actual container. Kubernetes without containers is what we need, right? It's like one more layer down to get that full effect, correct?
C
Actually, even without the container it would still be the same problem, right? Because I think the amount of load that the kubelet generates on the API server would be one part of it, but then the running containers that are infrastructure services, for example, in the case of KubeVirt, virt-handler, which is actually a running container on the node, generate another set of load. Then, say, Multus or some kind of network plugin generates load; that is a third load generator. Storage...
C
...say you have some kind of CSI that has a per-node component; that's a fourth load generator. So I think, even without the containers, there are all these load generators, and what this fake tool is missing is the context of these load generators. I mean, you need to build some kind of notion of a per-node load generator into this tool. That could get us a little bit closer to what a real node generates on the API server.
A
So that's kind of what I meant, or, what I meant was: we'd still have the pod, we'd still have the containers, the load-generating containers, but we wouldn't have the workload, I guess.
A
Like, imagine this: imagine if the CRI layer had the concept of a fake container, where all it did is basically create the pause container, you know, just that, and whenever you created a VM, no matter what you did, it created a pause container. That would be what I'm talking about. So you'd have virt-handler there and everything, and all those pods running; it just creates a pause container.
A
Yeah, actually, then we don't have to do much, right? Yeah, we just create a bunch of pause containers. Yeah, maybe that would work. Okay, interesting.
C
Yeah, I think we should do some more brainstorming around that, but that is the TL;DR conclusion of what we discussed this week, okay, while KubeCon was going on, yeah.
A
Okay, sounds good, that's cool, yeah. Let's continue to discuss that; I think that's just an area where, you know, we're going to get a lot more interesting ideas from the Kubernetes SIG Scalability. So let's see what we can find out, more brainstorming, sorry, yeah, okay, cool. All right, guys, if there's nothing else, we'll call it a meeting. Thank you, thanks for coming this week; we'll talk to you guys later. Thanks, everyone.