From YouTube: SIG - Performance and scale 2021-05-12
A
Great, okay, all right. So welcome to the second SIG-scale meeting. Add your name to the attendees list in the doc if you can. So I'm going to recap first. Real quick: can you post the doc?
A
Yeah, you got it. Give me a second.
A
Here, that should do it. Okay, all right, so to kick things off: I'm going to recap very briefly what we talked about last week. Since it wasn't being recorded, I think we'll just go over it again, very briefly. So last week I introduced the concept of what SIG-scale is: what it's trying to do, and the goals we want to accomplish.
A
I mainly highlighted what I talk about at the top of this document: some of the goals, the scope, and the different topics that we can explore and investigate. So, to recap the scope: we want to define and drive the scalability and performance goals of KubeVirt, and we want to document, test, and measure the scalability and performance of KubeVirt across releases.
A
So, to kick off this meeting, the first step I want to take in this direction is the theme of establishing a baseline. To measure anything, we need to know where we are. We need to understand where we're at today and where we can go, since we have these goals for where we can go with perf.
A
So let's see what our delta is. There are two things that I can see pushing toward this goal, at least in the immediate term, and one of them we're going to look at today. The first is a tool to measure and report performance.
A
There's a mailing list thread, recently posted, that I just wanted to call attention to, and it's looking at doing exactly this: creating a tool we can use to measure performance and report it upstream. We also had some folks last meeting who talked a little bit about how they wanted to add some metrics around this and add them to CI.
A
We should definitely have that discussion in the mailing list, and it's something we can call more attention to next meeting if needed, but since it only started yesterday, I figured I'd just call attention to it and we can discuss it in the mailing list for now. So today, what I wanted to do with everyone here is a little bit of an exercise in how we can establish a baseline.
A
I wanted to look at how we could build a sequence diagram for what happens when you create a VirtualMachineInstance. There are a lot of things that happen and a lot of code paths it goes through, and since we're talking about trying to figure out the bottlenecks for perf and scale, it's good to have at least a diagram that we can all reference when we're talking about a concept or an area of the code.
A
When Ryan is talking about a bottleneck in some area, we can go look at our diagram and say, okay, here's roughly where he's talking about, and we can get a sense of what's going on in that area. So I think this is a way we can streamline our communication and at least get a better understanding of what's going on. I wanted to spend a few minutes on that, and then we can also talk about any open items.
A
If you want to participate in looking through the code, or if you want to help do some work, or edit the diagram, open up a terminal; or if you just want to follow along, that's great. Pretty much what I'm asking here is: there's a link here for the sequence diagram. Oh, let me move my tab here; there we go.
A
What we're going to do is look at the code, try to find some paths like I have here, and some function calls, and we're going to put some numbers to them and talk a little bit about what they're doing, just a one-line phrase or something. I did the API part ahead of time.
A
I figured we'd skip that and go right to the controller and the handler and see how far we can get. So let me just walk through what I did here and how I put these boxes in. Also, if you want to click the link, you can join that draw.io session yourself and actually edit it; it's a little bit slow.
A
So if we're editing together and I move this box, it'll take a few seconds for you to see it, but it'll eventually show up. What I did here, and the criterion for these boxes, is: whenever I see something happening. Okay, we have a call from the user to create a VMI; it goes to the API server.
A
Our second step is that a mutating webhook comes into play. What's the location of that? It starts right here, so we have a starting point for where it is in the code, and a function that gets called. And in that function there's a bunch of things that are going to happen: we apply presets, and we set some default values.
A
Things you probably don't think about, like the clock for the VMI, are going to get set automatically in a place like this, along with other settings that you probably don't want to have to specify every single time you create a VMI.
A
So this is where that happens: all these presets get applied, we set some defaults, and then we return all the way back with a valid VMI. This isn't necessarily every step, but it shows the gist of what happens. And where I have three boxes here, what I'm saying is that multiple values are being set: maybe we've set the clock.
A
Maybe we've set something else too; there's a whole list of them, and you'll see that in the code. Here's another example: we validate some fields. We want to validate all the VirtualMachineInstance fields, so I put three boxes here because we're going to validate a bunch of things. So step three is the validating webhook, and then we have our object creation.
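The defaulting-then-validating sequence described above (a mutating webhook fills in presets and defaults the user didn't specify, then a validating webhook checks every field before the object is persisted) can be sketched roughly like this. This is a minimal illustration only: the struct, field names, and default values are hypothetical stand-ins, not KubeVirt's actual API types or defaults.

```go
package main

import (
	"errors"
	"fmt"
)

// VMISpec is a simplified stand-in for a VirtualMachineInstance spec.
type VMISpec struct {
	Clock    string // defaulted if empty, e.g. "utc"
	CPUCores int    // must be positive after defaulting
}

// mutate plays the role of the mutating webhook: it fills in
// values the user usually doesn't set by hand.
func mutate(spec *VMISpec) {
	if spec.Clock == "" {
		spec.Clock = "utc" // hypothetical default
	}
	if spec.CPUCores == 0 {
		spec.CPUCores = 1
	}
}

// validate plays the role of the validating webhook: it checks the
// fields and rejects the object before it would be persisted.
func validate(spec VMISpec) error {
	if spec.CPUCores <= 0 {
		return errors.New("cpu cores must be positive")
	}
	if spec.Clock == "" {
		return errors.New("clock must be set")
	}
	return nil
}

func main() {
	spec := VMISpec{}                      // user submits an almost-empty spec
	mutate(&spec)                          // step 2: mutating webhook
	if err := validate(spec); err != nil { // step 3: validating webhook
		fmt.Println("rejected:", err)
		return
	}
	fmt.Println("accepted:", spec.Clock, spec.CPUCores) // step 4: object creation
}
```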
A
So this is where I wanted to start, the first place we go when we create this VMI object. So let me move to my terminal here. What I'm going to do... let me see if I can split into the...
A
We're pinging Ryan on our end.
A
Oh, I'm back. Okay, I lost my internet connection. Yeah, thank you. I lost my internet connection; I don't know what happened. I'm tethering from my phone right now. Well, let's see how this works; let's see if I can even share here. Give me one second. I don't know what's happening; I'm happy to restart my home internet.
A
Okay, let's see. While my home internet reboots, let me try to share my screen again.
A
Okay, I can't get to my home network right now, so I'm not going to be able to use my terminal, I don't think, unless I have KubeVirt locally. I don't know. Oh, okay, so...
A
Oh, we can try this, see if this works. Hold on, let me go back to the diagram. Okay, so I don't know where I got cut off, but the first step we wanted to go through is in the controller package, watch/vmi.go. If anyone can shout out what the next step is here, I'll try to find it on my end at the same time, but if you all have your terminals open, see if you can find it.
B
So what happens is we're watching the VMIs. That's going to queue a VMI onto the work queue, which is then processed by the sync function.
B
Is that a call, or is it just an if check? It's an if check; let me see exactly how... yeah, it's more of an if check right now. So, just a word of caution here: I have it in my backlog to begin simplifying some of these code paths, which is going to involve taking these giant functions we have, with all this branching logic, and trying to make smaller functions out of them; so bear that in mind for anything we have in this diagram.
C
And then execute calls sync once its conditions are satisfied, and sync creates the pod if it needs to.
A
Yeah. Well, does this make sense? It doesn't need to be super precise. I just want to know: if I want to understand how the launcher pod gets created when I create this VMI, I can see it right here. I just want a general gist of what happens, and I think this covers it.
C
We might want to include execute, because it calls updateStatus, which is what drives the updates through the different phases until we get to Scheduled. Okay.
C
Execute is the work-queue worker function, so it's execute that calls sync; it'll be execute that receives...
C
That's right, that's what I was saying: execute ultimately calls updateStatus, which drives all the status updates and watches.
D
And also, before creating a pod, the virt-controller uses the pod expectations mechanism to track creates and validate expectations.
B
Yeah, so that's being done as a kind of ref counting, as you're probably familiar with: it's ref counting to make sure that we don't call sync until we've observed the change that's already occurred. So if we create a pod, we're never going to sync that VMI again until our informer has detected it; basically, until the round trip has occurred: we've both created the pod and been informed that the creation stuck, and the API server has told us that it exists.
B
If we didn't do that, for example, this execute function could run and not find the pod if it executed before our informer caught up, and then we could create multiple pods for the VMI.
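The ref-counting scheme described here can be sketched as follows. This is a minimal illustration of the idea only, not KubeVirt's actual expectations implementation (which follows the client-go controller expectations pattern); the type and method names are stand-ins.

```go
package main

import "fmt"

// expectations counts creations we have issued but not yet observed
// coming back through the informer.
type expectations struct {
	pending map[string]int // VMI key -> outstanding creations
}

func newExpectations() *expectations {
	return &expectations{pending: map[string]int{}}
}

// ExpectCreations is called just before we create pods for a VMI key.
func (e *expectations) ExpectCreations(key string, n int) {
	e.pending[key] += n
}

// CreationObserved is called when the informer delivers the created
// pod back to us, completing the round trip.
func (e *expectations) CreationObserved(key string) {
	if e.pending[key] > 0 {
		e.pending[key]--
	}
}

// SatisfiedExpectations gates sync: we refuse to act on the VMI again
// until the API server has shown us every pod we created.
func (e *expectations) SatisfiedExpectations(key string) bool {
	return e.pending[key] == 0
}

func main() {
	exp := newExpectations()
	key := "default/testvmi"

	exp.ExpectCreations(key, 1)                 // about to create the launcher pod
	fmt.Println(exp.SatisfiedExpectations(key)) // false: don't sync again yet

	exp.CreationObserved(key)                   // informer saw the pod
	fmt.Println(exp.SatisfiedExpectations(key)) // true: safe to sync again
}
```

Without this gate, a second pass through execute that races the informer would see "no pod" and create a duplicate, which is exactly the failure mode described above.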
D
The expectations are stored in another cache store, right? So there should be a different synchronization path alongside the informers, the VMI informers.
A
Well, I understand some of the discussion, but before we dive into how we should look at this, I kind of just want to get the high-level path first, and then we can expand, because I can see what you're saying: there are some important details in each of these, and I could see it exploding out into some of them.
A
We could even do that if we want to, but I want to, if we can, just...
A
...boil it down a little bit, or bring it up a level a little bit, and then we can expand these out even more; I can just copy and paste some things out. So the gist here, as I'm getting it, is that we go into this vmi.go, and we're going to run execute.
A
That starts our processing loop: we've noticed a VMI, so we're going to process it. So then we run execute; execute is going to first call sync; sync is going to notice that the pod doesn't exist, so we're going to create it. So now we have a pod, our virt-launcher pod. So what happens now? Are we looping somewhere here? We're watching, because we're going to watch this state, and that's going to happen continuously now?
B
Everything: we're watching VMIs, we're watching pods, we're watching PVCs, all kinds of stuff, but the state we're specifically waiting for is for that virt-launcher pod to come online.
C
I would say that execute is triggered both for a VMI update and also for a pod: we look up the controller associated with the VMI, then tickle the same key and go through the same execute loop. So for every update event on the pod, we'll file a VMI event, execute, and go through that same execute path, and that's how we monitor as the pod makes progress.
D
I think it's another watch on the pod. So once the pod is created, the event handler will add this pod into the queue, the worker queue of the controller, so the controller will pick up the key from the queue and run execute again in the loop, right? Okay.
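The enqueue-and-reprocess behavior described above, where pod events and VMI events both funnel the same namespace/name key back into the controller's work queue, might look roughly like this. It's a simplified sketch using a plain channel in place of client-go's rate-limited workqueue; the names are illustrative.

```go
package main

import "fmt"

// drain is the controller's worker loop: it pulls keys off the queue
// and would call execute(key) for each one; here it just counts passes.
func drain(queue chan string) int {
	executions := 0
	for key := range queue {
		_ = key // a real controller would call execute(key) here
		executions++
	}
	return executions
}

func main() {
	// Both VMI events and pod events enqueue the same namespace/name
	// key, so one execute loop handles all of them.
	queue := make(chan string, 10)
	enqueue := func(key string) { queue <- key }

	enqueue("default/testvmi") // VMI add event
	enqueue("default/testvmi") // launcher pod created: owner key re-enqueued
	enqueue("default/testvmi") // pod phase changed: re-enqueued again
	close(queue)

	fmt.Println("executed", drain(queue), "times")
}
```

Each pod update thus re-runs the same reconcile path for its owning VMI, which is how the controller "watches" the launcher pod come online without a dedicated loop per object.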
A
Yeah, so then I'm just trying to think of how I can illustrate that. We have our informer, and we're picking up events from it.
A
So this is where it happens. I could just call it "watch for events": watch for a VMI event, watch for a pod event, and then this would... so this is probably reconcile, then, right? That's what we're doing here. Or maybe sync is reconcile.
A
Okay, so we go to execute. So we have two things that can trigger a reconcile, and execute creates pods. Okay, so we have events that are being processed. So what else happens in here? In execute we have sync; what other detail could we add? What I'm trying to get to is: you said we hand the pod over to the virt-handler, so what else am I missing? This virt-launcher pod has to get to a certain state, and we're watching for that state.
B
Sync is the thing that's going to create the pod. We're going to observe all these things that are going on, the pod and the VMI and all that, and updateStatus is what ultimately, I believe, does the handoff. So once all the right conditions are met to hand off the VMI to virt-handler, that's where that part is.
B
Okay, the way most of our controllers are built is that there are two parts to them. Execute is going to have an action part, the sync part, where we're actually performing an action, and then an observing part, where we observe the results of those actions, which is the updateStatus part; that's where we observe the cluster state and write the status.
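That two-part controller shape, an action step followed by an observation step, can be sketched like this. The struct and phase strings are simplified stand-ins for illustration; the real functions take informer stores and API clients.

```go
package main

import "fmt"

// state is a toy stand-in for what the controller sees in its caches.
type state struct {
	podExists bool
	phase     string
}

// sync is the action half of execute: it changes the cluster,
// here by creating the launcher pod if it is missing.
func sync(s *state) {
	if !s.podExists {
		s.podExists = true // stands in for "create the virt-launcher pod"
	}
}

// updateStatus is the observation half: it looks at the cluster state
// and writes what it sees back into the VMI status.
func updateStatus(s *state) {
	if s.podExists && s.phase == "" {
		s.phase = "Scheduling"
	}
}

// execute strings the two together: one action pass and one
// observation pass per work-queue key.
func execute(s *state) {
	sync(s)
	updateStatus(s)
}

func main() {
	s := &state{}
	execute(s)
	fmt.Println(s.podExists, s.phase)
}
```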
A
Okay, so I'm going to write "observe: not exists". Yeah, I like that characterization. So now we're going to do the action. I don't know, "hand off to handler"? So our state is at this point.
C
Well, I guess it's the thing that's driving us through the phases, from unset to Pending to Scheduling to Scheduled and so on. So it's also looking for failure to launch, effectively: if you don't get to the expected state, or if something external deletes the virt-launcher pod or something, it picks all those things up. So in addition to driving toward the Scheduled state, so that virt-handler can take over, it's also looking to mop up the failure cases.
A
Okay, so we have unset, there's Scheduling; let me illustrate (whoops) that we're going through a bunch of phases here.
A
Scheduling, and then... I missed Pending, didn't I? Pending.
A
Pending, Scheduling, and then Running. Scheduled; Scheduled first, okay.
C
That is updateStatus, all of that. And I think that's possibly what the other speaker (I'm not sure who it was) was saying could be broken up into some smaller functions, because it's giant at the moment.
A
And we're just doing some work here, okay, so we're setting state. So what I'll do is come back and fix these arrows; they all need to go out to the virt-launcher pod, so I'll change that after. Okay, so we got to Scheduled. Okay, so you said now we're at the handoff point, the moment we set it to Running. So you said this updateStatus is not the one that sets it to Running; someone else does, and the virt-handler is the one that does it. So what's the...
B
Virt-handler doesn't observe all VMIs. The handoff is specifically when we set the label on the VMI, the node-name label. For that node-name label, we're using an informer on the handler that only looks for VMIs whose node-name label matches its specific node. So the handoff, the point where virt-handler will begin processing a VMI, is when we set that label on it, and that happens after the VMI is in the Scheduling condition.
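The handoff rule described here, where a given virt-handler only sees VMIs carrying a node-name label equal to its own node, amounts to a label filter on the handler's informer. A rough sketch; the label key below is illustrative and may not match KubeVirt's exact label key.

```go
package main

import "fmt"

type vmi struct {
	name   string
	labels map[string]string
}

// nodeNameLabel is an assumed label key for illustration.
const nodeNameLabel = "kubevirt.io/nodeName"

// relevantToNode mimics the handler informer's filter: a handler
// running on `node` only processes VMIs whose node-name label matches.
func relevantToNode(v vmi, node string) bool {
	return v.labels[nodeNameLabel] == node
}

func main() {
	unscheduled := vmi{name: "a", labels: map[string]string{}}
	scheduled := vmi{name: "b", labels: map[string]string{nodeNameLabel: "node1"}}

	// Before the controller sets the label, every handler ignores the VMI.
	fmt.Println(relevantToNode(unscheduled, "node1"))
	// Once the label is set during Scheduling, node1's handler picks it up,
	// and handlers on other nodes still ignore it.
	fmt.Println(relevantToNode(scheduled, "node1"))
	fmt.Println(relevantToNode(scheduled, "node2"))
}
```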
C
The VMI, because virt-handler has an informer on the VMIs that have its node name on them.
C
Oops. And in addition to adding the label, we fill in the status nodeName field, so yeah, that's how you know it's transitioned at that point.
A
And then, what is it you said? Something on the status?
A
Okay, okay, so we're watching for... okay, cool, that looks good. So we're through the controller, then; okay, that looks good. So we're doing some watching, or reconciling; we're syncing; we're doing some observation; and then we look to hand off from our controller by processing each of the phases; and then we label it, we say we're going to be on this node, and now virt-handler picks us up. So we watch a pod event on a specific node. So now it's probably the same thing again, right?
B
This is where things get complicated. It's another execute function, but there are nested and nested versions of this execute. I think, to boil it down in the simplest terms, what it does is reach out to the VMI pod through an IPC connection on the first execute, and it's going to call a SyncVirtualMachine API that's executed within the virt-launcher pod, which then calls libvirt to start the domain.
B
So virt-handler, yeah, this is where it gets icky. Virt-handler is going to hand off the VMI spec, exactly the way it sees it, to virt-launcher, and virt-launcher does a transform on that VMI spec: it converts the VMI spec to the domain XML, which is ultimately posted to libvirt.
A
Okay, and this SyncVirtualMachine, so this is executed in the virt-launcher?
A
All right, now we have our entry point into here.
B
Okay, yeah, and that's going to be on the... sorry, the server side. SyncVirtualMachine, really, that's just a server call: there's a server running in virt-launcher, so virt-handler is a client to the virt-launcher IPC server, and it's listening for these connections.
B
The SyncVirtualMachine function on the back end; let me see what it's actually doing.
B
Yeah, the SyncVMI, that's really where it all starts, as far as the entry point for converting the VMI spec to domain XML, posting it, and all that.
A
Okay, then it's SyncVMI, okay, and that's where we get into the complex stuff.
A
...is going to be brought over this wire. Okay, then we do the sync. How could we simplify some of what happens in the launcher here?
B
So I would say there's a conversion between the VMI spec and the domain XML. I'll just give the whole flow real quick: there's conversion; there's going to be a set of actions that occur to generate local data, so we're talking about things like cloud-init disks; and then, after all those preconditions are met, so everything's set up locally that needs to be, and the conversion's done with the domain XML, we're going to post the domain and start it in libvirt.
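The launcher-side flow just laid out (convert the spec, generate local data such as cloud-init disks, then post the domain to libvirt and start it) could be sketched as a pipeline. This is purely illustrative: the real conversion to libvirt domain XML is far more involved, and all the function and type names below are stand-ins.

```go
package main

import "fmt"

type vmiSpec struct{ Name string }
type domainXML string

// convert transforms the VMI spec into libvirt domain XML.
func convert(spec vmiSpec) domainXML {
	return domainXML("<domain><name>" + spec.Name + "</name></domain>")
}

// generateLocalData stands in for preparing cloud-init disks and other
// node-local artifacts the domain needs before it can start.
func generateLocalData(spec vmiSpec) []string {
	return []string{spec.Name + "/cloud-init.iso"}
}

// defineAndStart stands in for posting the XML to libvirt and
// starting the domain.
func defineAndStart(xml domainXML) string {
	return "running"
}

// syncVMI wires the steps in the order described: local data first
// (the preconditions), then conversion, then define and start.
func syncVMI(spec vmiSpec) string {
	disks := generateLocalData(spec)
	_ = disks // in reality these paths feed into the domain definition
	xml := convert(spec)
	return defineAndStart(xml)
}

func main() {
	fmt.Println(syncVMI(vmiSpec{Name: "testvmi"}))
}
```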
B
I'm not sure "validates" is the right word. I would say convert, then generate local data, whatever that means (we can give some examples, but there are a lot), and then post the domain XML to libvirt, or whatever word we want to use for that.
C
And then, I looked this up the other day, but what feeds the domain informer?
B
There's another communication channel between virt-launcher and virt-handler, called the event channel. Libvirt has the ability to watch for events, and every time one of those events gets popped, we send it back to virt-handler. So that's the domain informer part: after we post the domain XML to libvirt and the domain gets started and everything, we're getting all these events from libvirt, which ultimately make their way back to virt-handler, and virt-handler is watching those just like a normal informer.
B
It is a Unix socket, and in the future we'd like there to be a single Unix socket. There's some history here: we didn't use gRPC to begin with for the virt-handler to virt-launcher client, and we didn't have the ability to long-poll and send events back, so we had two channels when we first architected this. Now that we have that ability (there is a gRPC way of doing this), we'd like to reduce it to a single connection.
A
Okay, cool. So yeah, we've posted to libvirt, events happen, and so we can figure out when things are running. So now we've gone back to the domain informer, so now the OS is booted, right? Back to that earlier question: where are we going to say we change the VMI to Running? Is it after this event comes back over the wire to the domain informer, saying the OS is booted?
A
And then the handler goes and sets the VMI to Running.
C
I don't know this for sure, but I thought Running just meant the domain was created successfully; it hasn't necessarily completed boot, or even successfully booted. It's just running. Okay.
C
And then I know the domain informer, because it's getting these events from libvirt, is where we get the additional events for when, say, an interface gets an IP address and those various things. I'm not sure if it's the guest agent at that stage that's feeding those, and then virt-handler adds those to the VMI status as it learns them during the boot of the guest.
B
The guest agent, it's running in the guest OS; sorry, I don't know quite how to describe it, but it's not required for any of this to work. So you can get to a Running state and all that; the guest agent is supplemental. So if it's there, great: we're going to get some more information that gives us insights into when the guest has actually launched, and in the future we have a PR, for example, where we can do probes, guest-agent probes.
B
That the VMI is "actually running" is just the Running state, so it's not necessarily Ready, depending on what probes we're running. Running is when the domain informer reports that the domain is running. So it translates directly to what libvirt is telling us about the domain from an external point of view: not what's happening within the guest, but that the qemu process is online. That's pretty much it.
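So the Running phase is a direct translation of libvirt's external view of the domain. A deliberately simplified sketch of that mapping (the real code handles many more libvirt states and failure transitions, which this illustration does not attempt):

```go
package main

import "fmt"

// phaseForDomainState maps libvirt's external view of the domain
// (really, whether the qemu process is online) to a VMI phase.
// "Running" here only means the process is up, not that the guest OS
// has finished booting.
func phaseForDomainState(domainState string) string {
	if domainState == "running" {
		return "Running"
	}
	// Simplified: domain defined but not (yet) running.
	return "Scheduled"
}

func main() {
	fmt.Println(phaseForDomainState("running"))
	fmt.Println(phaseForDomainState("paused"))
}
```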
A
Okay, cool, and then...
A
Okay, that looks pretty good. So yeah, now we're in Running, and that's pretty much what we do. Well, there's watching other stuff, but this at least gets us through create. Okay, that looks pretty good. And there's even some more info in here: I heard that the way we do the informers is another thing we could dive into some more. But okay, this looks pretty good. Nice, okay.
A
So what I'll do is clean this up a little bit, just to make the arrows a bit more intuitive, but this gives me the information I need. I'll take this and post it as an image, and we can publish it in the GitHub repo. That way we can get an idea of what we have right now, and then the informer side of things we could even explore further.
A
We could take another meeting for that if you want to, because that's a whole other area. Okay, so... oh, actually, before we stop sharing: were there any open items? It looks like no one had anything they wanted to add. We're pretty much at time, but we'll just take a second: does anyone have anything they want to bring up before we close this week's meeting?
B
Great, I'm really excited about it. I would like to hear more about it, and I think that's something I would like to collaborate on, and I think there are other people across other companies that are interested in it as well. It's been a topic that's come up several times, and the idea of setting a baseline, I think that's great as well.
B
I'm curious: did we look at... what was that tool, kubemark? Is that right, Ryan?
B
Has that been investigated, like the equivalent of what that might look like for KubeVirt? To be clear, I don't think that solves the problem of establishing a baseline with a real performance tool that's going to work in a real cluster or anything like that, but it's something good we could run in CI, perhaps.
A
Yeah, I agree. That'll be something we can look at doing when we eventually try to do horizontal scaling; we just don't have the capacity for it right now. That was something I was interested in, but I haven't looked at it yet to get a gauge. Hopefully in the next week or two, when I do, I'll post what I find on the mailing list, and we can talk about it at the next meeting, just to get an idea of what is and isn't possible, and whether or not to use that tool.
A
Cool, okay, yeah. Well, again, like David mentioned, we can collaborate on that thread; let's definitely do that, and if we want a design document, that'd be great: let's make one, and I'd love to keep pushing that forward. And then, like I said, I'll post the sequence diagram on the mailing list so everyone is aware of it as well. So thank you, everyone, for coming and participating. This was really awesome; we got a lot done. So thank you very much.
B
We'll see you in... yeah.
A
Thanks, everybody. The next meeting will be in two weeks, on Thursday, at the same time, so we'll see you all in two weeks. And on the mailing list, just as a reminder: if you want to bring up topics for scale, I've been using the sig-scale header as a way to draw attention to them, if that makes sense, and we can use it as the way we talk about these topics.
A
So if you do have something, use that header, and we can filter through some of the things that we want to talk about and use it to focus on some of the topics that we're bringing up today. And then also on kubevirt-dev (I think most people are there), if we want to talk more in real time, we can use the Slack there. Okay, have a good day, everybody. Thank you.