From YouTube: KubeVirt Community Meeting 2021-05-19
Description
Meeting minutes: https://docs.google.com/document/d/1kyhpWlEPzZtQJSjJlAqhPcn3t0Mt_o0amhpuNPGs1Ls/edit#
A: Okay, hello everybody, welcome to the weekly KubeVirt meeting. I'm your host, Chris Caligari, and we hold this meeting to discuss issues relating to the KubeVirt project. Let me share my screen so we can all see the meeting notes.
A: First item: we usually fill in our names under the attendees, so if you could do that, I'd appreciate it. We always like to see who is attending, and we have some agenda items. So how about we wait one minute first for everybody to fill in their agenda items.
A: And while we're waiting, let's see, do we have anybody new with us this week who would like to say hello?

A: Nobody new, then; let's proceed into the agenda. David Vossel has the first item.
B: Go ahead, David. Hey, okay, so I want to gain deeper insight into our KubeVirt controllers, so some sort of profiling. The kinds of things I'm looking for are custom things; they aren't things you would typically think about with a normal CPU or memory profile. So I'm thinking, I want to understand, for example, what keys are popped for which controllers within virt-controller, how often, and perhaps even how long the execution takes for those keys.
B: I'm thinking about what APIs are called on the Kubernetes API server. Are we calling a bunch of GET requests in our virt-controller for some reason, and if so, I'd like to see why. And then just the typical things, like where we spend the most time; that would be the normal CPU and memory profiling, just to determine a heat map and things like that. So I'm looking at this and I'm curious.
B: I'm curious if anyone else has thought about this yet and has any ideas on how they would want a workflow for this, if that makes sense. Anyway, that's the topic. Does anyone have any thoughts? I have my thoughts, but go ahead. Hello? Hello, can you hear me? I can't.
C: Sorry, I just didn't know if I was talking to myself; I apologize. I was wondering, are we talking about just hooks, or are we talking about something that's built into the system that we enable or disable at runtime?
B: Something that's enabled and disabled at runtime. It would be something that runs in the background and is tied into all of our controllers and how we do requests. I've done some investigation: we can wrap our HTTP round-trip logic in a way where I can see what's going out and what's coming in, and I can trace that if I want. So I could have these wrappers everywhere and then just have something I can dynamically flip on.
B: "Let's start actually profiling this" when we want it. Same thing when I was thinking about workqueues and things like that: I could wrap the workqueue and then begin understanding what keys are popped and how often, and again have that as something we trace dynamically, if we want it or not. So it wouldn't necessarily be a performance penalty just to have it all wrapped, but there would be a performance penalty once we actually decided to start profiling.
C: Cool. On keys popped: one thing that came to mind when I read your initial email was how often we see in our logs that an update couldn't occur because of the wrong resource version, and so on.

B: Exactly, yes.
D: So my thoughts on this are that a lot of such things are already collected with Prometheus, and that matches previous experience where I used Hystrix a lot.
D: That's normally my general approach to this: first defining what's an interesting question to ask, say flagging the collisions, then checking: is there already a metric for that? Can we interpret it, can we distill it from the already collected metrics? And then add something like this on top. If it gets too chatty, something like a debug level may make sense, so we don't always collect them.
B: In general, with the idea of exporting these metrics with Prometheus, I wasn't convinced they'd provide the fidelity I was looking for. I can understand it if you want to monitor something from an operations perspective and gain some insight into how it's performing; Prometheus is pretty great over a long period of time.
B: I want to run a very specific set of tests, then stop the profile and export exactly what occurred over that time frame. How would I do that with Prometheus?
B: On fidelity: you're only getting the maximum fidelity that Prometheus scrapes at. Let's say it scrapes every minute, or every 30 seconds, or something like that; you're not going to get finer than that.
B: The value in Prometheus is in something kind of long term, and I even said this in the email: there may be value in exporting a set of metrics, maybe just a subset, depending on how detailed this debug information is. I don't see that as a replacement for what I'm looking for, though; at least it's not obvious to me.
D: Yeah. For me the thing is that normally you get a lot of data there, and if you try to debug specific scenarios, I've personally found it more interesting to try to reproduce them locally, with unit tests or so on. For the other part, things like how many collisions are happening and so on, I think Prometheus is normally good enough. But I'm not against adding such things; it's just that, from my experience, it's very valuable to have really clear measurement points which can be compared.
E: I saw Ryan's question about whether it has to restart: you can enable the profiling on the fly, and Go gives quite a different picture of the inner workings of the process to optimize, like which function uses how much memory and so on. I would really love to see that, because I've enabled it by hand and removed the code again a few times, and it would be easier to get it this way.
F: Maybe there are a few threads here. What I'm hearing is: Roman mentioned some important metrics we can export, and then David talks a little bit about the profiling. I'm even thinking of some other things: what about transition times between phases, and how do we track that as well?
F: Maybe some of them exist, or should exist, on the object, like the one I mentioned: when an object changes phase, we don't know when that happens; there's no last transition time. That's another one I'd want to know.
D: Yeah, the approaches are definitely not mutually exclusive. It's just that for the profiling I really only see some very corner-case debugging uses; I mean, except if that's all you have, then you would try to use profiling for everything. But for the things we tried, for instance the collisions that were mentioned, how many REST calls we do and all this stuff, I would prefer to see those improvements there, because it's easier to interpret from my perspective.
E: The differentiation I've used so far is: I use Prometheus metrics to measure over a long frame of time and to spot abnormalities, like whether, if I run a load test as I did with the SSH endpoint, I end up with more goroutines at the end, or I've had resource leakages, or how performance behaves in an actual test. And I use profiling for looking into a specific case where I want more details on why something happens, or I want to inspect it.
B: How about this as a path forward? The metrics collection, whether it goes to a profiler or Prometheus or whatever, I don't think is mutually exclusive with how it's presented. So, for example, if I'm interested in seeing how many times the queue is popped for a specific key or something like this, I think creating these tracing packages that allow us to gain these insights is the first step, perhaps, and then there can be multiple ways of exporting that. It can be exposed through multiple paths.
B: We're talking about both, okay, yeah; that would be part of it as well, enabling the Go profiling.
B: I guess I would consider this like a debug profile package of some sort that allows you to enable different tracing, whether that's CPU and memory profiling, which I consider part of this, or maybe some other custom tracing that we want inside our controllers that's very specific to what we do; anything else that you might not want enabled all the time because of a possible performance overhead or something like that.
D: Yeah, I mean, having more fine-grained, deeper metrics, even ones you may not always want to collect, that's also easy to do anyway. It's not clear to me what you mean beyond that: you enable the memory and CPU profiler somehow and disable it somehow, and I guess that part is not in question; it's something useful alongside the measurement points.
D: I'm not too sure we need something else to improve this right now, because I really think the most important part is getting the picture regarding percentiles and so on, and less about debugging individual cases.
B: At least from my point of view, what I'm thinking about is again these really tight time frames. If I want to run a CI test, for example, and let's say we come up with some sort of minimal stress test, I don't know, launching 10 VMs.
D: Maybe just one question up front. Prometheus just scrapes, every five seconds or 10 seconds or every second, whatever you configure, but that doesn't mean the metrics it's collecting are missing data. You still get accurate percentiles and accurate counters and everything.
B: Yeah, but it's the aggregate data of the entire runtime, not from when I wanted to start profiling. I mean, I guess I would have to do some sort of calculation. You need to know the start time and the end time of the test, and that has to coincide with when Prometheus exactly scraped it. So when we're talking about tight time frames and actually trying to measure something, that matters.
D: You just have to ensure that you don't do anything afterwards. So if the test just takes one second and you scrape every 10 seconds, you would choose a frame of, I don't know: wait 10 seconds first, run the test workload, wait 10 seconds afterwards, and then you have the metrics within that time frame, which are for that test. That's how I normally do stuff like this, for instance. That could work.
B: On metrics: right now our CI does not deploy a Prometheus stack by default. We can, of course, enable something like that. We have the PR open; it should be ready soon.
B: So this is something that we're considering enabling for all of CI, probably?
D: I'm not sure yet if we want to enable it for all end-to-end tests, but what's planned right now is at least to enable it in the periodic tests, which run every day, and collect the data there, and to have a specific test lane which runs scale tests, like starting 100 VMs and so on, and collect the data for that. Pretty much what I explained before; let's see.
B: Okay, so that would give us a baseline understanding of at least the performance metrics that we export today, which isn't a whole lot, really. I'm not even sure what we would be measuring.
D: Yeah, you get API calls and all this kind of stuff, but only from the existing metrics.
A: Hey David, I have a question from a community perspective and a collaboration perspective. Do you plan on reaching out to NVIDIA and collaborating with them on this kind of work?

A: Sure. Ryan, do you want to collaborate? Yeah? Well, I saw the email that came from NVIDIA this past week, and it seems like they have a deep interest in collecting performance metrics.
F: Yeah, from our perspective, we want to increase visibility as much as possible. One of the things the mailing thread that Fan was looking at was just having a tool to measure things. And I even mentioned one of them earlier: we've been looking at it from the perspective of phases and the way things move through phases in VMIs. Some of the stuff we've looked at is profiling, but just very lightly; we haven't had much time for it.
A: Okay, sounds good. Maybe you guys can take this over to the performance SIG as well and talk about it.
E: Okay, just one point I also made in an email: I'm not sure, I haven't read about anyone else having done this with a subresource yet, but I really like the idea of maybe talking to the community or getting a standard practice around this: that we have a tool that is pluggable, able to tell different controllers and different subresources "hey, trace now", and able to collect that. It could be something that not only KubeVirt could use, because the idea of triggering it via a subresource sounds pretty cool, and something other people could use as well.
B: When I was talking with, or looking at, what Ryan's team is doing, I think one of the things that led me down the path of using a subresource, and trying to gather stuff on a very tight window, was this.
B: When we're viewing things, say our Prometheus metrics, or however we're gauging performance, maybe from a long-term stress test or whatever: if I want to begin trying to understand how to improve the efficiency of controllers and things like that, I like being able to run very tight tests, see what changed in a very controlled manner, and be able to see that in file output and things like that.
B: Maybe that's just the way I visualize things, the way I visualize my workflow for this as a developer. So that's why I've been a little bit, I guess, open to the idea of Prometheus, but at the same time it kind of feels like a burden to me to have to require Prometheus to gather all these things, when it's really from a developer's perspective.
D: I think it's not mutually exclusive. I mean, it's pretty normal.
D: As I said, I've been using Hystrix and other stuff like this in the past, and it's pretty normal to also have measurement points in the code which help you with performance measurement in general. You can't go down to tracing level, but this is normally fine, because from my experience you normally need to find out: oh, there's a difference in, I don't know, going from phase A to phase B; if you run the test, there's a huge delay which is unexpected.
B: Something to kind of lay out my vision, and we'll see more discussion from there.
A: Thanks, David. Okay, moving right along to Ashley and the virtctl command.
G: Hey, yeah. So some bugs have come up where VMs get stuck in a terminating state, waiting for their termination grace period to finalize. For Windows VMs that's, you know, an hour, so I was looking into some kind of escape hatch.
G: So if the user doesn't want to wait out that termination grace period, they could specify a way to get rid of the VM by doing virtctl destroy, or maybe a virtctl stop --force, which would just instantly terminate the VM instead of waiting for the graceful shutdown.
G: Okay, and I was kind of leaning towards adding an extra command, virtctl destroy, just because that lines up more with the libvirt commands. Also, in doing that, we would need to add some kind of status.
A: And this graceful period?
D: Yeah, we could. Could we just edit the grace period to equal one, or zero, or whatever?
D: Yeah, but I mean, in the end, if we added force or destroy, it has to deal with the part where we have the grace period. So no matter whether you have explicitly specified a grace period on the command line, or we have a destroy command, in the end we have to fiddle around with the grace period and the pod deletion to really get rid of it at some point, I guess.
B: So I think the question is destroy versus force. Force seems more consistent with the Kubernetes API today. What does destroy mean again, exactly?
E: What's the situation now if you force, with grace period zero, a pod with a stuck or running VM, and it gets removed from Kubernetes but the container is stuck running? Does it block anything on virt-handler? Do we have any way of getting rid of the container running in the back?
D: So I would therefore not go that way. Yeah, --force is really just that: as Kevin said, you're really deleting the API object, so that it's really deleted no matter whether the container still exists. I'm not sure we have that in mind with this force here, so maybe we should not use force here, to try not to confuse it.
B: Well, with destroy, the point I was about to make was: if we're using destroy because it's consistent with libvirt, and destroy undefines the domain, then I think we're just wanting to stop the virtual machine, not delete the actual virtual machine object as a result of destroy.
D: Yeah, but, I mean, the domain doesn't have time to shut down; it just is immediately stopped. The difference is that there are no data races or anything: libvirt either succeeds in destroying or not, and afterwards you know it's down. Whereas with --force on kubectl, it's a force delete of the pod.
B: Okay, I was definitely wrong then. So the libvirt client destroy is going to delete the domain; it's gone. A virtctl destroy, if we're trying to follow the same pattern as libvirt, wouldn't be doing the same thing; it would just be for stopping the virtual machine. So yeah, a virtctl kill vm might be more accurate, I think.
B: I think the grace period thing is the thing that we could probably all arrive at and agree on, if it doesn't cause any problems with the pod grace period or anything like that; like, somebody can't set it to a higher period than the pod's, I don't know. Like we talked about, yeah.
A: Yeah, let's come up with some kind of poll and take it to the mailing list, so we're not just going around in circles here.
B: Yeah, a poll. Or, Ashley, did you want to just start a mailing list thread, and maybe we can sort it out there? Okay, yeah.
A: Okay, Ryan, you have the next item: the VMI create diagram.
F: Yeah. So last Thursday, in the SIG-scale meeting, we spent the time building a diagram that shows what happens when you create a VMI: all the different steps that occur, with references to code functions and everything. The goal for this is so that we can kind of level set.
F: We can all get to a point where, when we're talking about different areas of the code and where we think there are bottlenecks, we can at least refer to a diagram, so we'll have an easier time communicating about the different areas we think are bottlenecks. And it's also good just to reference in general, because the process from end to end can be hard to hold in your head and visualize all at once. So it has a bunch of different uses.
A: Nice, this is really awesome. This really helps when David does his code walkthrough with a community member.
B: ...a presentation with that, because it just gets out of date almost immediately.
F: Yeah, I know. I could see it drifting over time, like within a week, especially because we have function calls and code references. I tried to make it at least somewhat general with the notes, so that we have an idea of what's happening during the transitions and such. But yeah, at least for now it'll be precise, and we can always consider updating it, or, you know, see where we end up with it longer term. At least for now it's a good conversation point.
E: What did you build this with? Can we move that into a collaborative document, like, I don't know, Lucidchart or something?
F: I used draw.io. The link is in the SIG-scale document; it's shared with kubevirt-dev. I can also put it in here if you want to see it; of course, we have it in the notes. Yeah.
A: Anything else to discuss? No? Okay, we are at 7:45, so, David, do we want to do a bug scrub this week?
A: We did almost 10 last week; I'm fine with skipping if everyone else is.
A: Yeah, I think we're all set here, and we can return 10 minutes to everybody.
A: Okay, all right, we'll conclude this meeting then, and we'll see you next week, or on the mailing list or the Slack. Thank you, have a good week.