From YouTube: KubeVirt community meeting 2020-09-30
Description
This is the recording of the KubeVirt community meeting held on 2020-09-30.
Meeting minutes and details: https://docs.google.com/document/d/1kyhpWlEPzZtQJSjJlAqhPcn3t0Mt_o0amhpuNPGs1Ls/edit#
A
Before starting, I just wanted to... well, there is a new agenda item. Let's say that we will try to run weekly introductions: if anyone is new to the call and wants to introduce themselves. I don't quickly see anyone new. Anyone new? No? Okay, anyone wants to say hi?

A
Also, another reminder: to add agenda items to the document, just join kubevirt-dev and you will automatically have edit access. With that said, let's get started. We have two topics on the agenda.
B
So hello, I'm Arthur. Someone brought up this subject about out-of-the-box monitoring and alerting for virtual machines on KubeVirt.
B
So I tried to work on that, and I've noticed that we are missing some metrics, some really important metrics, if someone properly wants to monitor a virtual machine: things like swap usage, disk usage and some others. I asked in the kubevirt-dev channel whether someone knows how to export these metrics from libvirt, and it looks like it's not possible.
B
So what I wanted to ask is: should we even implement this alerting if we do not have enough metrics? Just creating alerts for network monitoring and memory usage doesn't look like enough. Should we do that, or should we just leave VMI monitoring to the users?
B
Yes, we have some really good metrics for network, but for disk usage we do not have, say, the total amount of disk space or the total amount used. We just have things like the total amount of I/O and some other operations, but not really usage.
D
I guess you're getting here into an area where it's difficult to say what is good and bad, and whether we should throw an error or not. Because if you look at the disk, I mean, there are partitions on it, maybe multiple partitions, and it's formatted in some way, so from the outside you do not even necessarily see how full it is, or what it means for the VM if it's full or not, right? So, I mean, I'm not sure if you understand what I mean, and the same probably applies...
D
...to CPU, to a degree, because is it good or bad? If a VM uses all the CPUs which you give to it, do you expect it to be used up to 80 or 90 percent? Then this is exactly what you want, because you want it to be used to do the task. Or is this already a concern? So I guess there may be some patterns which make sense for a broad audience; I personally just don't know them. Did we consider something like...
C
...the website, or maybe in the examples directory: add some example with a VMI and monitoring rules that we can deploy, just to let users know that they can do it if they want to. Maybe documentation that explains how to do it. But I agree with Roman here: we can't decide for the users what is good or bad for their specific workloads.
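As a concrete shape for such an example, a deployable PrometheusRule could look roughly like this. This is only a sketch: the alert name, the threshold, and the assumption that the referenced kubevirt_vmi_* metric is available in a given deployment would all need to be verified.

```yaml
# Hypothetical example rule a user could deploy themselves; KubeVirt does
# not ship this. Metric name and threshold are illustrative assumptions.
apiVersion: monitoring.coreos.com/v1
kind: PrometheusRule
metadata:
  name: vmi-example-alerts
spec:
  groups:
  - name: vmi.rules
    rules:
    - alert: VMILowMemoryAvailable
      # Fires when a VMI reports less than ~100 MiB of available memory.
      expr: kubevirt_vmi_memory_available_bytes < 100 * 1024 * 1024
      for: 10m
      labels:
        severity: warning
      annotations:
        summary: "VMI {{ $labels.name }} is low on available memory"
```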
B
Okay, I was just thinking, because I think OpenShift already creates alerts for pods, that VMIs could follow the same pattern and we could create alerts for VMIs as well. I just don't know if we should create them if the metrics we have are not enough. So, while we are talking, I think we decided that we should not, right?
D
So OpenShift does it, for instance, on CPU and memory, and I think at least the console in which we can also manage virtual machines has, at least in the overview, some hints that the VM uses a lot of CPU or memory. It may make sense; I'm just not sure. Maybe we should just go to kubevirt-dev and ask people what they expect, and make some suggestions which we think make sense.
B
Yeah, okay. I think I'll ask on the kubevirt mailing list what the others think about it. It may also make sense to...
D
Regarding OpenStack: Vladik, who is a maintainer on KubeVirt too, is a pretty good source when it comes to tracking down code in OpenStack and checking what OpenStack does. So maybe you can try pinging Vladik Romanovsky. Okay.
A
I also see that someone added a note here in the meeting notes on this topic, about the Prometheus node exporter. I don't think we mentioned that.
B
Oh
yeah,
I
think
it
was
not
me,
though,
who
added
it,
but
it
is
possible
for
users
to
deploy
prometheus
another
exporter
on
their
vms
and
they
do
the
monitoring
themselves.
So
we
they
can
monitor.
We
don't
need
to
do
it,
but
yeah.
D
Yeah, I think the advantage with this is that you can definitely get all the metrics regarding disks and so on, right? The disadvantage is that users have to configure it to their needs. It may also make sense for us to provide some examples on how to create nicely integrated alerts if they use the node exporter in the context of KubeVirt. It's also a possibility; I don't know.
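A sketch of what such a node-exporter-based example could look like, assuming node_exporter runs inside the guest and Prometheus scrapes it. The node_filesystem_* metrics are standard node_exporter metrics; the rule itself is a hypothetical example, not something KubeVirt provides.

```yaml
groups:
- name: guest-filesystem.rules
  rules:
  - alert: GuestFilesystemAlmostFull
    # These metrics come from node_exporter inside the guest; KubeVirt
    # cannot observe guest filesystem usage from the outside.
    expr: |
      node_filesystem_avail_bytes{fstype!~"tmpfs|overlay"}
        / node_filesystem_size_bytes{fstype!~"tmpfs|overlay"} < 0.10
    for: 15m
    labels:
      severity: warning
    annotations:
      summary: "Filesystem {{ $labels.mountpoint }} has less than 10% free space"
```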
A
Okay, thanks. So the next topic on the agenda is from Daniel. Sorry, yes, Daniel: reducing image size.
C
Yeah, it is actually not exactly my topic. It is something that I was asked to raise here by Shaul. I'm not sure if he's in this meeting right now, but he asked me to raise it, and I'm...
E
Great. So basically, in a previous place where I was working, we looked for ways to reduce the cost when going to the cloud with a certain product, and one of the things that I found was the ability to use DockerSlim, which reduced the size of the image.
E
So
the
docker
slim
is
it's
an
industry
level
tool
that
allows
you
to
to
use
it
in
order
to
reduce
the
size
of
the
image
by
reducing
certain
dependencies
that
are
not
required.
E
The
next
version
report
performance
testing
production.
Of
course,
all
of
this,
when
going
to
the
cloud,
will
cost
a
lot
of
money
for
the
storage
and
even
reducing
10
device.
I'm
pretty
sure
that
maybe
we
can
reduce
more,
can
reduce
the
cost
of
the
product,
and
this
is
the
the
main
idea.
D
So
what
I
can
say
here
is
that
for
most
binaries
we
would
not
need
to
use
fedora
base
image
there.
I
would
just
go
for
the
distress
images
which
come
with
bazel
anyway,
like
for
virtual
api
with
controller
and
so
on,
and
then
we
have
returned
under
launcher
where
especially
word
launcher
needs
delivered,
and
this
means
that
we
need
to
need
to
install
delivered
via
rpm
packages
with
all
its
one
and
with
all
its
dependencies,
and
maybe
docker
slim
can
save
the
same
thing.
D
What I always see as a huge disadvantage about all this is that RPMs just change all the time, and for me this just adds another layer of unpredictability, where it does something and the result is not predictable to me. I would instead prefer if we would really install only the RPMs which are needed, and not even have inside all the other RPMs which come with it out of the box. So I'm...
F
Looking
at
docker
slim,
it
seems
like
one
of
the
well.
I
could
be
wrong.
I
literally
just
googled
it
while,
while
we
were
talking
about
it,
one
of
the
advantages
it
looks
like
maybe
squashing
layers
together,
which,
depending
on
how
the
image
was
built,
could
really
offer
some
some
reduction
of
the
size,
but
we're
we're
pretty
good
about
that
already,
where
we
aren't
introducing
multiple
layers
and
just
making
little
changes
to
large
files
or
anything
like
that.
F
So
I'm
not
sure
if
it
would
benefit
us,
given
how
knowledgeable
we
already
are
of
how
this
layering
works
and
how
we've
kind
of
done
our
best
to
minimize
anything
that
might
cause
a
bloat
today,.
D
By
the
way,
there
is
an
interesting
there
can
be
an
interesting
side
effect
with
this.
That,
because,
basically,
we're
handling
launcher
and
the
cluster
components
would
require
different
components,
which
would
mean
that
we
may
end
up
with
completely
different
layers
and
at
the
end,
we
may
even
pull
more
to
the
nodes
than
otherwise.
C
Yeah, and another problem with DockerSlim is that... well, I'm not an expert on this, but from what I read, it requires some interaction during build time. So you build your container, then you run some interactions with your component, and then it analyzes which packages are needed and which ones are not. And then we're risking it deleting some packages that we do need, unless we actually statically compile everything, and then we're going back again into this.
D
Honestly,
I
would
more
go
into
the
direction:
try
to
get
predictable
images
for
the
containers
where
we
need
delivered
and
the
rest
just
go
to
to.
This
relate
to
the
display
space
image
where
we
just
have
the
golan
static,
binary,
which
you
could
do
any
moment.
D
As I just said, the benefits are not entirely clear, and everything except virt-launcher and virt-handler already uses the distroless base images, which basically just contain the static Go binary.
D
We can also completely statically compile a Go binary, including the glibc dependency, and then we can have a complete scratch image. But there is not much difference, because then the binary is bigger, right? So having glibc in the distroless image, or just compiling all dependencies into the Go binary, is not so much of a difference.
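The two variants being compared can be sketched as Dockerfiles. The binary name and build commands are hypothetical placeholders (the real KubeVirt images are built with Bazel); the point is only the trade-off between a glibc-carrying distroless base and a fully static binary on scratch.

```dockerfile
# Variant 1: dynamically linked Go binary on a distroless base (glibc included).
# Built with:  go build -o example-binary .
FROM gcr.io/distroless/base
COPY example-binary /example-binary
ENTRYPOINT ["/example-binary"]

# Variant 2: fully static binary on scratch; nothing else in the image.
# Built with:  CGO_ENABLED=0 go build -o example-binary .
# FROM scratch
# COPY example-binary /example-binary
# ENTRYPOINT ["/example-binary"]
```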
G
And
yeah
there's
only
one
one
point
that
I'm
a
little
bit
afraid
of
is
that
if,
if
we'll
do
it
from
scratch,
or
something
like
that
and
not
from
a
distro,
then
we
have
a
problem
to
we
debugging
and
troubleshooting,
because
currently
we
use
to
connect
we
are
connecting
and
to
the
pods
and
trying
to
walk
there
from
time
to
time
to
debug
problems,
and
if
there
it
will
be
no,
no,
nothing
there
except
a
binary.
Then
this
is
really
challenging.
D
It's definitely more of an issue for virt-handler and virt-launcher. There, my preferred way would be to ensure that tar is in the container, because when tar is there, you can just use oc or kubectl cp to copy in a static bash or something, and then you can just run that bash for the debugging. I'm anyway not sure, if we have libvirt in a container, whether we can then, for instance, remove bash at all. So I would leave that here as a note; I'm not so much interested, for instance, in also not having bash in there.
D
I mean, scratch images are, from my perspective, the best if you don't need to enter them for debugging. And what DockerSlim can give you is for the other cases, where you, for instance, need to use a package manager first, like dnf or yum or whatever you want: most of the time you need some base operating system in the container already to install packages and prepare it some other way, and then you always have the issue of getting rid of it. I mean, there's...
A
This
sure,
maybe
I
didn't
make
it-
I
think
that
my
question
was
more
for
for
sure.
In
terms
of
you
know
the
original
proposal,
in
terms
of,
if
so,
if
you
can
help
clarify
what
would
be
the
advantage
of
using
docker
slim
over
that
approach
of.
E
I cannot state it. I think something like this will require some proof of concept in order to understand exactly what we are getting and if it will work. Other than that, it's a guess. Same as Roman says: when we are running with DockerSlim, we don't really know what is getting removed, unlike going out from scratch and adding only the things that we need.
E
I
think
we'll
ask
you
was
to
by
reducing
the
size
of
the
images,
so
both
options
for
my
site.
Sorry,
I
really
repeat,
I
said
both
options,
for
my
side
are
talking
about
the
same
idea,
eventually,
which
is
reducing
the
cost
by
reducing
the
size
of
the
images
and
going
to
the
cloud.
This
is
something
that
impacts
the
customers,
and
this
is
what
we
want
to
to
see
if
we
can
achieve
other
than
that,
I
think
only
poc
can
provide
the
answers.
H
Hi
everybody,
I
don't
want
to
go
into
the
gory
details
about
documentation
retooling.
H
I
just
want
to
bring
awareness
that
it
is
happening
and,
if
you'd
like
to
help
out
there's
a
github
issue
that
I
linked
to
in
the
meeting
notes
and-
and
so
far
feedback
has
been
good
from
covert
dev
and
thumbs
up
and
the
issue.
A
Okay,
I
I
will
actually
add
I'll,
take
the
opportunity
to
add
something
more
to
that
topic,
because-
and
chris
correct
me,
if
I'm
wrong,
but
I
believe
the
the
issue
so
far
has
been
focused
on
on
the
tools
for
the
documentation
and
the
rendering
et
cetera,
but
there
is
like
a
a
bigger
or
potentially
bigger
issue
to
to
discuss,
which
is
the
multiple
sources
of
documentation.
A
So
this
the
issue
is
focused
on
the
user
guide,
but
one
of
the
challenges
to
resolve
is
that
there
is
documentation
about
different
components
in
different
repos.
There
is
a
mix
of
end
user,
focused
documentation
with
contributor
focus,
documentation
in
multiple
places
and
that
that's
something
that
would
need
to
be
addressed.
This
is
how,
to
you
know,
put
some
order
into
this
some
organization
in
the
the
overall.
You
know
the
whole
documentation,
not
just
the
user
guide.
H
Yes,
that
is
going
to
be
a
problem
and,
at
some
point
we're
gonna
have
to
address
it.
A
Okay,
I
guess
we
will
look
at
it
in
more
detail
and
come
up
with
a
proposal.
A
A
That,
okay,
next
topic
april.
G
An
ip
address
thing
that
we
expo
were
exposing
through
the
vmi
status,
the
ip
addresses
of
the
interfaces
with
or
without
the
the
guest
agent,
and
I
think
in
the
case
of
of,
if
we
don't
have
a
guest
agent,
only
the
primary
the
pod
network,
ip
address
is
reported
and
it
is
reported
without
the
prefix
and
the
masks.
G
D
Well,
I
think
I
personally
agree
with
you
that
the
mask
is
an
important
information.
If
you
have
multiple
piece
for
for
all
the
cases
like
where
you're
thinking
about
the
usual
pod
cases,
I
mean
you
mentioned
it
already
with
the
pod
network
right
you,
you
would
use
their
p
and
their
p's
field
and
expect
that
the
ipv4
and
ipv6
address
to
be
there.
The
question
is
now:
if
we
want
to
diverge
from
that
on
the
vmi
with
also
adding
the
mask
there
or
not
so
far,
we
actually
did
not.
D
It's
more
a
question
now,
if
we
should
separate
it
out
in
a
new
field
or
report,
everything
with
the
mask
the
mask
has
is
not
important
for
many
cases
in
keyboard,
like
for
the
port
network
ipv46
for
multis,
where
you're
not
passing
through
the
bridge,
so
where
you're
just
using
other
c9,
plugins
and
yeah.
If
we
use
masquerade,
we
would
have
to
report
it
from
the
mask
from
somewhere
else.
I
don't
know,
but
it's
more
a
question
for
me
on
how
to
best
express
this.
I
completely
agree
with
you
that
the
mask
is
important.
G
So
so
there
was
like.
I
think
that
the
proposal
was
one
I
mean
one
proposal
was
to
have
a
an
additional
list
of
addresses
that
is
in
another
place,
like
maybe
things
that
are
coming
explicitly
from
guest
agent
and
reporter
everything
like,
including
the
mask.
But
the
question
is
I
don't
I
personally
like
it
less
that
I
we
need
to
report
something
twice.
It's
like
duplicating
the
data
and
then
it
can.
It
can
confuse
even
more
the
api,
but
but
can
you
can
you
just
clarify
in
what
which
cases
it's
less
important?
G
D
For
everything
where
we
don't
use
our
own
scene
or
the
there
are
the
cni
plugins
with
the
which
are
hosted
inside
keyword
right
with
the
cni
plugin
operator
with
the
mouse
operator,
and
so
there
exists
some
cni
plugins
which
pass
through
layer,
2,
networking,
bridges
or
whatever,
and
we
support
it
like
that.
It
just
supports
the
bridge
and
doesn't
sign
in
ep.
In
this
case,
the
net
mask
is
important
for
the
others.
G
But I think that, if I understood correctly, the Kubernetes API of the pod only exposes the pod network, that's it, and the reason for that is just to have access to the pod itself. But the Multus ones, I mean secondary networks, are exposed in some hacky way using annotations, and even in the status (I don't know where exactly) they also use annotations.
G
It can be, it's great. I guess it's also interesting. This is maybe a philosophical question: when you look at the VMI object itself, the question is, what do you see there? Do you see what is in the VM, what it reflects? What...
D
What
I
can
tell
you
what
it
tries
to
do
if
the
guest
agent
is
a
good
source
to
add
values.
There
is
another
question,
but
it's
not
supposed
to
be
a
guest
agent
information
place.
It's
supposed
to
be
a
place
where
you
see
the
information
so
that
you
can
create
controllers
or
other
objects
based
on
like
just
as
an
example.
D
A
virtual
machine
instance
service
would
be
such
an
example
where
which
could
then
like
a
service,
you
need
is
the
same
thing
for
you,
so
it's
mostly
meant
for
kind
of
automated
processes,
or
I
mean
can
also
be
used
for
humans,
but
it's
meant
to
be
kind
of
a
source
which
you
can
which
gives
you
value
for
establishing
common
communication
paths
like
the
api,
I,
like
the
fpm
parts,
which
is
used
for
services,
the
cubelet
and
the
cubesat
and
all
the
others.
D
I
I
rendered
the
need
for
net
masks
lower
is
that
the
api
shows
ip
addresses
that
are
like
for
a
specific
network.
So
say
I
request
secondary
networks
called
blue.
Then
the
api
tells
me
on
the
this
network.
Blue.
The
ipad
use
of
this
vm
is
that
so,
if
you
assume
that
these
vms
are
on
the
network,
blue
are
on
the
same
subnet,
because
you're
on
the
same
network.
If
you
make
this
assumption,
then
you
don't
need
the
subnet,
because
you
know
that
it's
on
this
specific
interface
on
this
specific
network.
D
Yeah,
I
mean
it's
perfectly
fine
to
have
switches
which
support
the
subnets
right
and
you
can
configure
them
on
their
ps
and
so
on.
So
it
could
happen
still,
so
I
think
it
makes
sense
to
expose
it
somehow.
I'm
not
sure
if
here
is
the
right
place,
that's
my
thing.
At
least
it
wasn't
intended
to
be
so
much
like
this
as
well.
It
was
more
intended
to
be
like
the
kubernetes
part.
G
Okay,
so
a
side
question
is
the:
is
there
if,
if
a
cni
sets
an
ip
address
on
the
on
secondary
api
or
secondary
interfaces,
are
they
is
there
a
mechanism
to
pass
them
into
the
vm
as
well?
Yes,.
I
Yeah, but it depends. If you have a binding mechanism like masquerade that hides the internal IP address, we just expose the one that was set on the pod, the pod IP. But for other bindings, like the bridge one or SR-IOV, we look inside the guest.
D
Yeah
because
we
simply
don't
know
it
from
the
outside
right,
so
we
try
to
enrich
it.
There
I
mean
it's,
it
can
probably
discuss.
If
that's
a
good
thing,
maybe
we
should
just
not
show
these
and
have
them
in
an
extra
section
which
clearly
shows
that
this
is
coming
from
the
guest
agent
and
maybe
less
reliable.
I
don't
know.
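For orientation, the interface data under discussion surfaces in the VMI status roughly as below. The values are made up, and while the field names follow the KubeVirt VMI API, the exact shape should be checked against the API version in use; today the guest-agent-reported addresses land in the same list as the pod-network address.

```yaml
status:
  interfaces:
  - name: default
    ipAddress: 10.244.1.15        # reported without a prefix/mask today
    ipAddresses:
    - 10.244.1.15
    mac: "52:54:00:12:34:56"
    interfaceName: eth0           # filled in from the guest agent when present
```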
G
Yeah, that was what I'm trying to understand: whether that is the better solution. I mean, you could say that we have a guest-agent section, and that one will report the interfaces that the guest agent sees; and if there is none, then we will report something else in a different place, maybe even showing you the correlation between them. Because I think the guest agent information is usually also very valuable for the operator, not only for automation or for services.
D
Yeah, yeah, I agree with you; I was actually feeling that too. I mean, it may still make sense to enrich the IPs field with the help of something inside the guest, why not, if we think it's good enough and it doesn't have too big security implications or something. But apart from that, it may make sense to also have a clearly separate section which just shows what's inside the guest.
G
The only thing, and I think we should continue the discussion, maybe afterwards, is just to understand whether it is really valuable to have the list of IP addresses in the VMI without the mask. If the answer is yes, for the automation stuff, services, stuff like that, then fine. This is the only thing that I'm just not sure about yet.
G
It feels like we are at a junction: what is the common thing that helps everyone? I would argue in general that I would never even have thought about reporting only the IP without the mask in the first place. But since we are in the Kubernetes ecosystem, and you are saying that someone may just not expect it there, because this is not how Kubernetes works, then maybe that's the correct thing. I'm just, you know...
A
Okay, I have a logistics question: do you guys have a link to that PR you're talking about? If you could add it to...
D
We're never fast enough, of course. Yeah, I just wanted to summarize what I also wrote to the kubevirt-dev list: the PR got merged last Thursday, I think, and since then at least half of the tests that you run are executed in parallel. As hoped, no really measurable effect happened on the infrastructure.
D
There
was
due
to
the
change
test
order,
one
test,
which
was
failing
pretty
often
that
we
feel
the
real
bugging
keyboard,
which
is
fixed
now
too.
Since
friday,
it
takes
roughly
one
hour
per
lane,
but
if
you
see
anything
strange
or
sound,
please
definitely
report
it.
D
There
can
always
be
more,
which
is
related
to
this
change,
but
I
think
so
far
it
looked
good,
and,
apart
from
that,
after
it's
running
for
some
time,
I'm
happy
to
hear
if,
if
taking
away
one
hour
changes
anything
notably
regarding
to
the
developer
experience,
or
if
I
mean
there
are
much
more
tests
which
you
can
parallelize,
but
I'm
also
trying
to
get
a
feeling
what
would
be
a
good
baseline
so
that
people
feel
comfortable
with
that
or
if
they
even
feel
any
improvement.
D
At
all,
I
mean
we're
still
at
three
hours
right,
so
it's
from
four
to
three
hours,
at
least
for
most
lanes.
Maybe
that
doesn't
change
anything
from
the
feeding
into
euros.
It's
still
as
bad
from
the
research
yeah.
G
I'm more interested, actually, in whether people got to debug stuff during this week with the tests running in parallel. I would like to hear some feedback on that, because that's where I feel there will be some challenges. So if someone was debugging in the last week, or will debug in the next weeks, I would be...
A
Oh sorry, I double-clicked and it wasn't muted again. Okay, anything else about that topic, or any other topic? We don't have any other on the list.