From YouTube: KubeVirt Community Meeting 2021-04-07
Description
Meeting notes - https://groups.google.com/g/kubevirt-dev/c/uZ05WRzxdBw
A: Okay, hello everybody. This is the weekly community meeting for the KubeVirt project. I'm your host, Chris Caligari, and let us begin. I have posted the meeting notes to the chat, so if you want to make some edits to the notes yourself, you can; otherwise you can just follow along with my screen share.

A: The first item that we always do is introductions, and it looks like we have a non-Red Hatter joining us this week who would like to talk about how they use KubeVirt.
B: The new version is completely based on KubeVirt, and we have designed for 150,000 worker nodes and one million concurrent users, you understand. Okay, we're gonna, let's say, work a lot with KubeVirt from now on. That's why we are part of the community already, okay.

A: That's amazing; we're really glad to have you, if you would like.

A: Would you mind posting that to the chat so I can get the exact spelling? Yeah, sure.
A: And if you would like to email me a profile of your usage, of your use case, that would be awesome. I would be very happy to post that up to our blog, or maybe you can create a blog entry on how you use KubeVirt; we can talk and work together on that posting, if you would like. It's in the chat window. Thank you so much.
B: I want to show what we offer, for you to understand. We offer not only the desktop in the cloud, but also a terminal that is a thin client in a notebook format, which has a 3G/4G data modem and also a data plan that works in 180 countries. We're more than happy to ship one to each of the guys here to test it, because we are in the alpha phase with this new version. Okay, the beta is gonna be released in the next 30 to 60 days.
D: So this is... yeah, it's interesting to hear you all scaling this far. I've got a lot of questions. I'd love to get some feedback on any usability gaps and things like that that you've encountered with KubeVirt, and the kinds of things that you would like to see improved, and all of that.

B: You cannot imagine how many things we're gonna use.

E: When did you start, and how long did it take you to get this many customers for VMs?
B: We have been in the market since 2011, but one of our clients asked to OEM our services under their brand; that's why you're not aware of Bdesk. Last year we told the client that we're gonna release a new version — we have been working on it for three years already — but now we are out with our own brand, and we don't screw up the business we have with them. We changed the business model from a payment per user to prepaid per minute, and that's what you're gonna see on the website. Okay.
D: Okay, all right. And that gives you... how do you feel about the performance with GCP and nesting? How's that?

D: So GCP is the only data center? There are no other cloud providers? I'm just curious whether other...

B: ...a version of our solution is gonna be installed with another supplier, you know, okay, because next the remote desktop needs to run on Apple hardware.
G: I thought I heard you say you have GPUs in the VMs. Is that right?

B: Correct.

G: And how are they being used when you're using nesting? Are you passing them through to the VMs?

G: Okay, okay. Is it MIG? Is it using the vGPU feature? What kind of GPUs? I'm curious.
E: I was about to ask, actually, about expenses: since you're running VMs and then also you're referencing the GPUs, I think you have to pay a hefty premium for a GPU that you go for here. So do you divide it up into vGPUs using this plugin?

D: Why did you decide to use KubeVirt rather than directly giving access to just a GPU on GCP?
D: I see, so it was live migration, the feature that drew you to KubeVirt. All right, and you were able to do live migration and give kind of stronger guarantees, rather than giving, like, a GCP instance directly to your user, which you may or may not have direct control over. Is that accurate?

A: So how is this fitting in with OpenShift Dedicated and KubeVirt?
C: Okay, because I was wondering how your VMs connect to each other; you probably have some actual networks. Is that managed by OpenShift, or do you have some magic there?

C: Well, go on, okay, okay. And so, nice — let me also introduce myself, because I'm also new; this is my first time here, and I'm also not from Red Hat. We have an interesting crossover here, because I'm from Google.
C: And so I am working at Google on Kubernetes, and I'm working on Anthos — I'm not sure if you know it.

C: Okay, actually, we are right there just to fill in this gap, to support KubeVirt. Anthos is a kind of solution that takes managed Kubernetes across different platforms, including GCP, or on bare metal, or basically going to AWS and Azure — basically that's Anthos. So we are... I'm working in a sort of experimental team too, and we are also trying to get KubeVirt working, because we have customers asking for a VM solution.

C: So that's why I'm like: okay, this is an interesting crossover here, because you're doing this on GCP, but we are trying to wrap it into Anthos, something to GKE, yeah. But we just started, so we are not working at that scale; actually, we still have a long way to go. We just started to try it.

C: We have many questions and issues with it. So that's why I'm here, too.
B: Let me tell you about another issue that is very important for you: how we are handling it. In every VPC inside Google you can handle up to 15,000 VMs, tops. Okay, we have one VPC with ten subnets, and in every subnet we have up to 1,250 VMs. This means 12,500 VMs, tops, on a single VPC, and after that we create a new VPC with everything.
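The packing arithmetic described above works out as a quick back-of-the-envelope check. All figures below are the speaker's quoted numbers (the 15,000-VM ceiling, the ten subnets, the 1,250 VMs per subnet), not verified GCP limits:

```python
# Sketch of the VPC packing described above. All figures are the
# speaker's quoted numbers, not verified GCP documentation.
VM_CEILING_PER_VPC = 15_000  # quoted per-VPC limit
VMS_PER_SUBNET = 1_250       # VMs packed into each subnet
SUBNETS_PER_VPC = 10         # "ten subnets" per VPC

vms_per_vpc = VMS_PER_SUBNET * SUBNETS_PER_VPC  # 12,500, under the ceiling
assert vms_per_vpc < VM_CEILING_PER_VPC

def vpcs_needed(total_vms: int) -> int:
    """Number of VPCs required for a fleet, filling each to 12,500 VMs."""
    return -(-total_vms // vms_per_vpc)  # ceiling division

print(vms_per_vpc)           # 12500
print(vpcs_needed(150_000))  # 12 VPCs for the quoted 150,000-node design target
```

So the 150,000-worker-node design target mentioned in the introduction would need on the order of a dozen VPCs under this scheme.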
A: Many of our contributors are in European time zones, so we have to... there's only a couple of us on the West Coast, and we just have to balance it. Yeah, almost.
D: What if we shift it by one hour? So I'm in the Eastern time zone — you know, it's 10 now, 11 in an hour. Does that still overlap with the European time zones within their business hours, or is that... yeah?
A: It starts to get tough. You know, I don't want... I'm still fairly new with running things with KubeVirt. You know, I didn't want to just come in and say we're moving the time of the meeting, and so...
C: All right, yeah, Andre — so that's my email, so feel free to email me. I'm definitely very interested in your use cases there, so just feel free to, you know, mail me or ping me.

A: And I'm also "mazzy star" on the Slack channels, it should be — I don't remember how many usernames I have across...
A: Everyone, okay — that was some pretty awesome introductions, thank you, guys and ladies. I'm going to hijack the first couple agenda items here with just some quick updates: 4/12 is the deadline for anyone wanting to volunteer for attending the booth at Red Hat Summit or KubeCon EU.

A: If you feel like joining us and hanging out in the booth — mind you that it's a virtual booth, so it will be like a chat platform, and it's super easy and it's fun — let me know and I'll add you to the roster and get you engaged with working on the platform before the live event.
A: A gentle reminder to please use the community meeting, the mailing list, or GitHub to talk about community topics. This is coming from our sponsor at Red Hat. It affects us negatively when we're trying to go through the effort of graduating to incubating status with the CNCF.
D: So the history of this is that we got together and came up with a list of items that we think make up version one. We have a target, we're working towards that target, and we've made tremendous progress towards that target. I'm not gonna tell you when we're gonna hit that target, because that's out of my control, but it's something that we're getting pretty close to. I would certainly hope this year, but I'm just not going to say anything until we actually...
D: That means this year or next year, yeah. I know you want that answer; I want that answer as well. There are some unknowns. The biggest unknown for me is what we do about the privileges of the VMI pod — so things like not running as root and reducing the capabilities there, and arriving at something that we can maintain and that we feel is a good foundation for us moving forward.
D: That's great to hear, thank you. And I want to point out that version one is just a name as well. We think that's where we've reached all our goals, but as far as compatibility and things like that, our API is version one right now, so we don't anticipate any backwards-incompatible changes. We already have guarantees with our updates, where we're testing our update paths between previous releases, and we don't anticipate any sort of changes in any of this compatibility moving forward.
D: So the reason I'm saying that is: if somebody adopts what exists today, there's no anticipation that that's going to change once we hit version one. So it's safe to adopt KubeVirt. We're talking about additional features for version one, and changes to, you know, our pod security policy and things like that, but the API is what it is.
C: Okay, I do have a question here. So, as I said, we just started to support KubeVirt, and we are trying to make it more user-friendly, especially considering some people who are not so familiar with Kubernetes semantics and so on. So I'm thinking... we are planning to add some CLI commands to facilitate that. I understand that we already have virtctl, but most of the time it's doing operations like virtctl start, migrate, and so on.

C: So I'm thinking to add some simple commands, like "virtctl create vm", so that it's just easy for a user to get a YAML which works, and then, if they want to customize, they can further customize it; or we just create a very basic Ubuntu VM for them, etc. And so it's slightly different from what the current CLI does, because those are all operations on a running VM, but here I'm talking about create, delete, update — it's just easier for a user to adopt it. So, for...
C: It depends on how you say it, right? If I pack it into virtctl, then it's virtctl's API, right? It's like a CLI; it's not really an API. So, for example, if I do "virtctl create vm" and give some parameters, it's going to create the VM for you — basically the data volume and the VM YAML — and apply it for you. And it can be not Google-specific. Basically, my point is: would that be something that could be accepted upstream?
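The kind of helper being discussed — a command that stamps out a minimal, working VM manifest which the user can then customize — could be sketched like this. This is purely illustrative: the function name, defaults, and container-disk image are assumptions, not the actual virtctl implementation (no such subcommand existed at the time of this meeting), though the manifest fields follow the KubeVirt VirtualMachine API:

```python
# Illustrative "template builder" in the spirit of the proposed
# `virtctl create vm` command. The helper itself is hypothetical.
def make_vm_manifest(name: str, memory: str = "1Gi",
                     image: str = "quay.io/containerdisks/fedora:latest"):
    """Return a minimal KubeVirt VirtualMachine manifest as a dict."""
    return {
        "apiVersion": "kubevirt.io/v1",
        "kind": "VirtualMachine",
        "metadata": {"name": name},
        "spec": {
            "running": False,  # created stopped; start with `virtctl start`
            "template": {"spec": {
                "domain": {
                    "devices": {"disks": [
                        {"name": "containerdisk", "disk": {"bus": "virtio"}},
                    ]},
                    "resources": {"requests": {"memory": memory}},
                },
                "volumes": [
                    {"name": "containerdisk",
                     "containerDisk": {"image": image}},
                ],
            }},
        },
    }

vm = make_vm_manifest("demo-vm")
print(vm["kind"], vm["metadata"]["name"])
```

A user could serialize this to YAML, tweak it further, and `kubectl apply` it — which is the "skeleton that works out of the box, customize later" flow described above.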
D: Absolutely, we totally would accept that, and I've been looking at some other stuff, like just creating a skeleton — a VM skeleton — with our CLI; things like that are useful as well.
C: You can definitely do it by yourself, but it's just that virtctl would make it easier for you to do it, right? Because at the end, right, if you think about customers who use VMs, many of them are coming from legacy — because otherwise they could move to containers.

C: So I'm just wondering if that's something that upstream has the possibility to accept, and if that's the case — if you feel like there's value there — we can feel free to contribute code there, so that we don't need to maintain local forks.
D: Yeah, we would accept something along those lines. What you've described isn't something that we haven't talked about: we've had one-off discussions with people — community members and others — about the usefulness of exactly what you've described.

D: Essentially, just nobody's gone off and done it yet. So it's something that has clear value; anything that lowers the friction for somebody, especially within their first five minutes of using KubeVirt — anything that lowers the friction — we're interested in.
C: Yeah, yeah, exactly — it's really just lowering the friction, and it's really not specific to GCP. I mean, you could definitely get a skeleton, and if you want to customize it, you go ahead, right? You post-process it. But that's basically... okay, great, great. If you are fine with it, we can definitely share something and talk later about how we do that. I mean, it's...

C: It's very easy to think about: it's just like a template builder, so that you can easily create a VM, create a disk, a command to add it, and so on.
D: Sure, that makes sense. And so, when we have these kinds of design discussions, usually what we're going to do — so there's some expectation here — is look at what the precedent is in the Kubernetes ecosystem already: what kinds of things are similar to this for deployments, workloads, and other stuff, and try to figure out how that maps to what we're doing with virtual machines.

D: We want to feel Kubernetes-like — and I'm sure you already get that — but that's the kind of thing that helps the discussion: coming up with precedents of "here's what we're trying to achieve; it looks a lot like this thing that's already been done", and then it just kind of clicks for everyone. So finding those examples really helps the discussion as well. Just to point that out. Sure.
C: Sure. So, for example, if I have some doc to share on how I would do that, should I just send it to the KubeVirt community, or...?

D: Yeah, so we have a mailing list. It's kind of funny you mentioned this: I just created a proposal for how to present designs to this mailing list. It's not finalized yet, but I have a document, and I'll try to post it in the chat real quick. It kind of gives a template of the kinds of things that...

D: Yeah, I would kick it off with your thoughts and let that be a starting point, and not worry. You don't have to put a lot of effort into proposing something — I wouldn't spend multiple days or anything like that. Just do the minimum to get your initial kernel of an idea out, and then we can iterate from there.
A: I've just onboarded two new users into KubeVirt, and I thought for sure we had the zero-to-KubeVirt-in-under-five-minutes process nailed down perfectly, and of course it's been days and days of trying to get these new contributors up and running.
C: Okay, great, good — that's one thing. Another thing I want to ask is: I know that someone has implemented this hot plug for disks, right? But there's no CLI to trigger it yet, is there? Is that in the plan — like, hot plug of a disk?

D: I think Alexander's working on it. Yeah, we have hot plug — we have it, it landed. We just haven't really talked about it quite yet, because there's still some... it works today; we're trying to optimize it a little bit.
C: Okay, because I saw that you added this API endpoint, but I don't see a trigger from virtctl. Actually, maybe that's a dumb question — do I need a command to trigger it, or do you have some other way to trigger it? It would be nice to have something I can take a look at, just to help you. — Yeah, there's a PR.

D: For that — that's what you were getting at, Michael. Here, I'll post it in the chat real quick. I think this is the right one — I mean, yeah, it should be 5351. So it's kind of my fault; I need to review this. I think it's ready to go. Maybe that's something you all can check, to make sure it gives all the functionality that you're wanting, or at least that it's a good basis for what you were looking for with hot plug.
C: Yeah, that is actually very helpful, because right now you have to restart to pick up a new disk, right? And we also see some issues during restart. Actually, I do have many questions there — like, for example, how do you keep things when you restart? For example, the MAC address, because that actually could be causing some really bad effects in our experiments. So I actually do have many questions there.
D: I like the mailing list because it's asynchronous. Sometimes I get lost — I'm just speaking for myself; Slack moves pretty fast sometimes, and we have different time zones and everything. If you need immediate, timely feedback, Slack, I think, is great; for kind of deep technical discussions I prefer the mailing list. That's my preference.

C: But okay... I guess this is not the right place to ask those details, though.
D: It's not the wrong place, and what you're asking is not naive. That's a well-understood kind of gap in our ability to use the pod network. When we look at how pods work on the pod network, they receive a unique IP address — every time a new pod starts, it gets a unique IP address — and we have virtual machines just running in pods.

D: So every time you restart, a new pod is used, just to fit within this ecosystem, and with the way it works today, if you're using just the default pod network, then yeah, you're totally right: you get a different IP, and that can cause some problems. I would definitely encourage you to reach out on the mailing list; there are potential solutions for that. They may or may not work for you, but it's a worthwhile discussion.
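For reference, one of the potential solutions alluded to here is giving the VM a secondary interface via Multus, so its addressing (and even its MAC) is decoupled from the pod network. A rough sketch, with placeholder names and an illustrative bridge CNI config — not a recommendation made in the meeting, and trimmed to just the networking fields:

```yaml
# Hypothetical secondary network (all names are placeholders).
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: vm-bridge-net
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "type": "bridge",
      "bridge": "br1"
    }
---
# The VMI then references that network with a bridge interface; a fixed
# macAddress keeps the guest's MAC stable across restarts.
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstance
metadata:
  name: demo-vmi
spec:
  domain:
    devices:
      interfaces:
      - name: secondary
        bridge: {}
        macAddress: "02:00:00:00:00:01"
  networks:
  - name: secondary
    multus:
      networkName: vm-bridge-net
```

Whether this fits depends on the cluster's CNI setup, which is exactly the kind of thing the mailing-list discussion would sort out.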
A: David, I think that we really need to get some networking use cases into the website or the user guide, because it seems like once you do anything more advanced — any more advanced networking — there's nothing there.

C: Yes, yes, yes — actually, I tried to attach the second network, and then I couldn't see the IP from outside, you mean. I'm also wondering how I can get that... I mean, there's more, but they definitely affect how I use it, or when I need it, I mean, really.
E: I was just gonna say: I've used the oVirt — oh sorry, the Open vSwitch — plugin for KubeVirt, and I've gotten virtual machines kind of off the pod network. I can maybe write something up, if that would be of interest to the group.
F: I think this may be the issue, but we have some resources, both in the user guide and in the kubevirt/kubevirt repo as well, regarding networking. But I agree that these advanced use cases are not documented really well, so what we should do, I guess, is collect — like, keep track of — these questions and prepare an FAQ, or improve the docs overall to cover this. There's some interest...
A: Go ahead — sorry, Peter. I talked with the CNCF regarding FAQs on the website and the end-user guide, and there was a big thumbs down on an FAQ. What happens is your documentation forks, and now you have two streams of documentation to keep up to date, so your FAQ becomes stale, and then you have confusion about what's the correct documentation to use.
F: I shared the blog post in the chat. It covers everything from the host network configuration for secondary networks, to the Multus CNI, to the KubeVirt definition needed to get attached to these networks. So it's not exhaustive documentation, but it's the best thing we currently have, I'd say.

F: The thing with this one is that it's a blog post about integration between multiple components — it's not only KubeVirt. So the question would be: do we want to keep the KubeVirt documentation pure and not cover these third-party components, or are we okay covering the integration part of things?
C: Great, yeah, thanks for providing the links and the offers. I will definitely take a look there.
B: Can I talk about something else? (Absolutely.) Right now we are not using it, but we plan to go back to it: today all of our remote desktops are delivered over HTML5; this is based on SPICE today.
B: I don't know why Red Hat is not supporting SPICE anymore, and we plan to do it ourselves if Red Hat doesn't, okay.

D: So what about the SPICE server part? How would you be exposing that?

B: Not only the HTML5 part, but also the server side — they have enhanced the SPICE protocol a lot to handle many things, so we're...
B: And the other part that we are also working on, and Red Hat has, let's say, moved away from — surely you know about the...

B: ...having 3D working: paravirtualization of 3D. We don't use vGPU; instead, we use paravirtualization. This is working well over KVM on a Linux guest, but there was a work in progress inside Red Hat, and — I talked with the guy — they are not doing anything more. This means...
B: For the future, 3D only supports a Linux guest OS, and we need to have Windows guests too, yes, and also OS X guest OS. We have everything working very well over vGPU on top of KVM right now, but we plan to have paravirtualization, because — what is the goal? A user, for instance, asks for one gig, and if anyone is not using the GPU, somebody else can use more, you understand — I think, like overbooking, or what you call overcommit. That makes overcommit for GPUs, okay.
D: Okay, I'll say that — so, you talked about macOS a few times. That's something we don't have a great amount of experience with in the KubeVirt ecosystem right now. We understand...

D: We even understand other things as well, but macOS isn't one of the ones that we have any experience with, I would say.
A: I didn't know about that application.

B: The missing part is that we need to run on top of Mac hardware, okay.

B: We need to have GPUs on it, and a lot of memory — 600 to 768 gigabytes of RAM — and things like that.
D: Okay — can you actually assign it to me? Just make a comment, and then it will remind me to actually look at this.

A: Yeah, that sounds pretty awesome.
B: I'll put the YouTube link in the chat window, okay? It explains about the 3D video and the guest VM. Okay, it's from a Red Hat guy.
C: So one more question for me: do we have any published docs on the scaling performance, or things like that?

C: Okay, then how about, for example, console access — do we have some numbers there, like how many concurrent console sessions can be supported, or things like that?
D: We know that it's limited. One of the limitations with the console and the VNC access, the way we're doing it, is that we're proxying it through the Kubernetes API, which is a limited resource. So the more connections we create, especially with VNC, the more things we have going through that single point — or, I mean, maybe it's horizontally scaled, but we can...

D: We know this... it's all gonna depend on network and hardware, how limited that is. I'll say this: it's limited enough that we've discussed alternatives — coming up with a VNC gateway of some sort that's dedicated to handling these connections and scales independently of everything else, just to handle console and VNC — and we have some ideas on how to do that. That's just one of the gaps where we haven't had enough pressure to actually execute on any of it. And you talked about scaling — just, I guess, virtual machine scaling performance?
B: Can you make the topic about macOS another topic, and not put it on the videos? I'll send you here a link on how to do it on top of...

A: ...about how to use macOS under KubeVirt.
B: The guys here say we need to double up the team for help, okay, because I bring too many things to do.
A: The GitHub issues have been really ballooning up the past couple weeks. My predecessor gave me a 17-page document with ideas of what the community team should do, and there's no shortage of work on the engineering side — the backlog over there is pretty deep as well.

A: If your teams would like to participate, we would love to have you guys. It would be fantastic.
D: Yeah, that's really helpful. I mean, I know that you're doing that for your own self-interest as well, obviously, but what we've seen — and one of the reasons these backlogs grow — is a lot of traction, especially in the past year, with KubeVirt. We're getting more contributors, but the contributors aren't necessarily keeping pace with all the other traction that we're getting.

A: Yeah, it's been really amazing, the past...
A: I took over community organization mid-January, and I think I've onboarded six new contributors just in the past two months — getting them started working with the repos, getting them engaged with the Slack channels, and just, like, the paperwork kinds of things that help them get started with the community — and all of a sudden, I'm just...

A: We have folks coming out of the woodwork to help out, and with this newly ignited interest in the project we have NVIDIA and ARM and Google. We knew Google was here for a while, but it's been very hard to get any communication with them, and now they're appearing right here — I can't believe it.
A
So
anything
I
can
do
to
help
you
help
you
get
get
deeper
engaged
with
the
community
just
reach
out
to
me.
It
feels
like
I
work
24
hours
a
day.
I
have
three
kids
so,
like
my
my
days,
look
like
swiss
cheese,
so
try
and
put
in
my
time
whether
it's
at
during
work
hours
or
10
p.m.
At
night,.
B: I would like to understand exactly who is involved inside Red Hat in what they call OpenShift Virtualization. This is KubeVirt, correct?

A: Well... technically, most of the Red Hatters that are in this meeting are under OpenShift Virtualization, but as a community organizer I have to draw a strict line between the Red Hat work and the community work, and that is why I keep reminding everybody to stop using our internal Jiras and email for community affairs.
D: I want to be clear about something: there is no OpenShift Virtualization work that's separate, necessarily, from the KubeVirt ecosystem. The policy here is that everything we do goes into KubeVirt first and then trickles down into OpenShift Virtualization. So, yeah — I just want to make sure that nobody's confused into thinking there's a separate team. No, no.
A: Hey David, is there something more official on that we can publish — on how Red Hat handles the sponsorship and the delineation between upstream and downstream?

D: I'm not sure. I think that's always been our... I'm not aware of us behaving any differently than what I just described, across the entire company. So it's kind of a company policy that we work upstream, yeah, and we take those bits downstream. So if somebody understands what our downstream product is and what projects are part of it, then by contributing to those upstream projects they are influencing it.
A: Yeah — oh, Josh is actually with us. Josh, do you have something you can talk about on that?

I: No, I mean, he summed it up pretty well. That's essentially it, right: Red Hat packages open source projects into products — I mean, not exclusively so, right, because... I don't know that anybody else is making a product out of KubeVirt yet, but it's a CNCF project, right, so it's entirely possible — even fairly likely — that somebody else, like, say, SUSE, will make a product based on KubeVirt.

I: And so then, you know, if you're contributing to KubeVirt, you're effectively contributing to both of those products. That's how open source works. And, you know, for Red Hat...

I: Unlike a few other vendors, we try to do as much work in the public open source project as we can, because, from our perspective, any work that we do on a Red Hat product only is extra work, if you follow me, because it often means, you know, maintaining a fork, maintaining a patch set on something — which is really a pain.
B: Let me show you how we are doing it — there is a paper on Google about how to do it exactly. When you log off, you are able to dump it from the RAM to the regular disk, and when you are up again you roll it back from the disk to the RAM. (Oh, so you're restoring it. Oh, that's... yeah! Yeah!) Let me show you the first thing: it's this one.
D: So, just to clarify: GCP provides an in-memory RAM disk, and are you all layering on the functionality that performs the backup and the restore?

D: When the VM is shut down, the contents of the RAM disk are synced to a persistent disk? (Oh yeah.) But I have a guess...
D: I haven't seen that done before; that's interesting. I mean, I've certainly seen people use local storage, but to actually back that up...

D: Faster than SSD, yeah. And what would be the use case for this, you know?
B: The RAM is there; we need to use it, specifically for performance, and it came with 600 gigs, because we need to use more CPUs, and more CPUs means more memory. At the same time, we need to use the whole node — this means that I need to use the entire server, regarding the licenses for Microsoft — so it's: oh, let's use that memory, because we were using only 256.
I: We tried it, okay, because, I mean, after all, when you access a VM, if you have a lot of free RAM available, the VM gets copied into memory anyway — although not necessarily all at once. But...

I: Right, yeah, you could warm it, and so then the only thing you're doing with the RAM disk is making a second copy of the VM in memory, so that when it's syncing to disk it's actually just making a second copy in memory, yeah. You understand?
I: Yeah, I come from the database world, and so we've made...

I: ...in order to speed up ephemeral data, yeah. But that's why I'm asking the question about simply not fsyncing, because one of the things that you can at least do in the database world is mount a file system that does not fsync, and...
B: ...still using HFS+, the same way. I saw that Red Hat has worked so far on VirtualGL for Windows, and they threw that away; this is not finished.
A: So what's the mechanism you're using to sync data across your hosts?
A: ...you joined... we're at 8:20 — I'm so sorry to keep everybody for so long. (Yeah, thank you so much, guys, for your time.) Do we want to do a bug scrub this week? We did a big one last week.
D: Yeah, we're pretty overtime; I'd say let's table everything — and, I mean, you guys can continue if there's still more of it, but I need to go.
A: Okay, let's close out the meeting, then. Andre and Jane, it was great to have you — we look forward to future collaboration — and thank you, everybody, for joining us. We'll see you next...