From YouTube: KubeVirt, its networking, and how we brought it to the next level / Andrei Kvapil (Palark)
Description
HighLoad++ Armenia 2022
December 15 and 16, 2022
https://highload.am/2022/abstracts/9648
Short abstract
When choosing KubeVirt as our main virtualization solution, we were unsatisfied with the existing networking implementation. We developed and contributed some enhancements to simplify the design and get the most performance out of the network using KubeVirt.
...
A
We have two talks remaining before the end of the day, and both are focused on different aspects of building infrastructure on top of Kubernetes. In the first of the two, Andrei Kvapil from Palark will tell us about virtualization in Kubernetes with KubeVirt and how its networking works. Andrei, the floor is yours.
B
My name is Andrei Kvapil. I usually architect solutions based on Kubernetes, design and develop cloud platforms, and also work a lot with software-defined storage. The main technologies I work with today are Kubernetes, KubeVirt, and LINSTOR. I work at Palark. At Palark we do DevOps as a service: we provide our technologies and help other companies prepare and deploy their software into Kubernetes and cloud-based environments. And that is why we needed a cloud: we wanted to build our own cloud platform that we could use.
B
What is a cloud platform, actually? A cloud allows you to take physical resources, such as CPU time, RAM, and disk space, and carve virtual machines out of them. So we can have many virtual machines; we can spawn them and scale horizontally. Those are the features a cloud provides. And of course we wanted this cloud to be offered as a service, so anyone could go to our API and use it to deploy their virtual machines.
B
It should be a platform that can work on premises, so you can install it on your own servers. This cloud should be extendable, so it works with all the technologies we already have: we use Kubernetes a lot, and we use LINSTOR for storage, so we wanted a solution that would work well with all of them. And this cloud should provide an API, because offering a well-known, understandable API to your customers and integrations is really important for every cloud provider.
B
Okay, so which solutions did we consider? We were choosing only among open source solutions, because we contribute to open source a lot, and we considered a few of them before deciding which one to use.
B
Open source alone is not always enough. Software can be open source, but when it is controlled by a single company, that company can do anything with it. For example, OpenNebula had a really bad case when they closed-sourced the migrators between major releases of their cloud. We don't want such problems, and we don't want vendor lock-in. The other problem is that these solutions have poor Kubernetes integration.
B
That means we would have to develop a lot of integrations, such as CSI plugins, to make Kubernetes work well inside those clouds. The first solution we considered was OpenStack, of course, because it is the default choice for building your own cloud. The problem with OpenStack is well known: it has an overcomplicated architecture.
B
If you look at a diagram of OpenStack, you'll see a lot of microservices; every microservice has its own database, and you need to configure the connections between each of them. As we know, when things are complicated, they break very easily. We wanted something simpler; we like the KISS principle: keep it simple, stupid. The next solution was KubeVirt, but it wasn't perfect either.
B
KubeVirt lets you run virtual machines inside Kubernetes, but as a result you get a complicated stack inside every pod just to connect the virtual machine to standard Kubernetes networking. So we found there was no ideal solution; probably we needed to develop something new. How would we do that? We would take Kubernetes, because we love Kubernetes. We would take libvirt, because it is the main tool and the main library for spawning virtual machines. And we would do it inside containers, because why not: virtual machines can run inside containers.
B
It's basically the same thing. But then we realized: that's interesting, but KubeVirt is actually doing the same thing already. We don't like its networking, but can't we just fix it? Okay, let's take a closer look at KubeVirt. KubeVirt is a fully Kubernetes-based solution, it is a CNCF project, and it has a growing community, if we compare all these projects.
B
As I was saying before, KubeVirt now has even more stars on GitHub than OpenStack. And what is important are the big players: if you look at who is contributing to KubeVirt, you'll see such companies as Red Hat, NVIDIA, ARM, and openSUSE, and the number of big players grows every year.
B
That's really cool, and as we can see, it has community-driven development, like Kubernetes itself: there is no single owner of the project, but many different companies contributing to it. So we don't have the problem of someone cutting off a piece of it, like removing the database migrators, as it was with OpenNebula. It also has a simple and understandable architecture.
B
It is a standard Kubernetes operator. How does it look? First, there is the KubeVirt operator, which deploys four microservices. There is virt-api, which handles all the API requests to the virtual machines; using virt-api you can connect to the virtual machine console and do some other interesting things. There is virt-controller, which implements the standard controller pattern in Kubernetes.
B
That means you have a custom resource in Kubernetes of kind VirtualMachine, and virt-controller manages this resource. The next one is virt-handler: a special component deployed as a DaemonSet on every node and granted all the permissions it needs. It actually reconfigures the pod to make it able to run the virtual machine. And virt-launcher is a Go binary that runs libvirt and the virtual machine inside the pod. In fact, every virtual machine in KubeVirt runs inside a pod.
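To make this concrete, here is a minimal VirtualMachine manifest of the kind virt-controller manages; this is a sketch, not from the talk, and the name and container disk image are illustrative:

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: demo-vm                  # illustrative name
spec:
  running: true                  # virt-controller will create a VMI, and thus a pod, for it
  template:
    spec:
      domain:
        resources:
          requests:
            memory: 1Gi
        devices:
          disks:
          - name: rootdisk
            disk:
              bus: virtio
      volumes:
      - name: rootdisk
        containerDisk:           # demo disk image shipped as a container
          image: quay.io/kubevirt/cirros-container-disk-demo
```

After `kubectl apply -f` on a cluster with KubeVirt installed, a `virt-launcher-demo-vm-…` pod appears on some node and runs the VM.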
B
How does it look? If we have a virtual machine, we need to take care of a few things. First, how it will run: how it will utilize resources such as CPU time and RAM; that's what the container runtime provides us. The next thing we should care about is Kubernetes storage: how it will utilize the resources of the Kubernetes storage subsystem; that's what the CSI interface provides. And the last thing we should care about is how it will be connected to the network.
B
As you may remember, these services are deployed as a DaemonSet on every node. When we run the virtual machine, a new pod is created. In this pod we have virt-launcher, and virt-handler, which has full access to the node's root filesystem, configures the virtual machine pod, which is a normal, unprivileged pod. It actually manages the cgroups and the network: it creates network interfaces and configures the passthrough of devices.
B
If you need some GPU inside the virtual machine or, I don't know, an SR-IOV device, it does the same thing before the virtual machine is launched. Later, virt-launcher runs libvirt, which runs the virtual machine. So we can see that every pod runs its own libvirt daemon, which controls just a single virtual machine.
B
When live migration is finished, the old pod gets destroyed. So in this way we can see that the pod is just a specific entity needed to run the virtual machine.
B
Okay, now we know how KubeVirt creates the virtual machines; let's see how it operates with storage. As you may remember, I told you that KubeVirt tries to reuse all the Kubernetes entities, and in this way it reuses the CSI interface the same way you usually do. So when we have a node and run a pod, we can specify the volumes that should be attached to this pod.
B
First, let me explain another thing. Volumes can be in filesystem mode, and KubeVirt also pushed to change the CSI spec a little bit, so you can use block volumes inside Kubernetes: raw block devices are mapped directly into the pod. In this way you get faster access to your block devices and better performance.
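A block-mode claim of the kind described here looks like this; a sketch, with the storage class name purely hypothetical:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: vm-disk
spec:
  volumeMode: Block              # raw block device mapped into the pod, no filesystem layer
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
  storageClassName: linstor-ssd  # hypothetical storage class name
```

The VM then references it via a `persistentVolumeClaim` entry in `spec.template.spec.volumes`.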
B
In the same way we can run the next pod and attach as many devices as we want. But there's a good question: when we run a virtual machine, we need to expand it from some image, a "golden image" in OpenStack terms: an image containing a predefined, pre-installed operating system. How can we upload it into Kubernetes? Out of the box there is no interface for doing that.
B
KubeVirt has an additional microservice called the Containerized Data Importer. It is a separate operator handling a specific entity with its own custom resource called DataVolume, and in its spec we specify how big the created volume should be and where to take the image from. It can be some HTTP source; in the same way, we can store the image somewhere in a Docker registry.
B
So when we create the DataVolume resource inside Kubernetes, it spawns a standard PVC, a PersistentVolumeClaim, which binds to a PersistentVolume, and then a new pod is created that takes this image, from HTTP in our case, and downloads it onto our persistent volume. Later we can just connect it to our virtual machine and use it as is.
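The flow just described can be sketched as a DataVolume with an HTTP source; the image URL is hypothetical:

```yaml
apiVersion: cdi.kubevirt.io/v1beta1
kind: DataVolume
metadata:
  name: ubuntu-golden
spec:
  source:
    http:
      url: https://example.com/images/ubuntu.qcow2   # hypothetical golden-image URL
  pvc:                          # the PVC that CDI will create and fill
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 10Gi
```

Creating this resource makes CDI spawn an importer pod that downloads the image into the newly bound PV.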
B
When we speak about KubeVirt networking, there are two things we have to think about: the backend and the frontend. The backend means: when we create the pod, how are the interfaces created for it? Usually that is our standard CNI plugin.
B
When we create a Kubernetes cluster, we install some CNI plugin, and this same CNI plugin is used for everything; that's called pod networking. The next option is Multus. Multus is a Kubernetes extension that allows you to specify another CNI plugin, or as many as you want, and attach them to your pods. And the next thing, when we run the virtual machine, is the frontend: how we connect the tap interfaces of the virtual machine to the interfaces of the pod. Here we have a few methods, the basic and default one being masquerade.
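Backend and frontend meet in the VM spec: `networks` picks the backend (pod networking or a Multus attachment), `interfaces` picks the frontend binding. A sketch, with names illustrative:

```yaml
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: extra-bridge             # illustrative secondary network
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "type": "bridge",
      "bridge": "br1"
    }
---
# fragment of the VM's spec.template.spec:
#      domain:
#        devices:
#          interfaces:
#          - name: default
#            masquerade: {}       # frontend for the pod network
#          - name: secondary
#            bridge: {}           # frontend for the Multus network
#      networks:
#      - name: default
#        pod: {}                  # backend: the cluster CNI
#      - name: secondary
#        multus:
#          networkName: extra-bridge   # backend: the NAD above
```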
B
Let's see how it works inside. For example, we have a node with a physical interface, and then a pod which runs the virtual machine. For this pod, like for any other pod in our cluster, a veth pair of interfaces is created: it's like a virtual wire connecting the pod network namespace with the node network namespace, plus some CNI magic.
B
We actually don't care how the CNI plugin acts; we are mostly interested in what is happening inside the pod. Inside the pod we have a bridge. This bridge is created by virt-handler, the virtual machine is connected directly to this bridge, and between the bridge and the virtual interface of the pod we have masquerade rules, which are just standard iptables rules redirecting all the traffic between those two interfaces. Kind of complicated, isn't it?
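In the VM spec, the masquerade frontend described here is a one-liner; as a sketch (field layout from the KubeVirt API, port chosen for illustration):

```yaml
# fragment of spec.template.spec:
      domain:
        devices:
          interfaces:
          - name: default
            masquerade: {}
            ports:
            - port: 80           # with masquerade, only declared ports are NAT-ed in
      networks:
      - name: default
        pod: {}
```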
B
The next mode is, I wouldn't say more difficult, but it is even less performant, because masquerade isn't that performant either. Slirp mode uses the standard QEMU user-space networking to make it work.
B
We can see the same scheme: we have the virtual machine, and to redirect all the traffic from outside the pod to the virtual machine inside, we have a QEMU process binding to the TCP ports, which you also have to specify in the virtual machine spec, and those ports are redirected to the virtual machine.
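The slirp binding differs from masquerade by a single key; a sketch, with the port illustrative:

```yaml
# fragment of spec.template.spec:
      domain:
        devices:
          interfaces:
          - name: default
            slirp: {}            # QEMU user-space networking
            ports:
            - port: 22           # ports QEMU will forward into the guest
      networks:
      - name: default
        pod: {}
```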
B
Slirp can also be used only with pod networking. The next mode is more universal; it's called bridge mode. That's actually the mode I showed you at the beginning of the presentation. It simply connects the virtual machine to the interface of the pod directly through a bridge.
B
We have the same scheme: we have the pod, the veth pair of interfaces, the CNI plugin, then the tap device of the machine and the bridge connecting the pod interface with the virtual machine interface. In this way, when the virtual machine runs, it gets the same IP address as the pod, and we get the best performance of all the existing modes. This mode can be used the same way both with Multus and with standard Kubernetes networking.
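The bridge frontend needs no port list, since the guest takes the pod's IP directly; a sketch of the corresponding fragment:

```yaml
# fragment of spec.template.spec:
      domain:
        devices:
          interfaces:
          - name: default
            bridge: {}           # guest is wired to the pod's veth via an in-pod bridge
      networks:
      - name: default
        pod: {}
```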
B
The next mode is SR-IOV. Yeah, I was lying a little when I said the previous one was the most performant: SR-IOV is the most performant mode. But the problem with SR-IOV is that it works only with physical function devices. If you have network cards that can be split into multiple virtual cards, each with its own PCI interface, they can be passed through to the pod, and the virtual machine will just use them as is. But to use this mode you need a specific configuration, and you need specific network cards that support it.
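SR-IOV interfaces are always wired through a Multus attachment backed by the SR-IOV device plugin and CNI; a sketch, with the network name hypothetical:

```yaml
# fragment of spec.template.spec:
      domain:
        devices:
          interfaces:
          - name: fastnet
            sriov: {}            # a VF is PCI-passed-through into the guest
      networks:
      - name: fastnet
        multus:
          networkName: sriov-net # hypothetical NAD backed by the SR-IOV CNI
```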
B
The next mode is macvtap. It allows you to connect the interfaces directly to the physical interface of the node. To make macvtap work, you need to use the specific macvtap CNI plugin; you can't use it as the default for your Kubernetes cluster, only as an additional interface for your virtual machines. And it works the following way: when we have the pod, a macvtap interface is created for it and attached directly to the physical interface of the node.
B
And since this is an ordinary tap interface, it can be used directly by the virtual machine. That works nicely, and it works everywhere; you don't need specific hardware for it. But the problem with this mode is that it works only with Multus, and you lose all the features of Kubernetes networking that we are used to, such as Kubernetes Services and policies.
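For reference, the upstream macvtap binding being discussed is driven through Multus roughly like this; a sketch, assuming the macvtap CNI and device plugin are installed and the corresponding KubeVirt feature gate is enabled, with names illustrative:

```yaml
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: macvtap-eth0             # illustrative; exposes host NIC eth0
  annotations:
    k8s.v1.cni.cncf.io/resourceName: macvtap.network.kubevirt.io/eth0
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "type": "macvtap"
    }
---
# fragment of spec.template.spec:
#      domain:
#        devices:
#          interfaces:
#          - name: hostnet
#            macvtap: {}          # tap device handed straight to the guest
#      networks:
#      - name: hostnet
#        multus:
#          networkName: macvtap-eth0
```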
B
So we decided to enhance this mode a little and teach KubeVirt to use macvtap not with Multus but with pod networking, so that we actually don't use Multus at all.
B
How did we do that? We have the standard Kubernetes networking scheme with veth interfaces, and the CNI plugin controls everything inside the node. When we create the macvtap device for the virtual machine, it is connected not to the physical interface of the node but to the veth interface of the pod.
B
So when we run the virtual machine, it can directly use the same veth interface the pod has, and it gets almost native performance, the same as if we had created macvtap interfaces on the physical side instead of veth. We also have another interface that is used just to run DHCP and hand the correct IP address to the virtual machine.
B
So we changed KubeVirt in this way and found that it works, and it works really nicely. We use Cilium as the CNI, so we have network policies and Kubernetes Services networking, and we don't need Multus at all; the isolation is enforced by the policies.
B
If we compare these three modes, we have two sets of results, with flannel and with Cilium, and in either case you can see that the macvtap mode we developed has lower latency than the others.
B
But here is a good question: consuming the CNI interface this way, we have the problem that we can't live-migrate the virtual machine, because every node has its own pod CIDR assigned, and any pod we create on a node gets a specific address from that pod CIDR, the subnet assigned to that node.
B
So when we create the virtual machine, it also gets an IP from this pod CIDR, and we can't easily live-migrate it to another node, because the new pod will have a different IP address, and during live migration the virtual machine would somehow have to be updated, or notified that its IP has changed: hey, you should release it and obtain a new IP address.
B
So when you run the virtual machine, a new pod is created, and it gets an IP address from this VM CIDR, and we also have another microservice that routes a /32 subnet just for this pod through the correct node. So when we have the virtual machine and start its live migration, a new pod is created, and this pod has exactly the same IP address.
B
So at one moment we have pods with identical IP addresses in the same cluster, but the routers send the traffic only to the correct node. When the virtual machine is migrated, the routers change their routes to go through the correct node. In this way it works more smoothly; afterwards the old pod gets destroyed.
B
So in the end we got a really interesting system. We got native pod networking that works with KubeVirt without Multus or any extensions; we can use native pod networking for virtual machines.
B
We have all the features enabled, such as Cilium network policies and Services as well; you can use standard load balancers to route your traffic to the virtual machine. And we have the fastest binding, macvtap, instead of that big and ugly chain of bridges. We have working live migration, and when a virtual machine is live-migrated, its IP address does not change. And to manage all of this we have specific custom resources for managing the virtual machines and the images and disks for them.
B
That's actually it. Here are a few links to understand this concept better: you can see my previous talk about the difference between traditional virtualization platforms and cloud platforms, and here are a few links to the community channels of the KubeVirt open source project; you can join the community meeting every Tuesday.
B
Thanks to my colleagues who helped me prepare this presentation; they listened to me and gave suggestions. And here it is. Now I'm ready to answer your questions.
A
Thank you very much. For those of you who are watching remotely, we accept questions in the Telegram chat, and we have one from the audience.
C
Thank you for the presentation. Can you tell us about real-life cases where people use this kind of virtualization?
B
We have an interesting client; we haven't started working with them yet, but it is quite a big client. They want to implement VDI based on KubeVirt: they want to run Windows 11 servers, passing through the GPU, and those GPUs should be carved not with NVIDIA GRID but using GVM technology, which allows splitting a physical GPU into many virtual ones and running virtual machines on them.
B
Those technologies can work with classic containers too, but they want to run Windows, and it is not easy to run Windows in containers, I guess.
D
Hi, thank you for your speech. So, deriving from your first slides, your plan is to provide a cloud for customers based on Kubernetes. Do I get it correctly that currently there is no way to provide a networking service based on Kubernetes?
B
I got your question; it's about the implementation of VPCs. Yeah, we wanted to implement VPCs, but currently this is at the beginning phase; we want to do it in the next phase. VPCs can be implemented the same way using Multus: you just specify another CNI plugin that works with VXLANs, you'll see another interface, and the virtual machine will be connected to it directly.
B
Do we have any plans, actually, for what you mean by VPN for customers? For interconnecting their networks, yes. We were thinking about this, and I think that since everything runs inside the virtual machines, it can work the same way as with traditional virtualization platforms: you can run your own VPN server inside your virtual machine, expose it through a Kubernetes network load balancer, and then the user can connect to it from his own infrastructure and access everything inside of it.
E
Hi, thank you for the talk. I had a question: are the normal pods and services available from within these virtual machines, or are they completely isolated?

B
Yes.
G
Related to our task, and comparable with the OpenStack world: the first one, could you explain whether this solution can already be used to build a customer-facing platform? I mean, do we have permission control, roles, projects, security groups?
B
Yeah, I would do it this way: every namespace can be its own tenant, and for every tenant you give a namespace to run their virtual machines in; for this namespace you can add resource quotas and everything like that. As for being production-ready, I would say it is, but you should understand that KubeVirt, like OpenStack, is not a finished product. You can use it; yes, it is really easy to install into Kubernetes and try.
B
But when you look at the spec of these interfaces, you should understand all the aspects, such as which type of networking frontend and backend you want to use, and whether you want block volumes or shared mount points, so it can be really difficult for the end user. But after all, I see that it works, and it works nicely, and in many aspects it can already replace OpenStack.
B
Yes, the API can ensure that. Actually, you have a command-line tool, virtctl: you type `virtctl console <virtual machine>` and you get straight into the command-line interface of the virtual machine. And, extending your question, is there any GUI for KubeVirt? Yes, there is, but it works only with OpenShift. You can use VNC, and there are some charts there.
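The day-to-day virtctl workflow just mentioned looks roughly like this; a sketch, with the VM name illustrative:

```shell
virtctl start demo-vm          # power the VM on
virtctl console demo-vm        # serial console; exit with Ctrl+]
virtctl vnc demo-vm            # graphical console via a local VNC client
virtctl stop demo-vm           # power the VM off
```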
B
This is done by virt-launcher. I would show you the scheme again, but it's too many slides to click back through. You have virt-launcher, a Go binary that runs libvirtd, and it also runs the DHCP server; this DHCP server is built in, it uses a Go library.
G
Yeah, and the question about the IP addresses: you've shown that you create some new resource, a CIDR for the virtual machines. Am I right that this is, by analogy from OpenStack, a floating IP network, an external network, or something like that?
B
There are no such entities. You can create some kind of external networking using Multus: if you use Multus, you can specify a CNI bridge that gives you IPs from a specific range, and that will work. That's actually what the Red Hat guys are trying to do: to implement the same networking design that OpenStack has.
B
But in terms of Kubernetes you don't need all of that. You just have pod networking, and when you need to route external traffic into Kubernetes, you use load balancers; in case you have an on-premises, bare-metal infrastructure, you can use MetalLB.
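Exposing a VM through such a load balancer is just a regular Service; a sketch, assuming the launcher pod carries the `kubevirt.io/domain` label KubeVirt sets in common setups (names illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: demo-vm-ssh
spec:
  type: LoadBalancer             # on bare metal, MetalLB can allocate the external IP
  selector:
    kubevirt.io/domain: demo-vm  # matches the virt-launcher pod of this VM
  ports:
  - name: ssh
    port: 22
    targetPort: 22
```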
B
Yeah, I would say the container is just a specific, necessary entity to run the virtual machine process, and that's what KubeVirt does. Yeah.
F
Yeah, what's the best tool to prepare images for use with KubeVirt?
B
Actually, KubeVirt can reuse all the images made for OpenStack. All you need to have inside is a DHCP client and cloud-init, because KubeVirt also gives you the opportunity to specify a cloud-init config. So for OpenStack we can use Packer, and later, that's a good question, how can the images be stored for KubeVirt? You can store them in a standard Docker registry. The image for KubeVirt looks like a Docker image with a /disk directory inside, and in that directory you store a single disk image file.
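Wrapping a disk image into such a containerDisk is a two-line Dockerfile; a sketch, with the file name and registry path illustrative:

```dockerfile
# Wrap a qcow2 disk image as a KubeVirt containerDisk
FROM scratch
ADD --chown=107:107 ubuntu.qcow2 /disk/
```

Build and push it (`docker build -t registry.example.com/vm-images/ubuntu . && docker push …`), then reference it from a `containerDisk` volume in the VM spec.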
A
I think we are running out of questions at the moment. Thank you very much, and we need to choose the best question.
B
No, I'm naming the question about the console and how to manage it, because that question allowed me to explain how you can operate KubeVirt from the user side. Okay.