From YouTube: KubeVirt Community Meeting 2021-06-16
Meeting notes: https://docs.google.com/document/d/1kyhpWlEPzZtQJSjJlAqhPcn3t0Mt_o0amhpuNPGs1Ls/edit#heading=h.stbrmqkhufl2
A
Hello, everybody. This is the weekly meeting for project KubeVirt. I'm your host, Chris Caligari, and let's begin.

A
Okay, sharing my screen now. I'm glad to see folks filling out the attendees.

A
Oh, we have a couple of new people. Feel free to introduce yourselves now.
A
And do we have anybody else who would like to say hello? No? Okay, let's get into it.

A
Okay, let's get into the agenda, then. Daniel has the first bullet point.
C
Hi everyone, just a quick heads up. I hope everyone can hear me, by the way. We are going to rename the default branches to main for the repositories kubevirt, build, and project-infra. We're planning to do this on Friday.

C
Before we start this, we will of course also give another heads up to the kubevirt-dev mailing list, just as a reminder that this should happen on Friday. This Friday.
A
Sounds good, thank you for that notice, Daniel. Could you also help us with user-guide and kubevirt.github.io?
C
Yeah, sure. I think you are talking about what John Herr is trying to tackle, right? Yes, I've contacted him; first I wrote to him and explained my strategy for how to rename branches, but I think he's out this week.
A
Great, appreciate that. Just so you know, kubevirt.github.io still uses Travis CI as our merge strategy, so I'm working hard to get that switched over to Prow this week. You should see some pull requests into project-infra.
D
Hey, so this is an issue I posted on the mailing list last week, and we even talked about it last Thursday in SIG Scale. This is kind of interesting. If you do want to open it up, Chris, sure, just to give folks a flavor of what's there.

D
In releases earlier than 0.40.0, virt-handler creates a ton of LIST API requests, and you don't actually see it until you reach large amounts of scale. I'm still working out exactly what the numbers are, because it kind of varies from what I'm seeing for different releases, and I can probably talk about that more in SIG Scale.

D
Next Thursday, when I get the numbers. But basically what happens is that virt-handler creates tons and tons of these LIST requests, and when you hit like 700, 800, 900 VMs in a zone or a data center or whatever, eventually the LIST requests add up. They overwhelm the API server, and the latency goes from milliseconds to almost a minute fairly quickly. This issue was spotted, and there is a pull request that was merged in March, which is listed there, and from what I've seen testing so far...

D
It's solving the issue. It's hard to tell how much scale this adds, considering that different releases are affected differently, and there could also be other issues at play, but so far the fix does look good. The reason I wanted to bring it up is that if you are using a release earlier than 0.40.0...

D
...your scale limit right now is roughly 600 to 700 VMs; that's right around where you'll start to see this. Basically, every time you do a GET request, you'll notice how long it takes to get any sort of list of the VMIs back.
E
I think the concerning thing to me is: it's pretty bad that this got introduced. It's great that they got it fixed, and it's great that we know what happened, but we don't have any sort of mechanism to protect us from this in the future: no auditing of the API calls we make, and no way of seeing when an increase occurs. We just find out from whoever's testing this in production.
D
Yeah, absolutely. One thing I was thinking about with this issue is that it gives us something concrete we can test in CI. That would actually be really valuable, because we have something we can reproduce.

D
If we were to go back, we should be able to detect this precisely, and figure out a number of things, like what the latency is and the precise scale. Whatever tool we come up with that measures this stuff, we can use this issue to gauge its success.

D
So that's something I see as a silver lining here, something we could use to measure against in the future. But yeah, I totally agree, David. It would be good to have, per commit or per release or whatever, a way to know what these numbers are: both the scale numbers and the performance numbers.

E
I think we could even make a pre-submit to catch it. Well, there are several layers to this. We could possibly even create a pre-submit: for example, create 20 virtual machines in CI as a pre-submit, and we expect this type of outcome, give or take a little bit. Maybe we'll allow 2x even, and if we get something like 5x the API calls during that creation period, then fail it and say, hey...

E
This needs to be investigated; something is occurring here. Or if new API calls get introduced against our API that weren't expected to occur during this, that would be another signal that something is off.
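The pre-submit budget E sketches could be as simple as a ratio check; the numbers below are invented, and a real gate would get its counts from the audit log or from client instrumentation:

```go
package main

import "fmt"

// CheckBudget compares observed API calls against an expected
// baseline and returns false when the ratio exceeds the allowed
// factor, the way a pre-submit gate could fail a job.
func CheckBudget(observed, expected, maxFactor int) bool {
	return observed <= expected*maxFactor
}

func main() {
	expected := 40 // invented budget for creating 20 VMs
	fmt.Println(CheckBudget(75, expected, 5))  // within the 5x budget
	fmt.Println(CheckBudget(500, expected, 5)) // blows the 5x budget
}
```

A Prow pre-submit could run such a check after creating the test VMs and fail the job whenever it returns false.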
D
Yep, yeah, definitely. This is just one example; this is LIST, and there's more to it. This is just something we've seen on our side with some of our production systems: the UPDATE call gets called a ton when we're at large scale. There are a lot of UPDATE calls that happen, with VMs and VMIs being updated while they're in the running state.

D
There are just a lot of calls, and eventually, when you reach a certain scale, the updated objects get into etcd, etcd has to compact the data, and eventually that costs a lot of CPU and memory; watches get closed, things happen, and it does limit things. So I completely agree.
E
I'll try to follow up on this a little bit. I think I have an idea of how to begin auditing this stuff, because the Kubernetes config (the data structure we use to create our clients) has a way to hook into the round tripper, so we can inject some tracing logic there and begin auditing it that way.
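A minimal stdlib sketch of the round-tripper hook E describes; client-go's rest.Config exposes a WrapTransport field that works the same way, but nothing below depends on it, and the endpoint URL is illustrative:

```go
package main

import (
	"fmt"
	"net/http"
	"sync/atomic"
)

// countingRoundTripper wraps another http.RoundTripper and counts
// requests per HTTP verb, which is enough to notice a sudden jump
// in LIST (GET) traffic from a client.
type countingRoundTripper struct {
	next   http.RoundTripper
	counts map[string]*int64
}

func (c *countingRoundTripper) RoundTrip(req *http.Request) (*http.Response, error) {
	if n, ok := c.counts[req.Method]; ok {
		atomic.AddInt64(n, 1)
	}
	return c.next.RoundTrip(req)
}

// stubTransport stands in for the real network transport so the
// sketch runs without a cluster or a server.
type stubTransport struct{}

func (stubTransport) RoundTrip(*http.Request) (*http.Response, error) {
	return &http.Response{StatusCode: http.StatusOK, Body: http.NoBody}, nil
}

// CountGets issues n GET requests through the counting transport
// and returns how many were observed.
func CountGets(n int) int64 {
	var gets int64
	client := &http.Client{Transport: &countingRoundTripper{
		next:   stubTransport{},
		counts: map[string]*int64{"GET": &gets},
	}}
	for i := 0; i < n; i++ {
		// Hypothetical VMI list endpoint, for illustration only.
		resp, err := client.Get("http://example.invalid/apis/kubevirt.io/v1/virtualmachineinstances")
		if err != nil {
			panic(err)
		}
		resp.Body.Close()
	}
	return gets
}

func main() {
	fmt.Println("GET count:", CountGets(3))
}
```

Wiring the same wrapper into a real client would let CI compare per-verb totals across commits.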
D
Yeah, okay, that sounds cool. Basically, the way I've observed this was just looking at the Prometheus data and then building something in Grafana that scrapes the endpoints, based on what endpoints KubeVirt exposes on the API server. So yeah, if there's something we can hook into programmatically, that'd be awesome.
D
Cool, okay. I'll update the... so what I'll do with this issue... Well, actually, that's another thing I want to ask. So this issue: in releases earlier than 0.40.0...

D
Anything that is using Prometheus will be affected. What I'll do is get the numbers, roughly, for a few of the releases, to get a general gauge of the scale, and I'll post them on the mailing list so folks are aware. Do we want to... is this an issue that we never want to consider backporting? Or is this something we should just inform folks about and, you know, record?
E
It's pretty loose. We do have a formal policy, and the formal policy is essentially: you can backport a bug fix, or anything that falls under our backport policy (that's the criteria that has to be met in order to backport), as far back as you want, as long as CI still runs. So if CI is running and we can validate that it worked, then good.

A
Okay, there you have it, Ryan: backport to 20 or 30 different versions.
E
What would people think about the ability to snapshot a virtual machine and upload that as a container image? I'm not necessarily talking about container disks; I'm talking about a container image that contains the disks as well as the VM spec and any other metadata associated with that VM, and then being able to restore from that container image. So it's almost like we're using containers as the packaging and delivery mechanism for snapshots.
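The packaging half of that idea can be sketched with a plain tar stream standing in for an OCI image layer; the file names and the manifest snippet are invented, not an actual KubeVirt format:

```go
package main

import (
	"archive/tar"
	"bytes"
	"fmt"
	"io"
)

// PackVM bundles a VM manifest and its disk contents into one tar
// stream, the way an OCI image layer would carry them. The layout
// and file names are illustrative only.
func PackVM(spec string, disk []byte) []byte {
	var buf bytes.Buffer
	tw := tar.NewWriter(&buf)
	files := []struct {
		name string
		data []byte
	}{
		{"vm.yaml", []byte(spec)},
		{"disks/disk0.img", disk},
	}
	for _, f := range files {
		tw.WriteHeader(&tar.Header{Name: f.name, Mode: 0644, Size: int64(len(f.data))})
		tw.Write(f.data)
	}
	tw.Close()
	return buf.Bytes()
}

// UnpackVM restores the manifest and disk from the tar stream.
func UnpackVM(pkg []byte) (spec string, disk []byte) {
	tr := tar.NewReader(bytes.NewReader(pkg))
	for {
		hdr, err := tr.Next()
		if err != nil { // io.EOF ends the archive
			break
		}
		data, _ := io.ReadAll(tr)
		switch hdr.Name {
		case "vm.yaml":
			spec = string(data)
		case "disks/disk0.img":
			disk = data
		}
	}
	return spec, disk
}

func main() {
	pkg := PackVM("kind: VirtualMachine\n", []byte{0xde, 0xad})
	spec, disk := UnpackVM(pkg)
	fmt.Printf("%q %d\n", spec, len(disk))
}
```

Real OCI layers are just (possibly compressed) tarballs, so a pair of functions like these is roughly what an image-based snapshot would push to and pull from a registry.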
G
This just came up recently, and I really like this idea. Something my mind immediately extended this with is: well, if it's in a container, then you can use skopeo and move it around, potentially to an offline cluster, and you can...

E
Begin versioning it. You naturally get versioning just with container tags and things like that, and it's pretty portable once you get it into a container image.

E
Possibly somebody... I mean, I don't know how... I know that the storage team was looking at Velero, yeah.
E
It's kind of like a production standard for a lot of people, so maybe in some cases people might consider this for disaster recovery.

I
Similar, whereas, you know, Velero backs up your resources to an object store, like S3 or something, this would just use the container registry.

I
That seems like the big theoretical difference.
A
And David, do we have a hook into the virtual machine to freeze memory?
I
So we're working on that now. We're implementing, right now, the fsfreeze implementation for snapshots, so we'll get the file systems consistent.

I
So we have online snapshots now; they're just not integrated with the guest agent and fsfreeze yet.
A
To be honest, I don't think snapshots... well, all right, I've got to tell you what my work history is: I was with Hewlett-Packard Global Services and their data archiving service.

A
So when we talk about doing backups, I immediately think of data archiving for an entire data center, at the petabyte range. When you get down to the individual virtual machine, in today's day and age there's an expectation that you have a solid deployment process, you have a configuration management mechanism of some sort, and then basically you should be able to bring up your app from ground zero within seconds or a few minutes.
A
And then it's a matter of restoring data, restoring your data sources. So when you think about the backup of an individual virtual machine, why?
A
You have this entire production process to build out virtual machines. There's...
E
I see what you're getting at; coming at it from that angle, snapshots may hold less value. But think about the scenario where somebody's building their own cloud, like an AWS, for example. Sometimes people take a standard AMI off the marketplace, say a CentOS 8 image, build an application inside of that image, then essentially snapshot that and make, like, a thousand copies of it. So they...
H
Not the creation from a running VM; that's more complicated. Okay. And another part on the snapshot thing: you're right, but there are also more and more workloads that are in-memory workloads, and if you have a disaster and you want to recover, and you have a lot of in-memory workloads, you might want memory snapshots even with proper disk snapshots, because it can take longer to rebuild all that stuff.
H
So yeah, I mean, that's the whole memory thing. There are definitely going to be cases where you have, you know, virtual machines that are special. But I think, for what suits your history, it's like the Velero backup, which is a real enterprise disaster recovery thing that really works best at backing up your whole namespace, or backing up a whole set of machines and restoring a whole big set. So I think this is more...

I
...the more bespoke use case.
A
They have their installation mechanism, and they don't like to bother with restoring the installation of the binary files.
E
Maybe, regardless of what we think about snapshots, they exist and they're going to continue to be developed on, with or without our agreeing that they are the most useful mechanism for different things.

E
So, back to the original thought: if we have snapshots, and we back these things up to container images, are we just inventing something that's clever, or is there value here? That's what I'm trying to determine. Is this something that's worth pursuing, or is it something that would just be "oh, that's neat" that nobody would ever use?
J
So, one other real quick thought on that. I've been working with some people who've been using KubeVirt, and for various reasons they've been rebuilding their clusters a lot. Each time, they would get a virtual machine running and then they'd have to rebuild their cluster, a complete wipe and start over, and there's no real way to simply export a virtual machine that's been defined inside KubeVirt. So the other thing that would be really interesting...
J
You know, I had actually taken a quick look at the CDI importer to see if there was a way to flip that on its head and use it as an exporter as well: the same type of workflow where, instead of doing a virtctl image-upload to import something into a PVC, you use that same type of concept to spit one back out again. And then, once you had it spit out, you could obviously re-upload and re-import that same image again. It's not quite as...

J
...fancy as what you're talking about with using container images (I think there's real value there too), but just having the ability to do a simple export of a virtual machine image, possibly from the virtctl tool, I think would be very powerful, or very useful, as well.
E
So I think the container image part is just a delivery mechanism. It's just a place that we all have, when we're talking about Kubernetes deployments, to store and retrieve data. If we had an object store that was external to the cluster, that would make sense too; the registry is just something that exists everywhere. That's the only reason I was considering it.
I
Yeah, I mean, definitely, when working with containers the registry makes a lot of sense, and I think, yeah, maybe we should come up with a standard format. This is an idea I've kicked around a bit with the virtctl export: come up with a standard format for exporting a VM and then have ways to upload that to different endpoints, and yeah, the registry seems like the natural first target.
J
Yeah, I think there is a standard for exporting. I don't know how much of a standard it really is, but it's the OVA or OVF, you know. Ideally that would be the way to go if we were going to do something that was a full export; that's kind of a standards-based thing.
E
I see that it's called OVF, the Open Virtualization Format. Yes, we were talking about that a little bit earlier in the company, and it's a standard, kind of. It sounds like it's a loose standard, to the point where, okay, if I back something up in the OVF standard, it's not necessarily going to be restorable by anything else that can restore OVF.

E
It's still platform-specific. So if I back something up as OVF on KubeVirt and try to restore it in oVirt, that's probably not going to work, unless we're really careful about how we do it.
J
Yeah, no, I agree. It's sort of a wishy-washy format, for lack of a better way to put it, but I figured I would mention it as a possibility. At least there's something out there that we can try to work towards. Again, to your point, it definitely is kind of a non-standard standard.
E
Since it's a non-standard standard, I guess I'm trying to understand the value of it, then. What's the value of us using this standard, rather than coming up with something that's natural for KubeVirt? I guess I'm thinking about it a bit from the perspective of: why not take the easiest and simplest path forward for us, rather than try to fit within a standard that doesn't add any value?
J
Yep. The only thought there would be: some applications and some vendors will distribute a virtual application as an OVF or an OVA, so being able to import those might be useful. That was where the thought came from: if it was something that we could import, then being able to export it as well might have some value.
I
Yeah, I've heard there are ways you can do that: in a container there are labels or annotations or something where we could stuff the VM YAML, or something very similar.

I
Yeah, yeah... yes, yeah.
F
I
I
have
just
a
question:
if
we
start
using
container
images
for
snapshots,
so
container
images
at
least
on
kubernetes
are
not
namespaced.
G
Well, it depends on the use case. If it's for security, that would cause all sorts of complications. If you actually wanted to move namespaces, that'd be pretty convenient. But I imagine what would be on the table would be actually putting the entire manifest of the VM into the container, so, for instance, the namespace would be saved there if you're attempting to restore. But you're very correct about this becoming suddenly globally visible.
A
Yeah, right. I did work with some LUKS-based file system encryption.
A
It works, and it works as expected: you have to console into the virtual machine and enter your password, with the very basic configuration.
A
So if you take a snapshot of that, you'd probably expect the same behavior from a virtual machine based on that snapshot.
E
Okay, I'll take this with me for a little bit, and I'll try to write something up a bit more crisp on the mailing list. I think I'm going to restructure this: instead of talking about snapshots, talk about importing and exporting virtual machines to container images, and we'll see where it goes from there. Does anyone...
H
One quick note on "globally visible": the snapshot would only be visible on the node it's sent to, right? It's not always globally visible. If you limit your namespace to a certain set of nodes, only people on the same nodes could read the snapshot, I think. But it's still...
E
Maybe that's kind of abstracted away. If we're talking about uploading to a container registry, then yeah, the security of the container registry determines what's visible there. It's the same thing as exporting to any other data store of some sort: you have to be aware of where you're exporting something and where it's visible.
E
So I don't know about trying to carry... maybe I'm misunderstanding this, but trying to carry the same visibility that this virtual machine had in the cluster, namespace-wise, into an external data store.
F
I would just replace "snapshot" with "images", but it's pretty nice, yeah.
E
I think there was a technical limitation on the client side, and I think that's gone now, because we hit that with our CI, our KubeVirt CI. We're packaging up our CI node images, which are used for building the Kubernetes clusters we run CI on, as container images, and we actually hit some sort of limit, back in like 2017, at around 20 gigabytes. I don't think it exists anymore, though; it kind of magically disappeared at one point, and I can't remember the details.
A
Yeah, at Credit Suisse bank we used to have a 20-gig RPM for installing Oracle.
A
Okay, are we about done with this topic, then? Do we have everything that we want to say about it?
A
Thanks, David, and thank you again for posting the deep dive into KubeVirt with Siam yesterday; it was quite awesome. I attached a link to the video archive for those that missed the live stream yesterday. And thank you, everybody, for showing up in chat also; I saw a couple of folks there, and even Dan K showed up, so it was really neat to see him present.
E
Yeah, just for the future: I didn't realize how helpful it was to have community members in the chat for these types of events, because it's very intimidating to try to pay attention to, you know, the conversation and everything, and keep up with the chat at the same time. I was just seeing things fly by in the chat, and then I started seeing community members answer questions.
E
It was like a relief. So thank you all for joining, and in the future we need to make sure that maybe we're even intentional about backing each other up. I mean, I'm really glad everyone came out. I want to make sure nobody in the future gets stuck in a situation where they don't have support, because it was so helpful.
A
Yeah, we totally missed that in the planning. I was going to be present anyway, and I actually missed Siam's invitation to the live stream.
A
He wanted to get me up there, and I'm actually really glad that you volunteered to take it over, David, because, despite everything, I'm not a very good public speaker. I can get up there and become a deer in the headlights and just not know what to do and lose my thoughts. But you got up there, and I was hanging out in chat, and all of a sudden I saw the chat start moving pretty quickly with questions, and you just jumped right in there, and of course others were there also and helped out with that.
A
Well, thank you. I appreciate it. I thought it went well. I did too; can't wait to do the next one. I'm really excited about doing another KubeVirt Summit; that worked out so awesome. So Josh and I have been talking about what we want to do for the next summit.
A
We were talking about doing it more often, but we didn't want to overburden everybody and dilute the momentum. Like, Red Hat Summit just got split up into two different parts, and it's actually going on right now; yesterday was the first day of part two, and attendance was miserable, so the event lost a lot of momentum from part one last month. There's probably going to be some fallout.
E
Yeah, consider an annual thing, because that gives everybody a single target, and I think we'll get better attendance.
A
Attendance at KubeVirt Summit was really good. We were all surprised at the turnout, and even the number of submitted papers was surprising; we had to create a committee to review all the papers and even had to turn some down. We didn't expect that.
A
So lots more to talk about in the near future. And of course we have the All Things Open demo being built out right now; again, if you want to volunteer for that, please let me know. We're building out an internet-distributed Kubernetes cluster running KubeVirt on Raspberry Pi.
A
So that's pretty cutting edge for us. I'm passionate about Supercomputing Con, so I'm going to be driving that one. I have a colleague at NASA who will be present at this convention, so I want to try to hook him into using KubeVirt.
A
So I do have some intentions there, and it will be the first time I'll ever have swayed his technical...
A
And if anybody has any other event suggestions, please let me know. Oh, KubeCon NA and KVM Forum: we still haven't heard any word back about whether or not our papers have been accepted.
C
Yeah, actually, I forgot about that; I would have put it into the agenda otherwise. But yeah, just a quick heads up that we now have basic REST API coverage available.
C
It's still kind of a preview, so there's no filtering of unnecessary API calls or anything; we need to improve on that, of course, but yeah.
C
If someone wants to have a look at that: there is a tool that is used to create the coverage from the Kubernetes audit logs, using the OpenAPI definition, and then it generates the API coverage report, which is just a JSON file, and the Prow lens displays that. If you want, just take a quick click at the example link, and then you can see what I mean with that. Maybe, Chris? Oh sure. No, the example link. Sorry, yeah. It's fine, it's fine, that's great!
C
Just so that you see how that looks: there is the REST API coverage report at the top of the page. You see it at the moment; it's 8.58%, which is of course great... like I said, the filtering is still missing. You can expand the details on the right of the REST API coverage report. Please scroll back up, Chris, thank you. Then, where the REST API coverage report is, you can click on the details.
C
And then you see the detailed reports: what URLs, and what combinations of verbs and fields, were hit. So yeah, that's just a quick heads up so that everyone knows we have this. The report needs to get triggered manually at the moment; you can also see how we do that in the PR. It's not on by default, but yeah, we're still working on that. Thanks.
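The number the report shows boils down to a set comparison between the endpoints in the OpenAPI definition and the ones seen in the audit log; a rough sketch of that computation, with invented endpoint strings:

```go
package main

import "fmt"

// Coverage returns the percentage of (verb, path) pairs from the
// OpenAPI spec that appear at least once in the audit-log hits.
func Coverage(spec []string, hits map[string]int) float64 {
	if len(spec) == 0 {
		return 0
	}
	covered := 0
	for _, ep := range spec {
		if hits[ep] > 0 {
			covered++
		}
	}
	return 100 * float64(covered) / float64(len(spec))
}

func main() {
	// Invented endpoints, standing in for the OpenAPI definition.
	spec := []string{
		"GET /apis/kubevirt.io/v1/virtualmachineinstances",
		"POST /apis/kubevirt.io/v1/namespaces/{namespace}/virtualmachines",
		"PUT /apis/kubevirt.io/v1/namespaces/{namespace}/virtualmachines/{name}",
		"DELETE /apis/kubevirt.io/v1/namespaces/{namespace}/virtualmachines/{name}",
	}
	// Invented audit-log tallies: only the LIST endpoint was exercised.
	hits := map[string]int{
		"GET /apis/kubevirt.io/v1/virtualmachineinstances": 12,
	}
	fmt.Printf("%.2f%%\n", Coverage(spec, hits)) // 1 of 4 endpoints hit
}
```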
A
Okay, we have some pull requests that need attention.
F
Okay, this is from me. So, actually, I have a specific problem with this PR that I would love to have some feedback on. Basically, I'm adding a new command to virtctl, and this command deploys a container image. I would like to avoid hard-coding the registry and the tag of this image.
F
The problem is that a regular user cannot get the KubeVirt CR. So what I did is I added a new CRD that holds the image information, but that of course introduces a new CRD in KubeVirt. So I would like to ask you if this is fine for you, or if you have any other suggestions.
F
Yeah, the problem is... I mean, there is a flag where you can override the image. The thing is, maybe you release new images more often than the virtctl version, so...
F
Okay, yeah, I mean, I can hardcode quay.io; I think that's the standard registry where we push the images, and then, if somebody wants to use this with a special image, they can always override it. So, okay.
L
I would still recommend that, like Kevin said, you take the registry as a variable during build time and read the tag dynamically from the endpoint that the virtctl version command uses.
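That build-time-variable approach might look something like this; `go build -ldflags "-X main.registry=..."` stamps the defaults, and the variable names and values here are illustrative, not virtctl's actual ones:

```go
package main

import "fmt"

// These defaults can be overridden at build time, e.g.
//   go build -ldflags "-X main.registry=... -X main.tag=..."
// so the tool does not hard-code a registry. Names and values
// are illustrative only.
var (
	registry = "quay.io/kubevirt"
	tag      = "latest"
)

// ImageRef resolves the image to deploy: an explicit flag value
// wins, otherwise the build-time registry/tag pair is used.
func ImageRef(flagOverride, name string) string {
	if flagOverride != "" {
		return flagOverride
	}
	return fmt.Sprintf("%s/%s:%s", registry, name, tag)
}

func main() {
	fmt.Println(ImageRef("", "example-tool"))
	fmt.Println(ImageRef("registry.example/custom:v1", "example-tool"))
}
```

Reading the tag from the server at runtime, as suggested above, would replace the stamped `tag` default with whatever the version endpoint reports.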
L
Okay, we can think about improving it even further later, but I think that unblocks you and is workable for the future, yeah.
H
We could also provide the registry pull-spec from... I don't know if that's something we want to do.
L
virtctl version brings you the virtctl version, but it also calls virt-api to get the server version, and this is something you could use.
A
Great, thanks, Alice; you're moving along pretty nicely on that one.
H
Me again, with SSH. I just want to get more feedback from the community about this discussion we're having. I'm building the virtctl ssh command, and up until yesterday I was wrapping the locally available SSH client and using that for establishing the final SSH connection. Roman, very validly, mentioned that we should have a Go-native SSH command, so I implemented that, but it's very basic right now; for example, it doesn't read your SSH config.
L
Yeah, and what I also like about the port forward is that you can use any SSH client then, instead of wrapping one specific one, and it's pretty clear what you're doing. Regarding the built-in one: I wouldn't suggest that if Go didn't have such a great SSH library; I see this as an opportunity, yeah.
H
Which
yeah
look
at
that?
It
didn't
look.
I
maybe
I
looked
at
a
different
one,
but
it
look
didn't
look
so
fleshed
out
or
I
I
I
don't
know
if
I
wanted
to
add
the
dependency
but
yeah.
I
still
have
the
goal
of
publishing
a
weird
ctl
image
that
just
contains
a
binary,
so
you
can
use
vertical
on
your
cluster
and
then
ssh
wouldn't
work
if
it
wrapped
openssh,
because
that's
not
there
so
yeah.
I
like
that.
What.
H
Yeah, so I would still go for adding a port-forward command; that's what I'm working on right now. And keep the ssh command: right now I have the default on the Go client, and a flag called --local-ssh that does what it did before on my PR, wrapping the existing ssh binary with a port-forward proxy command.
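The core of such a port-forward is just shuttling bytes both ways between the local socket and the tunnel; a minimal sketch using in-memory pipes in place of the real listener and API-server stream:

```go
package main

import (
	"fmt"
	"io"
	"net"
)

// forward shuttles bytes both ways between the user-facing local
// connection and the tunnel toward the VM; this is the core of a
// port-forward, and ssh (or anything else) just dials the local side.
func forward(local, tunnel net.Conn) {
	done := make(chan struct{}, 2)
	go func() { io.Copy(tunnel, local); done <- struct{}{} }()
	go func() { io.Copy(local, tunnel); done <- struct{}{} }()
	<-done
}

// Echo returns what a client sees after sending msg through the
// forwarder to a trivial echo server standing in for the VM.
func Echo(msg string) string {
	clientSide, localSide := net.Pipe() // user-facing local socket
	tunnelSide, vmSide := net.Pipe()    // stands in for the API-server stream
	go forward(localSide, tunnelSide)
	go io.Copy(vmSide, vmSide) // the "VM" echoes bytes back
	clientSide.Write([]byte(msg))
	buf := make([]byte, len(msg))
	io.ReadFull(clientSide, buf)
	return string(buf)
}

func main() {
	fmt.Println(Echo("ping"))
}
```

In the real command the local side would be a TCP listener and the tunnel a stream through the API server, but the byte-shuttling loop stays the same.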
A
I
think
this
looks
pretty
cool
and
I
just
think
back
of
all
those
environments
that
we
we've
all
been
working
in
with
these
massive
jump
boxes.
H
Yeah
something
I
still
have
to
add
to
the
docs
or
something
that
roman
and
I
talked
about
in
private,
was
like
this
is
great
for
getting
access
to
a
vm.
But
if
you
are
in
an
environment
where
there's
a
lot
of
ssh
traffic
and
it's
like
your
primary
traffic
to
vms,
you
might
still
want
a
solution
that
it's
not
this,
because
it
still
goes
over
the
api
server.
It's
just
the
traffic
you
put
on
there,
you
put
on
the
control
plane
and
that's
not
always
what
you
want.
A
Yeah, definitely easier. I had some trouble getting into my virtual machines via SSH when I first started with KubeVirt, so even with the basic functionality, this looks much easier to deal with.
A
It's
been
around
for
a
bit:
okay,
well,
that
takes
us
to
8
a.m,
and
we
ran
out
of
time
for
mailing
list
review
and
bug
scrub.
A
Lots of sighs of relief here. So I'll end the meeting, and we will see you all next week, or talk to you on the mailing list. Sound good? Thank you, everybody. All right, have a good...