From YouTube: KubeVirt Community Weekly Meeting - 2018-09-19

C
So far, so good: progress is steady. We've been able to work with the CI team to stabilize their part. They figured out, with our help, that they have a bug in the Docker daemon, so they are going to fix it. On our part, I'm now finishing the stabilization of my old test, which will serve as a template for everybody else to write tests. Once this is finished and merged, I will write a proposal email for everybody else to pick their... sorry, to fix their tests so that they follow the guidelines.

B
Awesome, thank you. Next up, the I/O threads work. I just wanted to go over some recent changes. We switched to a policy-based approach a couple of weeks ago; this week the big change was that we are now using automatic instead of dedicated, because we found that asking for dedicated disks automatically would tend to ask for too many I/O threads if you had a large number of disks and a small number of CPUs. So instead we use a more reasonable approach where it's sort of quasi-shared: it basically round-robins the I/O threads and creates a pool with a reasonable number of threads based on how many CPUs you have. If you ask for a dedicated disk, that still works, but it's something you do with malice aforethought for a specific disk.
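
As a rough illustration of what that looks like on the wire, here is a minimal sketch of a VirtualMachineInstance using the policy-based approach. The field names (ioThreadsPolicy, dedicatedIOThread) and the v1alpha2-era apiVersion are my assumptions from the discussion; the exact shape may differ in your KubeVirt release.

```sh
# Minimal sketch, assuming the ioThreadsPolicy / dedicatedIOThread field
# names and a v1alpha2-era API; adjust for your KubeVirt version.
cat <<'EOF' | kubectl apply -f -
apiVersion: kubevirt.io/v1alpha2
kind: VirtualMachineInstance
metadata:
  name: vmi-iothreads
spec:
  domain:
    ioThreadsPolicy: auto        # pool sized from the CPU count, disks round-robined across it
    cpu:
      cores: 2
    resources:
      requests:
        memory: 512Mi
    devices:
      disks:
      - name: rootdisk
        disk:
          bus: virtio
      - name: datadisk
        dedicatedIOThread: true  # deliberate opt-in: this disk gets its own I/O thread
        disk:
          bus: virtio
  volumes:
  - name: rootdisk
    containerDisk:
      image: kubevirt/cirros-container-disk-demo
  - name: datadisk
    emptyDisk:
      capacity: 1Gi
EOF
```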
B
It's
changed
to
the
cpu
count.
I
used
the
wrong
setting
at
one
point
and
somewhere
in
the
codes.
I
need
to
fix
that
and
otherwise
I
think
that
PR
is
going
to
be
good,
so
that
should
definitely
hit
this
week
and
next
up
of
Marcin.
Would
you
like
to
talk
about
the
secret
and
config
map
disks
weren't,
okay,
okay,.

D
A few words from me about the secret and ConfigMap disks: the feature is merged, I mean the implementation, so we are able to use ConfigMaps and Secrets now. But there is one thing: I created a bug in Kubernetes about OpenAPI validation. Sorry, let me... this is the bug.
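
For context, here is a minimal sketch of how ConfigMap and Secret disks are consumed, assuming KubeVirt's configMap/secret volume sources as merged around this time; the ConfigMap and Secret names are hypothetical.

```sh
# Sketch only: 'app-config' and 'app-settings' are hypothetical objects
# in the same namespace; the volume source field names are assumptions.
cat <<'EOF' | kubectl apply -f -
apiVersion: kubevirt.io/v1alpha2
kind: VirtualMachineInstance
metadata:
  name: vmi-config-disks
spec:
  domain:
    devices:
      disks:
      - name: config-disk      # exposed to the guest as a disk
        disk:
          bus: virtio
      - name: secret-disk
        disk:
          bus: virtio
    resources:
      requests:
        memory: 256Mi
  volumes:
  - name: config-disk
    configMap:
      name: app-config        # hypothetical ConfigMap
  - name: secret-disk
    secret:
      secretName: app-settings  # hypothetical Secret
EOF
```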

E
Yes, that was a really interesting issue. The problem was that when we ran KubeVirt on an OpenShift trial, virtctl vnc did not work, and the remote viewer just showed "waiting for display 1". To understand what's going on: the traffic flow is really convoluted. The remote viewer connects to virtctl, virtctl connects via WebSocket to the virt-api server, and the virt-api server does a kubectl exec to the launcher pod; that one has a socket, and the socket is connected to the QEMU process. So that is the traffic flow, and it's already a bit complicated. After checking the traffic, I figured out that instead of only a line feed at the end of a network frame, we suddenly saw carriage return plus line feed. I did not find out why this only happens with the trial, but the reason was that in the exec command between the virt-api server and the launcher, we used the -i and -t flags, which request an interactive TTY, and the default behavior of a TTY is exactly this.
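
That TTY behavior is easy to reproduce locally: a pseudo-terminal's default output processing (ONLCR) rewrites line feeds to carriage return plus line feed. A small demonstration using util-linux `script` to allocate a pty; nothing here is KubeVirt-specific.

```sh
# Without a TTY, the output ends in a bare line feed (0a):
printf 'hello\n' | xxd | tail -n 1
# 00000000: 6865 6c6c 6f0a                           hello.

# Under a pseudo-terminal (script -c runs the command on a pty),
# ONLCR turns "\n" into "\r\n" (0d 0a) -- the extra byte seen in
# the VNC traffic:
script -qc "printf 'hello\n'" /dev/null | xxd | tail -n 1
# 00000000: 6865 6c6c 6f0d 0a                        hello..
```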

B
The quick answer is: for API server validation, I believe that when we were using just a bare socket, we couldn't impose any sort of RBAC rules on it at the Kubernetes level, because it was just exposed. By proxying it through the API server and then using a remote exec, it lets us use the basic Kubernetes permissions model for our subresources, which we didn't have before. Does that answer the question?

Yes, thank you.
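
To make the permissions point concrete: once the socket is proxied through the API server as a subresource, access can be granted with an ordinary RBAC rule. A sketch, assuming KubeVirt's subresources.kubevirt.io API group; the resource names are from memory and may differ by version.

```sh
cat <<'EOF' | kubectl apply -f -
# Grants whoever is bound to this Role access to the console and VNC
# subresources through the normal Kubernetes permissions model.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: vmi-console-vnc
  namespace: default
rules:
- apiGroups: ["subresources.kubevirt.io"]
  resources:
  - virtualmachineinstances/console
  - virtualmachineinstances/vnc
  verbs: ["get"]
EOF
```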

G
Just a short update: we resolved a bug in ovs-cni related to MAC addresses that prevented startup of the VM in two percent of cases, and it is merged; new images are built with this fix. We also added a manifest to the ovs-cni repository. This will deploy ovs-cni and Multus on an OpenShift instance; you just apply the manifest and it should be ready to use. That's it.
A
Yes,
so
this
is
a
smallish
heads
up,
so
we
had
a
couple
of
projects
which
we
need
to
to
test
if
they
continue
to
work
like
we've
got
the
queue
for
demo,
which
is
based
on
mini
queue
in
which
we
also
try
to
run
on
a
mini
shift
or
OC
cluster,
and
we
didn't
test
it
so
far.
So
the
problem
was
that
we
had
to
read
me
in
with
a
guy
describing
how
it
would
work
on
an
open
shift,
but
in
the
end
that
did
not
work.
A
So
that
is
very
charming,
because
that
means
any
other
project
can
just
use
these
four
lines
to
to
then
test
their
stuff
on
Travis
like,
for
example,
CD
I
could
use
that
a
pro
to
test
CDI
on
Travis,
yeah
and
I
hope
to
you
set
in
other
projects
as
well.
It's
no
complete
replacement
by
the
way
for
standards
yeah,
which
we
are
also
using,
but
it's
good
enough
for
a
lot
of
a
lot
of
things.

H
Kareem here. I would like to announce that the KubeVirt push-button trials on AWS and GCP are now available on the website kubevirt.io. For those who are not familiar with the term push-button: what we've basically done is pre-build images on AWS and GCP that pre-install upstream Kubernetes and KubeVirt on a cloud instance that a user starts up. We also have CI in place to build, test, and publish our images, so that's really helpful. And then, with help from Yu Xing, we developed a couple of pages on the website, one for each cloud, describing to users how to run the trials; at the end we also have pointers to a couple of labs that users can go through to play around with KubeVirt and CDI.

D
I have one question. I was wondering, because I had a problem: I would like to build Kubernetes 1.12 and run against it. So how do we build these Kubernetes images which we had, which were running on the CI? Do we have some template, pattern, or something like that? Because I see in the Kubernetes repo there are some scripts, like make and make image, and I was wondering how we provide our Kubernetes images in the kubevirtci repo.

C
I can answer that.

B
Well, just to clarify: the reason we can't just automatically consume a new Kubernetes or OpenShift version arbitrarily is, as was hinted at a second ago, that we do need an upstream Docker image built, but also sometimes things break, and so we need to actually test and ensure that it actually works before we can deploy it. So that's kind of the holdup with adding new providers. Okay.
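
For reference, the consumption side of those provider images looks roughly like this from the kubevirt/kubevirt tree; the KUBEVIRT_PROVIDER value is an example and depends on which dockerized cluster images kubevirtci actually ships.

```sh
# Sketch of running the test cluster against a prebuilt kubevirtci
# provider; the provider name below is illustrative.
export KUBEVIRT_PROVIDER=k8s-1.11.0   # choose one of the prebuilt cluster images
make cluster-up                        # boot the ephemeral cluster
make cluster-sync                      # build KubeVirt and deploy it into the cluster
make cluster-down                      # tear everything down
```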