From YouTube: Prow for KubeVirt
Description
Introduction to Prow and how KubeVirt can benefit from it.
Some of you might have already contributed to Kubernetes and seen all the funny labels popping up there when you open a pull request, and you see pretty much the same when you contribute something to KubeVirt. That is the first place where you see Prow in action, because Prow is not just about CI. Prow is also about lifecycle management, and it can do a few nice things for you. For example, you can use the /cc command to add people as reviewers to pull requests.
It can do some milestone management for you with /milestone and the milestone name, and you can close issues. /hold puts a hold label on your pull request, which you will see later on. Also nice: prefixing the title with WIP will add a do-not-merge/work-in-progress label. With /kind you can assign a specific meaning, like whether it is a bug or a feature, and there is a help link which shows you the commands Prow supports.
The command help link is here; you can see all the commands which Prow supports. In this case it is already from our Prow instance and not from the Kubernetes one. One nice thing about these commands is that many of them already work when you are not even in the organization, and more of them work when you are just a contributor. So we can do some additional privilege management with all those commands. But that is not all Prow can do for you.
It is of course also a CI system, and if you have Prow jobs defined (we will come to that later) you can do things like /test. There is /test all to run all tests. You can also have optional tests, which you can simply skip; for instance, tests that are only optional because they are still flaky. If you cannot merge the pull request because such a test fails, you can just run /skip and the optional tests go to green. Very important.
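As a sketch, the commands from this section as they would be typed into pull request comments (the job name here is hypothetical):

```
/test pull-kubevirt-unit-test
/test all
/skip
/hold
/hold cancel
```

Each command is a plain comment on the pull request; Prow parses it and reacts.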
You can run specific test lanes again with /test and the name of the test. You get a very nice log view, an artifacts view and a JUnit test result view, which are uploaded to Google Cloud Storage, so you can always look at them without Prow itself, and the logs are already visible in a live view. We will go through a nice example where you see all of it later on. Oh, and by the way, feel free to interrupt at any time.
There is also a component of Prow which is called Tide. Tide is a tool which automatically merges pull requests and makes sure that they are tested against the latest master. If you have multiple pull requests ready at the same time, it puts them all together in one merge pool, runs the tests on them and then merges them together. So Tide basically replaces the merge button which you normally have to click as a maintainer.
Very important commands here are /lgtm (looks good to me) and /approve. If you have a pull request and you think it is ready, a maintainer can come in and just write /lgtm and /approve, and if Tide sees that all required tests are passing, Tide will just merge it for you. A very nice thing here is that, in combination with OWNERS files, which are also a feature of Prow, you can even give community members who are trusted contributors these rights.
I can just say that all files in the project-infra directory (this is the filter for all files) can be reviewed by these people here and approved by these people there. It is even possible to build trees: in subdirectories I can have another OWNERS file with other people, so that the people at the top can still always approve stuff further inside, but the people in the nested OWNERS files can only approve from their subtree downwards.
A
But
ty
does
not
only
do
that;
you
can
also
configure
it
so
that
it
needs
specific
labels
to
be
present
or
absent.
I've
already
mentioned,
it
looks
good
to
me
in
the
proof
label,
but
we
also
have
a
lot
of
to
not
nurse
livers
like
do
not
merge,
hold,
do
not
merge
whip.
Do
not
merge
these
release
note
and
if
they
are
not,
there
then
type
the
simply
not
merge
it,
and
you
will
also
see
that
we'll
also
see
that
later
on,
we'll
be
follow
the
life
of
a
pro
chop
and
yeah.
That's.
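A hedged sketch of what such a Tide merge requirement could look like in the Prow config (repo name and exact label set are illustrative):

```yaml
tide:
  queries:
  - repos:
    - kubevirt/project-infra
    labels:            # labels that must be present before merging
    - lgtm
    - approved
    missingLabels:     # labels that must be absent
    - do-not-merge/hold
    - do-not-merge/work-in-progress
    - do-not-merge/release-note-label-needed
```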
Where we use Prow for KubeVirt CI right now is just two projects; it is kind of a demo setup right now. There we have project-infra, which holds Prow itself, or rather our configuration of Prow, and what is pretty nice is that we use Prow here to check its own config and the job configs: whether they are valid, whether all fields are present, and so on.
So we use Prow there to validate that the jobs and the configs are right, and if we say /lgtm and /approve, Tide merges it, and then there is a Prow plugin which immediately pushes the new configuration to the live cluster, and all components will load the new configuration. And then we have the kubevirtci repo; that is the repo which holds all our testing clusters.
Here we have a pull request in the project-infra repo where I want to add a new job. Don't look too much at the job itself right now; I am just adding a new job I want to run to the job YAML, and by accident I also changed the Prow configuration and added a plugin which does not exist.
First, you will get the usual information from Prow that you need an approval from an owner. It will also suggest people who can do that and show you the link to the OWNERS files with the people who are responsible, in case you want to add more. And here the test failed for me, the check-prow-config test, and I just insisted that it should work, so I ran it again. When we now follow the link to that job itself...
You will see the logs from when it was running, served from Google Cloud Storage, and here you can actually see the log line which is produced. It just says: error while loading the new plugin configuration, the plugin does not exist. Here comes a nice feature of Prow: I can click on PR history, and there I first see a run where everything looked fine. Then I pushed the wrong commit where I added the invalid plugin, and I can see my pull request history here and follow it up.
We can now do the following: Artyom can insist that it looks good to him; that is something I cannot do on my own pull request. So Artyom, please now write /lgtm. Should I also do it myself? If I do it, you will see a warning: you cannot LGTM your own PRs. Kind of makes sense, but Artyom can.
You can now see that what you saw before, where it was green for a short amount of time, was just a caching issue; to get the changes you sometimes have to reload your page. But now we see that the lgtm label was cancelled because I changed the pull request, and the check is now running. Artyom, can you please add the lgtm again?
A
Yep
there
it
is
again-
and
here
will
you
see
the
next
nice
thing?
If
we
now
go
to
the
tied
overview,
which
is
also
there,
you
can
see
a
nice
dashboard
which
shows
you
that
it
meets
all
labor
requirements.
We
looked
there,
it
has.
There
proofed
looks
good
to
me
label,
it
has
no
forbidden
labels
like
hold
still
not
merged.
So
let's
try,
if
it's
not
already
too
late.
It's
a
patrol
here.
Yeah, they are always uploaded, so they are always kept independent of your cluster state, which is pretty nice because it removes the burden of maintaining all that in the clusters. You can go to different providers, or whatever: you just get your infrastructure, you deploy Prow, you run your tests, and artifacts, test results and logs are just uploaded to the cloud.
It removes a disadvantage which we had before pretty often: in Standard CI, for instance, you sometimes cannot access logs and cannot access artifacts. Here all of that is kept independent of Prow itself, which is pretty good, so I kind of prefer it. We will see. And nothing is for free: even though we do not pay for it, we still pay something, it has to live somewhere. So yeah.
I think that is an alternative, and I think we are all interested, and we can discuss it internally to see which option we prefer. In the end, to me it is important that it is community and upstream visible, that it is transparent, so that contributors do not have to care about it. And I think we, as maintainers of that, need to look at what is low maintenance and what suits our needs at the moment, and make a decision based on that.
Okay, if there are no more questions, we can go a little bit more into the details of how the jobs themselves are defined. Prow supports a few different types of jobs: one are presubmits, the others are postsubmits, and there are periodics. Presubmits, I guess, are pretty clear: they are run before a pull request is merged, or as part of the merge process. You can mark them as optional, which means Tide does not require them, and you can configure them so that the trigger plugin does not trigger them automatically.
That means, if you create a pull request, such a job will not be run automatically, but as soon as the pull request meets the merge criteria, Tide will run the additional tests and only then merge it. For optional tests, which are normally only optional because they are flaky, you can always type /skip in the pull request and all optional tests will go to green.
A
Also
you,
if
you
add
new
chops,
you
might
just
want
to
add
them
and
see
on
how
they
go,
and
you
don't
want
to
confuse
people
with
failing
red
lanes
or
something
like
that.
So
you
can
also
set
report
to
false.
Then
you
can,
then
it
doesn't
show
up
in
the
github
pull
request,
but
you
can,
for
instance,
random.
Always
you
just
say
they're
optional
and
or
they
don't
have
fur
and
always
you,
but
you
shouldn't
report
back
and
you
just
check
it
out
yourself.
A
If
we
go
just
to
Dec,
for
instance,
these
jobs
would
then
it'll
be
visible
here
on
deck.
It's
just
they're
not
reported
on
github,
so
that
you
don't
confuse
commit
to
do
this
here.
We
also
see
the
test
which
which
worked
for
us,
and
here
we
see
two
tests
which
failed
for
us.
So
this
is
also
the
overview
page
where
you
can
also
find
your
jobs,
for
instance,
the
reason
it's
here
and
if
you
go
there
again
under
on
the
log
part-
and
you
get
to
that
with
you,
then.
Presubmit jobs support even more. You can also decorate them, or not; decorate is just another boolean field next to the spec, and it means this is a job which should upload everything to Google Cloud Storage. If you set it to false, the job will just run; you will see it report its state, but you have no logs.
A
No
artifact
server
can
still
make
sense
for
some
stuff,
for
instance,
if
you're
just
trying
out
stuff
it
might
be
a
possibility
and
another
nice
feature
for
physical
jobs
is,
you
can
also
specify
run
if
changed
and
there
you
can
specify
record
patterns
and
tests
are
then
only
run
if
a
file
in
that
pattern
changes
something
that
we
think
about
using.
This
is,
for
instance,
in
the
QT
I
Riku
ever
ahead
of
a
lot
of
different,
faster
setups
doesn't
make
that
much
sense.
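A sketch of run_if_changed (the job name and pattern are illustrative, not our real config):

```yaml
presubmits:
  kubevirt/kubevirtci:
  - name: check-cluster-provision          # hypothetical job name
    run_if_changed: "^cluster-provision/"  # only run when matching files change
    spec:
      containers:
      - image: alpine
        command: ["echo", "provision scripts changed"]
```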
If we now also have a short look at such a definition: you have presubmit config files which just contain presubmits. In there you name the repo you want to create a job for, and then you have a list of jobs you want to create. They basically just consist of these extra booleans, like always_run, decorate, optional and report, and then you have a spec section where you just put the pod spec in. That is the whole magic.
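Putting the pieces together, a presubmit definition along the lines described might look roughly like this (names and image are illustrative, not our real configuration):

```yaml
presubmits:
  kubevirt/project-infra:         # the repo the job is created for
  - name: check-prow-config       # hypothetical job name
    always_run: true
    optional: false               # Tide requires this job to pass
    decorate: true                # upload logs and artifacts to GCS
    spec:                         # a plain Kubernetes pod spec
      containers:
      - image: golang:1.11        # placeholder image
        command: ["make", "check-config"]
```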
The same goes for postsubmits. Postsubmits are just run afterwards, so once something is merged, and you want to, let's say, push new images then. And we have periodics, which can also be used for some stuff. Some projects, for instance cAdvisor, which I have seen in the Kubernetes community, push new images every day with the latest master state, without even checking whether master changed or not. And there is another mechanism which is called presets, which is similar to pod presets, and which we will also see later on.
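For contrast, hedged sketches of a postsubmit and a periodic (all names are made up):

```yaml
postsubmits:
  kubevirt/kubevirt:
  - name: push-images-on-merge    # hypothetical: runs after a merge
    decorate: true
    spec:
      containers:
      - image: alpine
        command: ["echo", "pushing images for the merged commit"]

periodics:
- name: nightly-image-push        # hypothetical: runs on a timer
  interval: 24h
  decorate: true
  spec:
    containers:
    - image: alpine
      command: ["echo", "daily push from latest master"]
```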
First, you would go to the jobs markdown on the test-infra page if you are not sure anymore how to write a test. It is just a how-to from the Kubernetes people about how to write tests; they explain the fields again and some execution behavior, so you can make the right choices. Then you add your job to our jobs, next to the other job descriptions. Right now that would be here; we can put it into a better place, maybe for instance the root.
A good thing is to first set new tests to optional: true (it actually reads false here, but it should be optional: true) and report: false, so that you can try them out once they are merged. Once they are fine, you do another pull request and set report to true and make them required, or whatever comes closest to what you want. And one question might be now: okay, now I have a job with a very simple pod spec in there; how do I get credentials and such into it?
That is very similar to pod presets. Here we have an example of a docker credentials preset: we have in the cluster a secret with a token which can push to the kubevirt org on Docker Hub. Your pod specification just needs to carry the matching label, and then the docker username and password are made available.
The values are injected into your container as files; in this case it is a secret volume, so the volume itself contains files with the username and password, and we just have environment variables here which point to the actual files, so that you do not expose the secrets by accident by echoing them or something similar. I mean, the security issues here are the same as for anything we put into a public, publicly accessible CI.
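A rough sketch of such a preset (label, paths and secret name are illustrative, not our actual configuration):

```yaml
presets:
- labels:
    preset-docker-credentials: "true"   # pods carrying this label get the injection
  env:
  - name: DOCKER_USER_FILE              # points to the mounted file, not the value
    value: /etc/docker-credentials/username
  - name: DOCKER_PASSWORD_FILE
    value: /etc/docker-credentials/password
  volumes:
  - name: docker-credentials
    secret:
      secretName: docker-credentials    # hypothetical secret in the cluster
  volumeMounts:
  - name: docker-credentials
    mountPath: /etc/docker-credentials
    readOnly: true
```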
And here is another example, for instance for Docker-in-Docker.
If you want to run Docker in there and just build things, we copied the base image we are using; it is the base image Kubernetes uses for doing container builds. If you just add that label to the pod, the environment variable will be set, that image sets Docker up for you, and you can just run your docker commands. Oh yeah, the bootstrap image we are using is coming from Google, and that already gives us a lot of magic.
A
So
you
don't
have
to
care
about
the
teens
details.
You
basically
just
add
some
basic
labels
and
we
will
inject
all
the
stuff
for
you
and
in
case,
since
it's
just
a
normal
part,
you
can
also
assign
to
your
job
a
service
account.
Someone
will
have
to
give
it
the
right
permissions,
but
then
it's
always
also
possible
to
create
anything
additional
parts
and
by
setting
there
they
write
on
a
reference
to
their
reference
of
the
pot
you're
running
in
you
can
get
information
for
the
pot
from
the
boundbox
API.
So, to sum it up: why I actually want to use Prow for KubeVirt is because of the automatic merges. We use it already for lifecycle management and, from my perspective, that works pretty well. Prow itself definitely scales far more than we will ever need; it is the Prow of Kubernetes, after all. We have live log views; we have logs, test results and artifacts stored in Google Cloud Storage; and as you have seen, you can access a pull request history with all jobs and logs.
A
Once
you
know,
once
you
have
read
that
there
is
no
magic
to
writing
the
chops
themselves,
there
are
a
few
boolean
switch.
You
have
to
understand
first,
but
then
you
just
write
the
pots
back
and
add
some
presets,
and
you
just
run
your
script
inside.
We
can
add
external
contributors
as
maintenance,
because
they
can
just
maintain
parts
of
the
code.
It is also pretty easy to monitor for us, because the core Prometheus stuff and everything is already there. As maintainers you can have a look there pretty easily too; it is not just for operations people. Here you can see the graphs on a dashboard I created for our Prow instance; right now you can see the Bazel cache which we are using for some jobs.
My plan to move forward would be to replace Travis and Standard CI completely with Prow, move some of our lanes in kubevirt/kubevirt which work pretty reliably, that is most of all the windows lane, and take one or two Kubernetes lanes as required. With the ability to retest specific lanes alone, it should be possible to merge stuff even if one of them fails from time to time, and then continue from here to stabilize the other tests with all the caching magic from Bazel and the Docker mirrors.
So if you have Knative installed, you can use Knative with Prow jobs. I do not think that you can use the Knative templates right now, but you can use the Knative build steps. Then, instead of just having one pod spec, you would have a list of steps there, which are then run by Knative Build.
Prow jobs are, in this case, the entry point. Knative can do additional things for you; for instance, Prow cannot run pipelines in that sense, Prow just reacts on GitHub issues, specifically on pull requests. So I think if Knative goes further, Prow can be a nice entry point from the GitHub perspective, and you can reuse it and it can evolve as well. I guess it is not contradicting, from my perspective.
So nothing is preventing it from working. When you look here, we have a successful build for the Kubernetes 1.10 cluster, so it really deploys the whole cluster and at the end reports back to GitHub. I mean, the end-to-end tests need more resources, more memory, because they start all the VMs, but apart from that their requirements are exactly the same. So there is no technical blocker.
What Prow also supports: you can have, for instance, a small Kubernetes cluster in Google Compute Engine which hosts Prow, and you can instruct Prow to run some Prow jobs on other clusters. If you give it the credentials, when you create a job the pods actually run there, so you can have separate build clusters. Whether that would help or not is another question.
At least the slowness, you know, should be solved now, with the new Jenkins dedicated to KubeVirt; I think that was resolved.
Definitely, if they want. I mean, some, I guess, have everything they need with the service as it is, and whether they change or not, I guess it also depends on your plans, on what you want to do. There are some good CI people here already; they currently maintain Standard CI also for us, and I hope that they are also interested in taking part in this.
For us, it is, you know, the best tool wins. I mean, we are not advocating to keep something; it is just a matter of: can we support more than one system at a time? And if it is mostly owned by you guys, that is perfect. I mean, we provide the infrastructure and we can focus on downstream automation, which is, you know, lacking resources.
I guess infrastructure is also a good keyword here, since I think maybe that also gives the team more time to properly maintain the hardware resources and everything, and to do all this other stuff, like the logging and metrics collection, to find incidents, you know, and not have to restart Jenkins.
With that I am done. Thank you for joining, and just drop me any further questions. I wanted to share the presentation also with the mailing list; there are just issues in making it public, so I will have to download it as a PDF or something and then send it to the mailing list. Okay then, thank you, and bye. Thanks.