A: Okay, hello everyone. Today is Thursday, March 17th, and this is the Cluster API Provider Azure office hours. As always, please follow the CNCF code of conduct and be respectful to everyone. If you'd like to speak, make sure to raise your hand and I'll make sure you get a chance to talk. If you can, please add your name to the attendee list right here, and I will share the document link. And if you have any topics or any questions, feel free to add them to the agenda discussion. All right.
A: So we do have a demo, or not a demo, I keep saying demo, but I guess a walkthrough, of how to write some end-to-end tests in CAPZ, from James. Before we start, are there any new folks here who haven't joined this call before and want to say hi, introduce themselves, and tell us what brought them to the project?
B: I think I might have come on this call once or twice, but hi. I am a PM supporting Cecile's team, and it's super exciting to get a chance to be more involved with Cluster API.
C: Yeah, I'll jump in. I've been helping with some of the end-to-end tests with CAPZ. Mostly lurking today, but hoping to maybe jump in with CAPZ and do some of these things. Awesome, welcome.
A: Anyone else? Magana, is that how you pronounce your name?
D: Yes, I'm joining this meeting for the second time, but last time I joined late, so I couldn't introduce myself. Recently I started contributing to CAPI and CAPA, and I'm looking forward to doing some contributions to CAPZ as well. And yeah, I'm looking forward to the demo today.
A: Awesome, good to see you. All right, and I think that's everyone.

A: Alright, so let's get started. James, are you ready to start?
E: Sure. It looks like we did get another item added to the agenda. Should we do those first, or...?
A: Sure, we can. Actually, you know what, let's start with the demo, and then we'll circle back at the end and make sure we leave the last 20 minutes for other topics, if anyone comes up with topics in the meantime.
E: I just need to have access to sharing. Yes.

E: Okay, awesome! That's where I wanted to be. It didn't seem like I could share my screen; I had a couple of UI browser tabs that I wanted to share as well. You can share? Okay.
E: Okay, cool. So feel free to interrupt me at any point in time. The idea of this was: there are quite a few settings in the end-to-ends, especially if you're running them locally, so I just wanted to walk through what some of those settings do, then talk about the different entry points that we have into our end-to-end tests, and do a little walkthrough of the code.
E: The first one is the Cluster API end-to-ends. These are the ones that you see on the PRs: when you submit a PR you'll see a whole bunch of tests run in GitHub, and that's these tests. For the most part they typically run against a released version of Kubernetes, and they run a suite of tests that makes sure Cluster API is working.

E: It's creating the clusters, it's creating the nodes, it's creating the load balancers, it's creating the VNets, those types of things. It also includes some of the CAPI tests, like the rolling MachineSets and CAPI adoption.
E: We have a whole suite that we run that makes sure we do all the basic CAPI tests, as well as Cluster API Provider Azure-specific tests.
E: The majority of these tests are entered through this script called ci-e2e, and I'll show that in a few minutes, but this is the one that you're going to see most often and probably interact with the most. And then we also have another entry point for conformance. The conformance tests can optionally be run on some of these PRs, but for the most part they run as periodic jobs in Prow, and I'll show this in the dashboard in a minute, when I switch back to the other screen. These run either nightly or maybe every few hours, and they run the Kubernetes conformance test suite: there are about 350 tests that get run against the cluster, and we use Cluster API to create the cluster.
E: These typically don't run against a released Kubernetes version; they use the CI builds. Kubernetes is constantly building CI versions with a bunch of different commits in them, maybe five or ten commits each, and then it publishes those binaries to a storage bucket. Anybody can access that bucket and pull down these CI binaries by going to this URL, where you replace the version string here with either latest, or latest-1.22, or latest-1.23, and that'll pull down the binaries that were built most recently.
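The URL pattern being described can be sketched in shell. The dl.k8s.io CI version markers below follow the convention mentioned in the walkthrough; treat the marker names and the endpoint as assumptions to verify against the CAPZ scripts.

```shell
# Resolve a Kubernetes CI build from a version marker.
# CI_VERSION_MARKER can be "latest", "latest-1.22", "latest-1.23", etc.
CI_VERSION_MARKER="${CI_VERSION_MARKER:-latest}"
CI_VERSION_URL="https://dl.k8s.io/ci/${CI_VERSION_MARKER}.txt"
echo "marker file: ${CI_VERSION_URL}"
# Uncomment to resolve the concrete build string that the marker points at:
# CI_VERSION="$(curl -sSL "${CI_VERSION_URL}")"
```

The marker file contains the full build version of whatever was published most recently for that branch.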
E: So they most likely have been built in the last few hours, off of whatever branch that was from Kubernetes. We actually use those constantly moving bits when we're running conformance, to make sure that Kubernetes didn't break something in CAPZ, and vice versa.
E: The script also has an option to build the Kubernetes binaries. So if you're testing Kubernetes locally and you have a Kubernetes branch on your machine, it will go out and build those Kubernetes binaries, push them to a storage account, and then build the cluster with those binaries. We mainly use this for our pre-submits in Cluster API, but you could potentially use it to test your own Kubernetes components if you wanted to, and the way this is entered is through this script, ci-conformance.
E: They typically run against a released version of Cluster API, so 1.2, I think that's our most recent version. And this is our Azure cloud provider: azure-disk, azure-file. They're testing those components, and they need a cluster to test them against. They have their own test suite, and so they run against those.
E: It's designed to run any kind of script after the cluster is brought up. So it brings up the cluster and then it will execute any shell script that you pass to it, and the entry point here is this ci-entrypoint script. So I'll stop there, in case anybody has any questions, or I wasn't clear or anything like that.
F: So, one question on conformance e2e. I get that we could actually test, let's say, the latest binaries from Kubernetes, and I understand that when the Kubernetes cluster is created we have that node image, right? So how does this binary get placed? For example, how are the kubelet binary and the other binaries getting onto the node?
E: A great question. So we take one of the reference images, and then we replace the binaries and the images that run. I can show you that real quick. So, in templates... this is getting into some of the details, but you're probably familiar with our templates; our flavors live in here, and then underneath this test folder we have a couple of different other folders. The ci one is the one that does that replacement for conformance.
E: The dev one is the one that builds those Kubernetes binaries and then releases them. So in ci here, if you come into prow-ci-version, we have a kubeadm bootstrap. It's a complicated piece of script, and I think there are some plans to maybe make it a little bit more generic, but what it does is go out and grab the CI version.
E: It grabs those packages. Let's see... yeah, you can see it's actually pulling that version down from that URL that I gave you, and then somewhere in here it's moving some of those packages it downloaded into place, and then it restarts the kubelet, so that we're actually getting the latest components.
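The replacement step described here (fetch the CI binaries onto the node, swap them in, restart the kubelet) can be sketched roughly as follows. The CI version string is a hypothetical placeholder, and the paths are illustrative, not the real prow-ci-version template contents.

```shell
# Hypothetical CI build; a real run resolves this from the marker file.
CI_VERSION="v1.24.0-alpha.0.0+abcdef"
BIN_BASE="https://dl.k8s.io/ci/${CI_VERSION}/bin/linux/amd64"
for bin in kubeadm kubectl kubelet; do
  echo "would fetch ${BIN_BASE}/${bin} and install it over /usr/bin/${bin}"
  # curl -sSL -o "/usr/bin/${bin}" "${BIN_BASE}/${bin}" && chmod +x "/usr/bin/${bin}"
done
# systemctl restart kubelet   # pick up the replaced binary
```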
B: Bridgette has a question: hey, hi. Just related to that: do you have to exactly specify the versions you want, or can you ingest something like the currently supported versions of Kubernetes, without having to manually go back and be like, oh yeah, they dropped 1.20, yourself?
E: Yeah, so the way these entry points work, and this is actually a good way to get into the actual implementation, is that you specify an environment variable that says "I want latest", and it'll go grab those things. So let's maybe step through one of these entry points.
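As a sketch, selecting the newest CI build via an environment variable before calling the conformance entry point might look like this. The variable name KUBERNETES_VERSION and the script path are assumptions to check against the repo.

```shell
# Ask the entry point for the newest CI build rather than a pinned release.
# Variable and script names are assumed, not verified against the repo.
export KUBERNETES_VERSION="latest"   # or e.g. "latest-1.23"
echo "requested version marker: ${KUBERNETES_VERSION}"
# ./scripts/ci-conformance.sh       # would consume the variable above
```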
E: Okay, so if we go into scripts... everybody seems to be interested in conformance, so I'll dive in there. So this is the entry point; they are all very similar. For every single cluster you have to set your Azure subscription, tenant ID, and those types of things; if you don't have that information, it bombs out. In Prow...
E: So when we run these up in Prow, there's actually a secret stored in the Prow cluster, and that gets mapped in. We set a label on our job that tells it to map that secret in, and then we parse that secret to get those Azure creds. We do a bunch of stuff to make sure that things are already there (we need kustomize, Go, all those things), and then in the end-to-end test we pick a random region.
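The random-region pick can be sketched like this. The region list is illustrative; the real list lives in the e2e scripts, and a time-based index stands in for whatever randomness the scripts actually use.

```shell
# Pick a pseudo-random Azure region for the test run; example names only.
REGIONS="eastus eastus2 westus2 westeurope uksouth"
count=$(echo "$REGIONS" | wc -w)
idx=$(( ($(date +%s) % count) + 1 ))
AZURE_LOCATION=$(echo "$REGIONS" | cut -d' ' -f"$idx")
echo "AZURE_LOCATION=${AZURE_LOCATION}"
```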
E: So these are the regions that we run against. And then down here: if you're running locally, it'll actually map it directly into your kind cluster instead of pushing it somewhere else. And then down here...
E: If you've set that build-Kubernetes flag, it will go and actually build the Kubernetes binaries and push them up to a storage account. There's another script in here, ci-build-kubernetes, and this goes and says: here are all the things we're going to build. It's going to map those images, build Kubernetes right out of the box, and then push them up to a storage account.
E: Just some settings: it generates an SSH key, or you can provide one, and then it'll call make test-conformance. So it just makes sure a bunch of environment variables and everything are all set up, and then it calls into this make test-conformance target, and this is where I think it gets kind of interesting.
E: Okay, so it calls into this target here, and it's setting the Ginkgo focus to conformance. One of the important things for the conformance tests is that our Ginkgo focus here is not the same as the Ginkgo focus that is going to be run for the conformance suite itself; I'll show you where that comes in in a minute. We pass some additional args.
E: test-e2e-local goes in, builds the latest Cluster API provider image, and then loads it into the kind cluster. And then in test-e2e-run, what's kind of cool is that we actually call Ginkgo here, so we're not running a script or something like that; we're calling directly into Ginkgo, and in this case we're passing the conformance flag for Ginkgo and a couple of additional things. So at this point we're going to enter into this folder here called test/e2e.
E: And test/e2e will come over to the e2e suite, and this is the entry point for conformance as well as the CAPZ PR tests that we run. So it comes in here, and it has a bunch of setup. This is CAPI-specific setup stuff, we're just kind of mimicking the same types of things, but it makes sure that there's a bootstrap cluster, and makes sure there's some kubetest config information, and then this is...
E: Yeah, so this is the before-suite; this is the thing that gets started. So in here it loads the configuration, it creates the bootstrap cluster, it makes sure the bootstrap cluster is configured with everything, and then finally it's going to hand that off. And this is where it kind of tears everything down at the end.
E: We have a bunch of different tests in here, and this will look familiar if you're familiar with Ginkgo: this does the Describe. So this is the conformance test; when we pass in that Ginkgo flag for conformance, it's going to select just this test suite. If we were to just run, say, the Azure tests in here... yeah, okay, so in here we have a whole set of suites, and this is what's run on the PRs: this is our workload cluster creation.
E: So this is the one that's run on the PRs, and this set of tests would actually be selected if we passed a different Ginkgo focus flag.
E: But if we go back to conformance, there's a bunch of different stuff that happens here. It makes sure all the environment variables are set. It sets up the secret for the cluster so that we can, you know, set up the cloud provider manager and things like that, and then down here is where it's going to get the Kubernetes version.
E: If we go to resolve CI version, you can see it's pulling out that CI version, and then, if it was latest, it will call that URL that I was mentioning earlier. So it kind of bounces through, makes sure of all those things, and does some extra logic, because you can actually specify some other versions, and it makes sure that all those versions are correct. One tip that I'll give here: if you're in VS Code or GoLand or some other IDE...
E: One of the things is that, out of the box, you won't get go-to-definition, find references, and those types of things. You can go into your settings, and in VS Code you can set...
E: Where is it... you can set your build tags. If you set e2e, that will allow the compiler to see those files. You can do the same thing in GoLand, and it'll actually look at those files and compile them, so you can hop around, which is kind of nice. So, okay, yeah: if we're using the CI or PR artifacts, it does some extra work; otherwise it does some more setup. There's some additional setup...
E: ...if you're on Windows, and then finally it's going to actually create the cluster. So we use CAPI's test framework to create the clusters, and then at the very end it calls into kubetest, and so down here...
E: This function here is provided by the CAPI test framework. Again, it creates a Docker container that's using the conformance image produced by Kubernetes, and then it passes a bunch of information to the conformance test, to the e2e binary, and runs it. The biggest piece is this kubetest configuration file, and this is hard-coded right now: it goes and looks in this data folder, kubetest, and it's a Viper config file in here.

E: We don't actually use Viper anymore, but we're using the same format. So what it does is: these are the flags that you'd pass to the e2e test binary, so ginkgo.focus, and this is where we're actually passing in to run the conformance tests. You can pass additional information here to do that; for instance, with Windows we pass a bunch of extra information. For the Windows e2e tests, we pre-pull all the test images, because they're so big, to make sure there aren't flakes.

E: So I pass just an extra flag to the e2e test binary that turns that on. You can see that I can override the focus: Windows doesn't have a definition of conformance, so there's a subset of tests that we run, and we skip Linux-only tests and things like that. And so when that is done, it returns, does the cleanup, and reports out the tests in XML format, and then Prow picks those up and displays them. So yeah, that's kind of like a full walkthrough.
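A kubetest config file in the Viper-style format described above might look like the following sketch. The key names (ginkgo.focus, ginkgo.skip, disable-log-dump) are illustrative assumptions modeled on common e2e test flags, not the repo's exact file.

```shell
# Write a minimal kubetest-style config; each key maps to an e2e test flag.
# Key names are illustrative, not copied from the repo.
cat > /tmp/conformance.yaml <<'EOF'
ginkgo.focus: \[Conformance\]
ginkgo.skip: \[Serial\]
disable-log-dump: true
EOF
echo "wrote $(grep -c ':' /tmp/conformance.yaml) settings"
```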
E: I've been talking for a while now. I can show just kind of running the conformance test, or I can answer some questions; it looks like there's a lot of information in the chat.
B: If any of us have paid any attention at all to Kubernetes land, we often see things about such-and-such test flaking, and of course that kind of stuff can be handled externally. But I'm kind of wondering: in this particular suite, do you have any kind of back-off retry for dealing with something not working, or is that out of scope of what could be handled in this?
E: Yeah, so, for conformance: about a year and a half ago or something, Ginkgo had the ability to do retries, and the Kubernetes community decided to remove all those retries from the Ginkgo conformance test suites, and so we've removed all those now. So there are no retries on, like, a flaky test.

E: And for ours, I don't think we have any retries. So if we went into, like, the Azure tests in here... yeah, there's like...
A: Yeah, so actually we do have some retries for some client-go operations that Jack added recently, for some stuff that we think might flake, for example listing nodes or something like that. We have some Eventually blocks, which basically say: eventually this should be true, this should succeed. So we give it, like...
A: ...maybe two minutes where it keeps retrying, and if after two minutes we still can't get it to work, then we fail. And then, in terms of actually building the clusters for the Azure-specific and CAPI-specific PR tests...
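The Eventually pattern described here, keep retrying until a deadline and only then fail, has a simple shell analogue. This helper is a sketch for illustration, not code from the repo.

```shell
# retry_until CMD...: rerun CMD every second until it succeeds, or until
# RETRY_SECONDS (default 120, matching the ~2 minute budget above) elapses.
retry_until() {
  deadline=$(( $(date +%s) + ${RETRY_SECONDS:-120} ))
  until "$@"; do
    if [ "$(date +%s)" -ge "${deadline}" ]; then
      echo "timed out waiting for: $*" >&2
      return 1
    fi
    sleep 1
  done
}

retry_until true && echo "succeeded"
```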
A: Because of the world we live in with controllers, everything is kind of already a retry, so we're basically not doing a single operation and then seeing if it fails. We're applying the cluster YAML and then waiting a certain amount of time before we call it a failure. So it's more of a timeout than a single operation. So let's say you apply your template and something fails in Azure...
A: Say it tries to create a load balancer and that fails; then, by nature of using a controller, it will requeue, and it will try again next time and try to create it again. So we do have some resilience from that, just not at the end-to-end level.
G: Thank you. I would just say that we don't want any flaky tests, so retrying flaky tests is absolutely the wrong thing to do: identify flaky tests and make them non-flaky.
E: So one thing that I did want to touch on that I think is important: I basically create an environment variable file that sets up the environment variables for whatever test I'm trying to run. One of the really important flags that I use on a regular basis is skip cleanup set to true, so the suite won't clean up, especially when I'm doing development.
E: So this won't clean up the management cluster, and it won't clean up the cluster that I just created. And this way, if I run the test while I'm doing development and something breaks, I can go and investigate why it broke. If I don't set this flag, the test suite will clean everything up and you have to rely on logs, which is helpful, but... so that's an important one.
E: Also, once I've set up skip cleanup, I can also set this other environment flag, skip create management cluster set to true, and then specify the kubeconfig for the management cluster. This allows me to not have to recreate the management cluster every single time, so I can make a tweak to something and then rerun against that cluster multiple times.
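The development loop described here can be sketched as a few environment settings. The variable names follow what was said on the call and should be checked against the e2e code, and the kubeconfig path is hypothetical.

```shell
# Keep clusters around after a failed run, and reuse an existing
# management cluster instead of recreating it each iteration.
# Variable names are as spoken in the walkthrough; verify against the repo.
export SKIP_CLEANUP="true"
export SKIP_CREATE_MGMT_CLUSTER="true"
export KUBECONFIG="${HOME}/.kube/capz-mgmt.kubeconfig"   # hypothetical path
echo "cleanup skipped: ${SKIP_CLEANUP}, reuse mgmt cluster: ${SKIP_CREATE_MGMT_CLUSTER}"
```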
E: I can also, if I'm working on, say, one of these e2e tests, like the Azure load balancer e2e test, and I'm trying to get this test tweaked, or remove a flake or something, one of the other things that I can do here is specify the cluster name and the cluster namespace, and it will reuse the cluster that's already created. And so then I can operate just on that test and not have to recreate everything, which can take a while.
E: And so, if I switch the screen... oh, I guess, right here, it's easier to just talk about it. So there are two tools that I use pretty regularly. One is kubie. If you haven't seen this before, it allows you to target multiple clusters within the same shell, and because we have the workload cluster and the management cluster, it spins off a new shell instance and copies the kubeconfig into that shell instance, so I can have two in the same terminal.
E: I can have two clusters and I can kind of bounce back and forth between them without having to do extra exports or fancy stuff, so it's super useful to target both clusters, maybe even three or four clusters. I use clusterctl to get the kubeconfigs, and I also put down a bunch of these commands and things that I do in this blog post, so you can follow up on those types of things. But yeah, it's super handy.
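A sketch of the two-cluster workflow with clusterctl and kubie follows. The cluster name and namespace are hypothetical, and the real commands are commented out since they need a running management cluster; the exact kubie invocation may also vary by setup.

```shell
# Fetch a workload cluster's kubeconfig and open it in its own shell.
WORKLOAD_CLUSTER="my-workload-cluster"    # hypothetical name
WORKLOAD_NS="default"                     # hypothetical namespace
KCFG="${WORKLOAD_CLUSTER}.kubeconfig"
echo "would write workload kubeconfig to ${KCFG}"
# clusterctl get kubeconfig "${WORKLOAD_CLUSTER}" -n "${WORKLOAD_NS}" > "${KCFG}"
# kubie ctx   # then pick the new context; exact kubie usage may vary
```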
F: Yeah, I think it's very useful. One question from me: you were mentioning the Ginkgo focus stuff, like, you know, how can we skip tests? If I want to run a particular test, how do I know... where do I find those env variables? You were showing it somewhere and I just missed that part. And how do we know what to pass in? I know there is a description for that Describe block in Ginkgo.
E: Good question. So the Azure tests are these, the PR tests here; this is where you would find those strings. So anything that says context, or... you can take any part of that string and pass it to Ginkgo focus, and it will only run the things that match it. It's a regular expression, so I can choose. So what we do in our PR tests is we do it on this "required".
E: I think Jack recently added this, so we just pass "required" to Ginkgo focus and it will select all the tests that have that in their descriptions, and we'll run those. But, so you can see another "required": if we passed required to Ginkgo focus, it would run this one, and the one above it, it would run this one. But if I wanted to run, say, creating an accessible load balancer for Windows, or validating just the network policies...
E: I could take this string, go over to my environment variable, and just pop it in there, and that will select just that test, as long as it's a unique string across the test suite. And you can do things like... it's regular expressions, so you can do all sorts of things to select different types of tests.
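Focusing one spec by its description string could look like this sketch. GINKGO_FOCUS as a variable name and the make invocation are assumptions to verify against the Makefile; the spec string is one of the examples mentioned above.

```shell
# Any unique substring of a Describe/Context/It description works;
# Ginkgo treats the focus value as a regular expression.
export GINKGO_FOCUS="Validating network policies"
echo "focus expression: ${GINKGO_FOCUS}"
# make test-e2e GINKGO_FOCUS="${GINKGO_FOCUS}"   # assumed make invocation
```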
A: Makes sense, thanks. Hey, so this is super useful. I do want to leave a bit of time for the other topics that we had. Do you think we're getting closer? Should we stop here, and maybe do a part two and show Testgrid and artifacts and stuff next time?
A: All right, cool. Matthias, you want to talk about Flatcar templates?
H: Yeah, hi everyone. So yeah, I'm here this week as a replacement for Thilo, who usually participates in these meetings. I have, hopefully, a quick question on the Flatcar templates PR. So yeah, there have been some tweaks I've been doing for the past weeks, and I also talked with Thilo, and he told me that, I think, you agreed that those tests don't necessarily have to pass for this PR to be merged.
H: It's failing right now, because the images required for that are configured only in our private SIG (Shared Image Gallery), and they're not built, I think, on the one which is used by the CI. And yeah, I've been wondering if there is anything pending to get this merged.
A: Yeah, thanks. So I don't remember agreeing that we don't have to have the tests passing for the PR to merge. Usually when we add a new feature, we want passing tests, so we can verify that the feature is working and that it doesn't regress in the future when other PRs merge. That's the whole idea behind adding an end-to-end test for this: so that we can validate it in the future.
A: I understand that the image is kind of a blocking issue here: the reason that it's not passing is that there is no image in the subscription right now. We could go and build an image in a SIG, but my worry with that is that it would be kind of short-term, and that image could get stale; it would never be updated.
A: So if we're merging this, we need to have a good plan for long-term maintenance, so it doesn't just drift, eventually fail, and get removed to unblock other developers. I think we had talked with Thilo about his team potentially publishing an image in the Azure Marketplace for Flatcar, as part of the Flatcar release process, so I don't know where that is at.
A: Cool. Does anyone else have any thoughts, or, you know, questions or opinions on this?
A: Okay, yeah, I don't see any hands, but yeah, my take is that we should definitely not merge a PR with failing tests.
H: Right, yeah, that makes sense. Okay, then we'll focus our efforts on getting them published to the marketplace. Okay, that's awesome.
F: So, I think it's not a flake, but, you know, before I dig down more, I wanted to check with you folks. Looking at the logs, I can't make sense of it, because it fails for, like, three control plane nodes, two Linux and two Windows worker nodes, and one more, and I'm not sure how it's related to that. So if there is any suggestion or heads-up, I just wanted to know; otherwise, I will look into it.
A: Yeah, so actually this is, I guess, a good follow-up for our end-to-end session. But when I see something like this, with like four tests failed, that kind of tells me it's not a flake, because unless you got very unlucky and all of them flaked at the same time, it seems like something's fundamentally wrong and it's not building the cluster.
A: There's no known issue, like "e2e's are broken this morning", as far as I know; I haven't checked other PRs yet. But for your sanity, you could also just retest it once and see if it fails the same way again, or if you get different results. But yeah, I would dig into the logs and see what's going on. Yeah, if you want, we can dive in together, if there are no other topics, for a bit. Up to you. Sure.
A: Okay, I guess let's just first check if no one else has questions before we start getting into the weeds. Do we have any other questions, or any comments, or, you know, anything that anyone wanted to bring up today?
A: Oh, and Cheyenne already found the issue in the background. I guess we can still look at it and, you know, show how you would debug an end-to-end run, for the learning aspect of it.
A: Okay, no other questions, and I don't see any hands raised. So: artifacts is where all the logs are kept after a test run.
A: So if you go (and I'm telling this for the recording, by the way; I know you know this), if you go into clusters, here's the bootstrap cluster, and then these are all the logs for the workload clusters. Right now there's only one, I guess, because all the other ones didn't get far enough to have logs. And then, if you look into controllers, capz-controller-manager, those are the CAPZ logs; it's usually the first place I look, since that's the code we're changing.
A: I guess... oh, it's a nil pointer, so I'll scroll to the bottom. Yeah, so this is probably what's going on here, in the get principal ID call, and then you can check the line and see exactly what the issue is. But it's probably happened several times and been restarted.
A: It actually only happened once and it just gave up. Interesting. But yeah, this is where usually a lot of the answers are: in the CAPZ controller logs, when it's something CAPZ-related in the PR. But if you need to dig into CAPI logs, you can also look at the core CAPI, and then the bootstrap logs and the control plane logs in here.
A: Yes, true, and, you know, I think Prow is having some issues due to GitHub having some issues and degraded service. So if you run into slowness when trying to comment retest or anything like that today, that's probably why.
A: And we're doing pretty well. There are two new bugs that popped up recently, and those are both unassigned right now. So, if you're looking for something to work on, these are good ones.

A: I think they're both pretty straightforward. I didn't mark them as good first issue, because they require understanding, you know, controllers, but they're pretty straightforward; the fix shouldn't be too difficult. And then everything else is, I think, being worked on and on track.
A: All right, cool. And then next week, I guess... I don't think we have any host or walkthrough signed up yet. I don't know if, James, you wanted to do a part two or not; we can also do that another time later. But if anyone wants to sign up for a demo or anything, please reach out and we can make it happen.