From YouTube: CNCF Kubernetes Conformance WG - 2018-05-24
Description
Join us for KubeCon + CloudNativeCon in Barcelona May 20 - 23, Shanghai June 24 - 26, and San Diego November 18 - 21! Learn more at https://kubecon.io. The conference features presentations from developers and end users of Kubernetes, Prometheus, Envoy and all of the other CNCF-hosted projects.
A
We have an agenda which I am projecting; hopefully this is coming across. First up on the agenda, we have an update on the current state, and I think this is a reasonable place to start. As with each of these meetings, there's some information from the chat dropped in the doc with the agenda; feel free to add to the end of the agenda if you didn't make it in. So, first up, Ayesha is going to give an update on the current state of coverage numbers as of today.
B
Yes, I got these numbers from running amici's tool, which basically gets the endpoint coverage. This was run against master: it took the tests from the master branch and ran them. For stable we have about 18 percent coverage, and overall we have got 11. It's not much of a jump from where we were at, understandably so, because we are just wrapping up on coverage in the areas that we discuss further down. Should I go on to the next item, in terms of prioritization?
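[Editor's note: the endpoint-coverage figures quoted here are simply the fraction of tracked API endpoints that the test run actually hit. A minimal sketch of that calculation, with made-up endpoint names rather than the real Kubernetes API surface:]

```python
# Illustrative sketch of endpoint coverage: the percentage of known
# API endpoints that a test run actually exercised.
def endpoint_coverage(all_endpoints, hit_endpoints):
    """Return the percent of all_endpoints present in hit_endpoints."""
    if not all_endpoints:
        return 0.0
    hit = set(hit_endpoints) & set(all_endpoints)
    return 100.0 * len(hit) / len(all_endpoints)

# Toy data: four tracked endpoints, tests hit two of them.
known = ["listPod", "createPod", "watchPod", "deletePod"]
hit = ["listPod", "createPod", "createDeployment"]  # hits outside the tracked set are ignored
print(endpoint_coverage(known, hit))  # 50.0
```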
B
Let me drop a pointer to the coverage tool itself, so anybody who wants to take a look can go take a look. So, as for prioritization: we met with SIG Architecture a couple of weeks back, and there's an update on that coming further down in the agenda, but from the meeting it became clear that we want to focus on components that can be easily swapped out by providers, at least for a first round of conformance coverage. Towards that, I met with SIG Node early this week, and we have a couple of volunteers from there too.
B
They can help us comb through all the APIs that we are tracking, specifically the pod ones, and identify the ones that already have coverage and also the ones to be prioritized for the first round. Once we have that list, I plan to sit with them and define user journeys, which our vendors can then help automate. I opened a tracking issue for that this morning, and I'll keep that issue up to date with progress. Then I plan to do the same thing for API machinery; again, I opened another tracking issue.
B
We have a couple of PRs in flight; again, they are linked further down in the agenda. There are some end-to-end tests being proposed to be added to conformance for watch and the aggregator, so hopefully those tests will go in for 1.11. We will then be evaluating gaps and seeing where we should increase coverage for 1.12. So hopefully, by the next conformance group meeting, I will have identified a couple of areas where we will increase coverage for 1.12, for both Node and API machinery.
A
I expect this one is probably done, and we will come back later in the agenda to the process improvements and the visibility and communication issues that were raised by the steering committee initially. So let's hold that part of the conversation until later in the agenda. But any questions?
C
Okay, I'll just give a very brief update here: we're up to 58 certified vendors, which of course is completely insane and really makes this one of the largest and most successful certification programs I'm aware of. And we did this refactor about a month ago where we divided all of the certified products into being distributions, hosted platforms, or installers, and we're now tracking that number. Other than a few Twitter fights and such, I think most folks have found that distinction useful.
C
And then we are tracking the number of certifications from the different levels. One other piece I'll just remind people of is that there's a small number, four or five implementations, that are only certified for 1.7 and haven't done a newer certification. Those folks now have another month and a half to certify as either 1.9 or 1.10 in order for their 1.7 certification to remain valid, and you know, we're emailing them and reminding them and such. But if they don't, then the 1.7 certification goes away.
C
I mean, we have contact info for all the people as part of their application, so while reaching out to them, it's mainly just: are they going to prioritize it or not? And of course, if they did fall out at 1.7, they're always welcome to come back again; it's just that older version that would lose certification forever. Sergey?
B
I took a look at the features that are going to stable in 1.11; four of them are going, and I was following up with the owners yesterday to see if they have enough representation in the conformance suite. Two of them are the RBAC ones; one was identified as not required to be in conformance, so we have omitted that; and the second one is CoreDNS.
A
Okay, excellent. The CoreDNS one is especially interesting because it's an external repository, so I would encourage folks to take a look at that one and just think critically through that whole dependency. It is a new world that we're moving into, where some of the default components are no longer in the kubernetes repository, and I think we will see more of that as the cloud provider extraction project moves on; many of the external cloud providers are already in that world. So please help think through that and anticipate issues in this whole process.
A
So I wanted to give a quick update on the discussion at SIG Architecture, I think two weeks ago now. There were some questions raised in the steering committee meeting, I believe, and maybe Tim St. Clair can give an update here as well, because I think he was in both of these meetings, but I'm happy to give a quick update here and then Tim can add to it.
A
We would like these to be conformance tests, and I have some proposals for process improvements further down here. But broadly, those vendors were working on first PRs, sort of starter projects, as you would for a new hire at a company, and I think conflating the conformance aspect of a new end-to-end test with the test itself caused some concern and alarm that it was happening outside of the existing SIGs, and so we talked more about including the SIG leadership in those areas.
F
That would be great. I've tried to initiate some conversations on that mailing list, following up on our conversations last time. This morning, after I posted there last night, I put up some information about possibly mirroring the Sonobuoy scanner approach: allowing our community to easily contribute their logs, hopefully in exchange for some instant analysis showing what parts of the APIs they are using.
F
Possibly they'd identify some practices, like automatic RBAC generation, as some other incentives to contribute their data, but the steps are currently a little complex, with having to set up things on the API server. It was really nice to wake up this morning to the KEP proposal for dynamic audit configuration by Mr. Parker. In addition to that, we went ahead, if he can pull the two sunburst charts up side by side for kubetest and Sonobuoy, and we're still looking to inspect and identify why we have differences in the test coverage results for the different tools.
F
These two charts do show an increase from our last results from 824, and we've also created an APISnoop Slack channel if you'd like to engage, and we'd love to help anybody go through the process of submitting their logs. We would also love some feedback and thoughts on how we can drive our KEPs.
F
We're looking at only pods, but I don't know that we have enough data for a prioritized list of which pod APIs people are using. I think an intelligent selection of which KEPs we would focus on to help generate that data would be some great feedback. Rowan, do you have some additional features that we didn't add, some thoughts?
G
So, a thing that could be interesting to explore is the ability to show the differences in coverage between e2e runs. So across, like, you know, you've taken one e2e conformance run, and then for the next version you take the next one, and you can show the differences, what's changed, to be able to focus.
G
Focusing, for instance, on a certain area of the APIs; then showing a timeline graph, maybe, of coverage over time, et cetera; and then lastly, there's also the prioritized list that we talked about some time ago, which would be interesting to explore as well. I think that's all I've got in terms of potential features that are going around in my head right now.
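[Editor's note: the run-to-run diff feature suggested here can be sketched in a few lines. Given the sets of endpoints hit by two conformance runs, report what was gained and what was lost; the run and endpoint names below are illustrative:]

```python
# Sketch of diffing endpoint coverage between two e2e conformance runs.
def coverage_diff(old_run, new_run):
    """Return (gained, lost) endpoint lists between two runs."""
    old, new = set(old_run), set(new_run)
    return sorted(new - old), sorted(old - new)

run_1_10 = {"listPod", "createPod", "watchPod"}
run_1_11 = {"listPod", "createPod", "createService"}
gained, lost = coverage_diff(run_1_10, run_1_11)
print("gained:", gained)  # gained: ['createService']
print("lost:", lost)      # lost: ['watchPod']
```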
F
Thanks for that, Rowan. I really encourage some active feedback on the thread on the mailing list, and please feel free to reach out to us directly on the APISnoop Slack channel, so that we can engage the community more. And on the idea of using the dynamic configuration versus what exists today: I'm looking forward to using audit, and I'd love some feedback on that as well.
G
So I can explain a little bit. For those, we used hack/e2e to bring up a cluster on GCE, and then for the kubetest e2e we basically just ran hack/e2e.go with the test flag, and then for Sonobuoy we basically brought up a cluster using hack/e2e and then deployed Sonobuoy straight onto that with kubectl.
D
The
one
thing
that
comes
to
mind,
but
there
is
a
parameter-
that's
specified
to
the
test,
which
is
the
provider
and
not
all
tests
will
run
default.
Ly.
I.
Think
Scoob
test
makes
a
lot
of
suppositions
about
the
provider
default
in
the
GCP,
where
suitably
specifically
does
not
so
the
that
that
provider
flag
actually
turns
enables
or
disables
extra
tests.
So
are
you
seeing
less
number
of
tests
being
run
on
Sanibel
a
versus
coop
test
slightly.
A
I especially appreciate the way that you guys are using the appropriate mechanisms for gathering this information, instead of building in some extra layers and things that have to be maintained alongside: using audit logs to understand what is being exercised and how. And I'm particularly interested in figuring out how to get more information about the particular verbs and payloads, and the range of options in some of those endpoints that are more frequently used than some other, more obscure endpoints. I think that will really help in the prioritization effort.
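[Editor's note: as a rough illustration of the audit-log approach discussed here, a script along these lines can tally which verbs and request URIs a test run exercised. It assumes audit events in JSON-lines form carrying the standard `verb` and `requestURI` fields; the sample events are toy data:]

```python
import json
from collections import Counter

def tally_audit_log(lines):
    """Count (verb, requestURI) pairs seen in a JSON-lines audit log."""
    counts = Counter()
    for line in lines:
        event = json.loads(line)
        counts[(event.get("verb"), event.get("requestURI"))] += 1
    return counts

# Toy audit events (real audit logs carry many more fields).
log = [
    '{"verb": "list", "requestURI": "/api/v1/pods"}',
    '{"verb": "list", "requestURI": "/api/v1/pods"}',
    '{"verb": "create", "requestURI": "/api/v1/namespaces/default/pods"}',
]
for (verb, uri), n in tally_audit_log(log).most_common():
    print(n, verb, uri)
```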
A
One is to promote the aggregator e2e test to conformance, and the PR is linked here; please take a look and give feedback if you have concerns. I expect, as we have more energy going into prioritization, we will see more of this, and I just want to make sure we have a mechanism for communicating those. The other is adding a watch e2e test to the conformance tests, and one thing I noticed was that there isn't a consistent format for the labels applied to these.
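[Editor's note on labels: Kubernetes e2e tests embed bracketed tags in their names, with conformance tests marked by a `[Conformance]` tag. A quick check for whether a name carries that tag might look like this; the sample test names are made up:]

```python
import re

# e2e test names embed bracketed tags; conformance tests should
# carry the [Conformance] tag somewhere in the name.
TAG = re.compile(r"\[([^\]]+)\]")

def tags_of(test_name):
    """Return the bracketed tags found in an e2e test name."""
    return TAG.findall(test_name)

names = [
    "[sig-api-machinery] Watch should observe add events [Conformance]",
    "[sig-network] Services should serve a basic endpoint",  # not conformance
]
for name in names:
    status = "conformance" if "Conformance" in tags_of(name) else "not conformance"
    print(status, "-", name)
```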
A
Okay, so moving in this direction, there were a couple of suggested improvements around automation supporting this process. But please, if you see folks adding an e2e test and immediately proposing that it be a conformance test, try to reinforce that these are two distinct steps; that will help to avoid confusion and concern, and tests getting lost in the fray.
B
That's one part of the proposal; that way we wouldn't be accumulating debt when it comes to conformance coverage. The second initiative is, at the beginning of every release, to come up with a target set of APIs or SIGs that we focus on to increase conformance coverage. For 1.12 we are thinking, as we spoke about it, that it will be mostly Node and API machinery that we will focus on to increase coverage.
B
So that way, every release we have a few tests going in to grow conformance coverage. And the third one, which came out of another discussion we had with SIG Testing: we do have a conformance dashboard, but currently this dashboard is not being looked at by the CI signal lead, because it's not part of the sig-release dashboard; it exists on its own. So sometimes we do miss signals when a conformance test fails. We just want to make sure that this dashboard is also looked at as part of release-blocking.
B
So one proposal that we have is to move this conformance dashboard into the sig-release dashboards as well, so that it gets looked at on an ongoing basis. This way, for any new conformance test that gets in, we can make sure that it's not flaking, and it can also help us gather flakiness numbers on an ongoing basis. So those are the reasons.
D
Do you know whether or not it makes sense to at least have, because right now the default provider for everything is pretty much GCP, at least one other provider, one of the common providers, to make sure that that signal is consistent? Because even if we hoped it was always the case that those details would be abstracted away, history has taught me that they're not.
A
We have talked with AWS as well, and they intend to run the conformance test suite at least on every submit and submit those test results back. This dashboard came out of the cloud provider extraction, where this becomes increasingly important, to ensure that making a change and testing it only on one provider doesn't inadvertently break others. Our hope is that this is a compelling benefit and folks will opt in and submit those test results back.
A
If you are a representative of a provider or distribution and would like to participate in that, feel free to reach out, here or on the conformance or cloud provider working group mailing lists, and we can set you up with the right instructions to run it. But I fully agree: we should get as broad participation as we possibly can.
A
Looks like we're on to special topics. Oh, this reminds me: there was also a request in the SIG Architecture meeting to convert the document about conformance testing for Kubernetes to markdown and submit it to GitHub, to ensure discoverability and visibility, and Mather has signed up for that action item. So that was our first special topic. I don't think there's a whole lot more to say there, so we'll move on to the next one, which is just planning going forward.
A
So far we've been somewhat ad hoc in the planning for each release, with just best effort for individual SIGs to submit tests that they propose become conformance tests. We're trying to push that earlier in the Kubernetes process we already have: trying to ensure that by feature freeze we have at least articulated those goals. I don't know if that's a hard requirement yet or if that's just guidance; I'm just spitballing here, honestly, but trying to push earlier in the process has always seemed to help.
A
So that is the intention, and the call to action here is this: the whole effort is really about ensuring the portability of workloads for end users across providers of Kubernetes. If you are a representative of a SIG, please go back to your SIG; the best efforts in this whole area will start from within SIGs that already know what e2e tests they have that ought to be part of conformance, or that they wish were but are too flaky.
C
I appreciate you bringing it up, because the deadline for that is six weeks from now, with the CFP closing; the deep dive and intro are on the same schedule. I guess my impression is that both the intro and deep dive meetings in Copenhagen were quite well attended. I would love to do something similar to this, at least like in Copenhagen; I mean, in Shanghai. And so, I don't know.
C
Yeah, so I would encourage you to submit a talk as well on related topics, perhaps, and then you'd be in a good position to help manage an intro or deep dive there, one or the other. And then for Seattle, I presume we're going to want to do that again. We have the ability to do workshops; we haven't really done them in the past.
C
I'm open to it; maybe we should take it to the list and brainstorm on it. I mean, what's strange is that I sort of expected that we would have half the vendors on board with the program, and then we would do a workshop for the other half to convince them that they should really participate; but we basically have everyone now. So the question is: what do we want folks to do additionally, or differently, or more?