From YouTube: Kubernetes SIG Multicluster 29 Nov 2022
Good, well then, I'll steal the thunder of pmorie's usual spiel, which is: hello and welcome to SIG Multicluster on Tuesday, November 29th. Laura, you have the floor.
Okay, now it's my turn, which is: I'm going to talk a little bit about some efforts to improve the SIG Multicluster contributor experience.
B
So I mentioned this in Slack some time ago; I forget exactly when, but I guess it was a little bit after our last meeting. There's this doc here, the SIG-MC website proposal, which I have pulled up over here. It basically consolidates the ideas we talked about at KubeCon North America and then in the last SIG-MC meeting: giving more discoverability to some of SIG Multicluster's projects, and overall making it easier for people on the outside to know what is going on and how to contribute.
B
So fundamentally, this is all based on Gateway API's efforts in the same vein. I'm just going to click through to here to remind everybody that Gateway API has this website that introduces the Gateway API and has some documentation. It has lots of updates about when they meet, how they meet, and what the process is to update the Gateway API.
B
So for this proposal here, there are more details inside, but the TL;DR is to take a lot of inspiration from that, certainly in terms of development and logistics, but to actually unify all information on SIG Multicluster subprojects. So instead of being just about one of the APIs, or having each of the APIs host their own version of this site, at least the high-level documentation is all unified together.
B
This doesn't necessarily replace actual individual API documentation, certainly not API reference documentation; it's more that high-level thing that people can first get a hold of. And we have talked with the Gateway API maintainers to learn more about how they set theirs up.
B
So logistically, there's a skeleton proposal in a personal repo, and an example deployment there as well; I'm going to show you some changes that Nicholas has made in a few minutes. Fundamentally it's based on the framework that the Gateway API site uses. And these are the docs that I'm thinking need to be here: the big overview of the SIG, kind of like what's in the charter, but also what's in some of the KubeCon slides and discussions about the roadmap and SIG-MC; again, that high-level documentation on subprojects with really basic use cases; some aggregated status information; and links out to implementations of subprojects.
B
So this could be a consolidated place where the different MCS API implementations or different Work API implementations can be referred to, so people know where they are and what stage they're in; some contributing guidelines; some of the SIG's norms generally, like what's the usual way to add stuff to the agenda, or what we think is a valid project for us to work on; and potentially aggregating some API documentation from the subprojects.
B
So that's the overall idea, and again, this link here goes to a really bare-bones site, straight up just what the first "make docs" run basically produces, but Nicholas has been working on more content. He has a PR out for that. So this is an initial stab at introducing more details about all of our concepts and guides, places where we can put individual implementations, and more contributing info. So Nicholas, I don't want to steal your thunder, because you did all this; is that...
E
Yeah, so there's still a bit of work in there because, as Laura mentioned, I took a lot of examples from the Gateway API docs. The contribution stuff is really a rip-and-replace: basically modifying a few things to match the SIG Multicluster stuff, with very few things left to fix, like getting a proper calendar and things like that. And what I did, actually, is...
E
You know, as I'm new on the project, I actually put a bit of time into listening to the old videos and getting some ideas and stuff like that. So there's obviously the ClusterSet, ServiceExport, and ServiceImport stuff; there's also this notion of the About API, and the Multi-Cluster Services API, and the Work API.
E
Somehow, you know, I've not settled yet on a model, but I tried basically to gather as much information as possible. It's not finished yet, and there's still the implementation stuff, but it's basically a work in progress, and this is just the latest and greatest that I put together for today.
B
Nice. And I see Mike is on the call; we definitely need some Work API, what's the word, specialists, domain experts. So.
B
Yeah, let me also maybe speak a little bit to the order of operations here, on where this repo will live. So right now, I just made this proposal one so that we could see how it was made, and Nicholas made a fork to work on his stuff too. But I did open a request to make an official repo in kubernetes-sigs for the site, so I think...
B
Ultimately,
at
some
point
it
will
be
if
everybody
is
cool
with
this,
and
this
gets
is
fine,
and
unless
anybody
has
any
complaints
about
that
process,
then
we
would
migrate
whatever
the
final
repo
is
over
there
just
FYI
that
it
could
move.
B
So
yeah
back,
okay,
great
cool,
well
I
will
mention
there
were
some
Alternatives
considered
down
here
with
pros
and
cons
of
not
doing
it
this
way.
B
So
if
you
want
to
take
a
look
at
that
those
and
be
like
no,
this
other
one
is
a
better
idea,
like
I
think
there's
some
variations
in
whether
it's
best
to
have
like
one
site
for
everything
in
sigmc
for
sub
projects
to
have
their
own
like
Gateway
API
project,
fundamentally,
is
if
all
of
this
should
just
be
in
like
the
KK
docs.
If
all
this
should
be
in
the
kubernetes.dev
docs,
you
know
questions
like
that.
B
Yeah, I think that was all of my agenda item. Yes, but Stephen wants to talk about conformance tests.
G
Thank you. So I'll take over the screen share, maybe. Go.
G
Right, so following the last call, I took a look at the end-to-end tests in the MCS API repo, with two goals in mind. One was to see what the conformance tests look like, and the other was to see whether Submariner passes them. This raised a few things, so I opened an issue and some pull requests, and I thought it would be worth bringing them up on the call.
G
So the first one surprised me a little bit, because this is the first hurdle that Submariner ran into: the current test is entirely based around ServiceImports and EndpointSlices, and it doesn't create a ServiceExport. Whereas Submariner takes the view, or interprets the MCS API as saying, that the ServiceExport is where everything starts.
G
So I opened this issue for clarification on that point: it doesn't make sense to have a ServiceImport without a corresponding ServiceExport first.
B
Yeah, so I only responded in Slack, but I feel that these should start from the ServiceExport. And I don't know, Jeremy and Paul, if you want to talk about any of the legacy of these end-to-end tests, whether they were explicit on this point, but that's how I interpret the KEP and everything too, the same as Stephen. Yeah.
D
No, they should definitely start with the ServiceExport. The tests that were there originally were not meant to be conformance tests. There was never an implementation of MCS, right; there was a hacky tool that would make it easy to implement in a way that kube-proxy could understand, by auto-creating services to align with any ServiceImports and EndpointSlices.
D
You
created
more
as
a
convenience,
so
that,
if
you
ran
that
tool,
you
could
conceivably
create
a
bash
MCS
controller,
but
this
was
never
really
meant
to
be
like
a
full
implementation.
D
I
think
yeah,
I
100
agree
for
conformance
tests.
We
should
start
at
the
service,
export
I
think
it's
probably
worth
breaking
down.
You
know
if
a
each
each
step
along
the
way,
but
I
don't
think
we
should
be
creating
service
Imports.
That
I
mean
that's
the
implementation's
job
right,
yeah.
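To make the "export first" flow concrete, here is a minimal sketch (purely illustrative, not the actual conformance code; the resource shapes follow the MCS API proposal, but the helper functions are hypothetical). A conformance test would create only the ServiceExport; the ServiceImport below is what the implementation under test is expected to derive:

```python
def make_service_export(name: str, namespace: str) -> dict:
    """Minimal ServiceExport manifest a conformance test would create."""
    return {
        "apiVersion": "multicluster.x-k8s.io/v1alpha1",
        "kind": "ServiceExport",
        "metadata": {"name": name, "namespace": namespace},
    }

def expected_service_import(export: dict) -> dict:
    """ServiceImport the implementation (not the test) is expected to
    derive in importing clusters, with the same name and namespace,
    following the namespace-sameness idea."""
    meta = export["metadata"]
    return {
        "apiVersion": "multicluster.x-k8s.io/v1alpha1",
        "kind": "ServiceImport",
        "metadata": {"name": meta["name"], "namespace": meta["namespace"]},
        # A ClusterSetIP service; headless services use "Headless" instead.
        "spec": {"type": "ClusterSetIP"},
    }

export = make_service_export("hello", "mcs-e2e")
imp = expected_service_import(export)
print(imp["metadata"])
```

The point of the sketch is only the direction of the flow: the test supplies the export and then waits for the matching import, rather than creating the import itself.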
G
Yeah, well, so that was my next point, which is that Laura opened this issue a few months ago to turn the end-to-end tests into an actual conformance test, and so we need to figure out what should be covered. So yeah, maybe let's take a look at what's currently in the tests.
G
Yeah, exactly, yeah. So that's one of the things; maybe we can discuss it now, or perhaps we need to leave time for people to think about it. Anyway, what's in the test currently is this: there's a hello service, the hello ServiceImport deployment that goes with it, and a request pod that's used to run the test. The connectivity test does a bunch of setup; there's a pile of things in the BeforeEach.
G
It
creates
a
namespace
on
both
clusters.
So
then
we
get
the
namespace
same
as
idea
creates
the
pods
the
deployment
service
and
there
it
creates
the
service
import
and
then
it
expects.
G
Around line 190, yeah... still looking for the pods, and then this is the interesting part, what it expects. So it's fairly basic. Well, like you said, Jeremy, it's not really intended as conformance; when I was reading it, I got the impression it was more intended as a tool to help development of the spec.
D
Yeah, that's correct. So I think we probably need a separate test suite to create ServiceExports and then validate that eventually the correct imports and EndpointSlices are there, and that you can actually connect to the pods on the other end. I think that's part of a conformant implementation; obviously, if you create the endpoints but they don't work, that's not helpful. But I would definitely look at it from the end-user perspective, and I would expect the tests to basically test the behaviors that you, as someone just consuming an implementation, would see: you create an export, and then, sometime later, your pods can talk to each other.
A
Yeah, I agree with what was said. I had two things. One is that I wanted to make sure I explicitly said I like the idea of building out a new conformance suite, instead of mutating the existing tests, because they kind of have different goals: conformance versus the older ones. And the other thing that we haven't talked about is that these tests are the simplest working e2e tests that you can make, right. You have to have...
A
You
have
to
have
at
least
two
clusters,
and
these
tests,
if
I'm
not
mistaken,
are
are,
are
testing
two
cluster
scenarios
once
we
feel
good
about
these,
what
I
think
would
be
great
is
a
test
that
you
can
parameterize
with
the
number
of
clusters.
A
So that you can say: test this across... yep, exactly. And you can test the primitives in scenarios where, you know, there is no reason to think that if it works for two it'll work for three, and if it works for three it'll work for five. So being able to dial up the number of clusters would be very useful. I don't think we need to dial it to a thousand, but having a parameter, I think, would be very helpful to uncover bugs that only surface at higher numbers of clusters. Yeah.
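The parameterization being proposed could be sketched like this (a hypothetical harness, not code from the repo): instead of hard-coding two clusters, generate the cross-cluster scenarios from a cluster-count parameter, so the same suite scales from one cluster upward:

```python
import itertools

def make_scenarios(num_clusters: int) -> list:
    """For N clusters, export a service from each cluster and expect it
    to be importable from every other cluster, giving N * (N - 1)
    directed (exporter, importer) pairs to verify."""
    clusters = [f"cluster-{i}" for i in range(num_clusters)]
    return [
        {"exporter": a, "importer": b}
        for a, b in itertools.permutations(clusters, 2)
    ]

# Dialing the parameter up grows the cross-cluster checks quadratically,
# so a modest configurable number (not a thousand) should be enough to
# surface bugs that only appear with more clusters. N=1 yields zero
# cross-cluster pairs, which is the single-cluster case.
for n in (1, 2, 3, 5):
    print(n, len(make_scenarios(n)))
```

The design choice here is just that the scenario list, not the test body, encodes the cluster count, so the same assertions run unchanged at any N.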
D
Yeah, and then the other thing that we should test too, in the opposite direction, is the single-cluster case, because one of the neat things about MCS, I think, is that you can have a single-cluster deployment that uses MCS and then easily just add clusters, right. So that's a way to be kind of future-proof from the start. That would be a good thing to include here.
B
Yes, I just want to double-check on something. Part of the reason to work on these end-to-end tests, in the shape they are in and with the definitions that were in the KEP, was that they're a beta blocker, and you know, if I let this dangle one more year, then I'll come back as a ghost and haunt SIG-MC for the rest of my eternity. But anyway, I just want to confirm the order of operations here. I do agree that the conformance suite may be different.
B
There
was
a
time
where
we
were
talking
about
them
like
conveniently
being
the
same,
but
either
way,
I
think
unless
there's
some
other
change
to
the
graduation
criteria,
updating
the
current
end-to-end
test
so
that
they
meet
the
kep
obligations.
Is
it
correct?
That's
still
like
the
first
priority.
B
Yeah, so that is kind of the confusing point, and I think maybe the idea was that they would start out on this demo implementation and then graduate someday, or something. But if we think that the infrastructure is just not shareable between these two things, then I do think we have the opportunity to decide what those graduation requirements should really mean.
D
So yeah, the implementation that was in there was really just: can these resources actually describe an MCS? And the tests were there just to make sure that it stayed working. It was just a helper; it was never really an implementation. I'm not sure how much value that baseline implementation I threw up a while ago really has to the KEP, now that there are real implementations. And I'm not sure it's a start-over, but I think taking those tests, basically copying them and ripping out the body so that they're testing more of what we want, is probably more useful at this point than trying to fix that implementation to be something it was never meant to be. That's my gut, but I'm kind of happy to proceed however. I think the goal (and anyone please chime in if your understanding is different), I think the goal of the e2e tests that we wanted in the KEP, was to basically test that an implementation actually implemented the spec.
A
That is my understanding too. And another little view that I have on this is that the tests should make sense if you've only read the spec. Yes, so...
D
At all, and hopefully that makes them relatively light, because we've tried to keep the KEP pretty light. I think there are certainly some DNS tests, but again, the test would just be that a pod can see the record that's supposed to be there, for example.
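For the DNS case, the record shape comes from the MCS DNS specification: exported ClusterSetIP services resolve under the clusterset.local zone, and exported headless services additionally get per-pod records qualified by cluster name. A minimal sketch of the name construction (the helper functions are illustrative, not from the repo):

```python
def clusterset_dns_name(service: str, namespace: str) -> str:
    """DNS name an exported ClusterSetIP service is expected to resolve
    under, per the MCS DNS specification:
    <service>.<ns>.svc.clusterset.local."""
    return f"{service}.{namespace}.svc.clusterset.local"

def headless_pod_dns_name(hostname: str, cluster: str,
                          service: str, namespace: str) -> str:
    """Per-pod record for an exported headless service:
    <hostname>.<clustername>.<service>.<ns>.svc.clusterset.local."""
    return f"{hostname}.{cluster}.{clusterset_dns_name(service, namespace)}"

print(clusterset_dns_name("hello", "mcs-e2e"))
# hello.mcs-e2e.svc.clusterset.local
```

A lightweight DNS conformance test in the spirit described would then just assert that a pod can resolve these names once the matching export exists.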
B
So I think, the end-to-end tests that are there today, it's not like we have to throw them out. I think the actual specified steps are still relevant; the language of the KEP about what implementations should be able to do, in terms of connectivity and expected endpoints responding, blah blah blah, that type of thing is still valid.
B
It
seems
to
I
think
it
seems
to
me
that
all
this
like
setup,
that
most
egregiously
has
this
like.
Does
the
service
Imports
start
part
4
and
implementation
we
don't
like,
and
we
would
rather
start
earlier
in
the
process
and
then
let
the
implementation
prove
it's
it's
worse.
B
I
guess,
I
think
one
thing
that
all
of
this
converges
to
to
me
is
that
we
should
be
continuing
to
develop
these
end-to-end
tests,
but
against
live
implementations
and
throw
out
anything
that,
like
don't
like,
keep
making
filling
in
the
holes
for
the
demo
implementation.
Yes,
and
so,
is
it
also
so
two
things
about
that
one?
Can
we
like
should
can?
Should
we
straight
out
throw
out
the
demo
implementation
like?
Has
it
served
its
purpose
and
is
it
like
noise?
B
Even
just
like
in
the
repo
and
then
also
are
we
okay,
therefore,
that
like
development
on
these
end
tests,
like
basically
requires
you
to
be
leveraging
some
specific
implementation
to
do
so.
D
I
think
this
is
a
really
good
question,
so
first
of
all,
I
have
no
qualms
of
throwing
out
the
demo
implementation.
If
it's
not
used.
Okay,
yeah
I
do
want
to
raise
the
question
of,
should
we
have
some
horrible
Baseline
implementation
that
honors
the
spec
that's
like
written
in
bash.
That
is,
you,
know,
human
readable,
but
horribly
inefficient.
B
Yeah,
like
I'll,
speak
just
a
little
bit
for
Nick,
who
is
working
on
the
end-to-end
test
one,
but
he
was
using
the
kind
clusters
which
only
like,
which
was
sort
of
like
well
set
up
for
using
this
little
demo,
implementation,
which
was
really
fast,
which
she
led
to
because
they
weren't
super.
The
the
tests
were
too
prescriptive
compared
to
the
gke
implementation,
which
is
what
he
had
easiest
access
to
run
them
on.
So
maybe
that's
fine,
but
just
like
as
a
example
as
a
test
developer,
not
having
any
true
perfect
example.
D
Right, or, I'll say, the other thing that could teach us is: if all the implementations have not implemented something in the spec, maybe we should revisit whether that thing actually belongs in the spec. Yeah, I think the two outcomes are: (a) implementations are lacking, and this is important because it will tell the implementation authors that it's time to add that feature; or (b)...
G
But also, yeah. So, on the topic of improving the existing e2e tests to be more useful, I've pushed a few PRs. There's one that's a bunch of dependency upgrades, and that leads to this one, which adds support for discovery v1 instead of v1beta1 (or alongside it, in a fairly ugly fashion). And something else worth bearing in mind, I think, is that the existing end-to-end tests are fairly simple.
G
But yeah, so there's this one where I add support for EndpointSlices, and I think I removed a few things as well that weren't all that useful. So there were, for example, queries of the same pods several times, queries of the EndpointSlices several times, things like that. And so this at least allows the tests to run against clusters that no longer have...
G
Discovery
V1
beta1
and
then
there's
this
one
where
I
add
creating
a
service
exports,
and
this
allows
Submariner
to
pass
the
set
phase
of
the
test,
and
then
it
fails
the
test
itself,
because
both
tests
rely
on
having
a
single
cluster
set
IP,
which
summary
doesn't
implement
and
then
so
the
next
phase
and
well
I'd
added
to
the
agenda
was
one
I
was
wondering
whether
any
of
the
MCS
API
implementation
projects
tests
could
be
useful,
so
the
ones
I'm
familiar
with
obviously
are
submariners
and
we've
got
a
bunch
of
well
what
we
call
Discovery,
which
is
really
the
part
that
implements
MCs
more
around
the
DNS
side
of
things,
perhaps
but
I
think
some
of
them
at
least
are
not
submarine
or
specific
and
could
be
reused
as
a
sort
of
conformance
test.
G
So,
for
example,
this
series
here
sets
up
a
headless
service,
that's
exported
and
then
checks
various
things
about
it
like
if
they
can
find
the
Pod
IPS.
G
And
so
on
and
so
forth,
this
one
is:
if
you
set
up
a
remote
service,
it
can
be
seen
from
the
from
other
clusters.
G
You
can
resolve.
Well,
it
resolves
the
local
service
preferentially.
So
that's
some
really
specific.
The
order
doesn't
matter
if
there
are
no
active
pods.
You
can't
resolve
the
service
anymore,
Etc.
D
I
think
yeah,
as
long
as
the
licenses
are
compatible
bring
them
over.
It
seems
like
there's
a
lot
of.
D
Exactly. So I think, given that those tests were basically testing the conformance of Submariner, bring them over, and hopefully they just match the spec; and if, when we go through it, they don't, that's a great signal as well. So yeah, if it's already written, that's great.
A
Okay, I'm pro bringing them over. I think the thing to keep an eye on is: let's make sure that the tests that are in this repo stay true to the spec. I'm not worried about them testing the spec incorrectly; I'm worried about tests that seem useful, but aren't covered by the spec, creeping in. Yeah.
A
Bingo. And I think, you know, Stephen, when you were saying that a particular test failed for Submariner due to the single clusterset IP: it made me feel like maybe that test should pass, and maybe a distinct test for the single clusterset IP should be the one failing. Yeah.
G
Yeah, yeah: we need to split the tests up so that they're much more granular. Things like: if I create a ServiceExport, then I see the matching ServiceImport. If I query a service that's been exported, I can connect to it, and I don't care what the IP address looks like at that point. And then there's another test that says: I always get the same IP address across all the clusters. And so on. Yeah.
B
Yeah, and I'll throw in the chat these tests that I'm referring to, test one to test three; they are from this section in the KEP that I just put in the Zoom chat. They are certainly more broad than something like stopping before checking whether there's a clusterset IP; it's not that detailed.
B
Got it, great. Based on... oh my god, what have I done.
D
Yeah, I think it would be really useful to just have a doc where we basically list out, in point form: here are the simple tests we want, and here's the corresponding section of the spec. Then that section of the spec can go in the test name, so that failures report it, and we can implement those tests. And I think, you know, keeping it simple, like the test that checks connectivity.
D
I don't know that we need a huge amount of detail, but just kind of that list, so we can make sure it's comprehensive, and then we can go write them. I expect that the setup part is going to be in line with a lot of stuff we've already written, so hopefully it's not a lot of extra work, and then it's really just going to be all the cases, you know: checking to make sure the records are correct, and that the EndpointSlices exist with the right IPs.
A
All right, well, thanks everybody, and Stephen and Laura...
A
Thank you both for maintaining the attention on the conformance tests. This was a quality conversation that we had today.