From YouTube: Kubernetes SIG Multicluster 15 Nov 2022
C: We both have the same name, interesting.

C: Cool, all right. So, three after, welcome, everyone, to the, what day is it, Tuesday, November 15th SIG Multicluster meeting. Laura, you have the agenda, so I will let you take it away. Thanks.
A: Okay, so I wanted to spend a little bit of time today talking about the MCS end-to-end tests, both in terms of the framework and some of the status and progress on it. First off, I'd like to remind everybody that the MCS end-to-end tests, quote-unquote "existing", and in particular some tests that don't exist yet, are one of the beta blockers for the MCS API.
A: So that's why we're working on it; that's the first point. Another point, which I'll bring up again later today when showcasing the demo, is that we would also like these to operate as conformance tests, so that implementations can check programmatically that they conform to our standards, instead of cross-referencing ten thousand KEPs. So hopefully that also improves the life and lifestyle of MCS implementations around the globe.
A: I'm going to shout out to him several times, but Nick Eberts, on the call, is contributing to the end-to-end tests, and I'm actually running on a branch of his right now for this demo. He is both working on the tests that we're missing and dealing with the fallout of these tests not being updated through many changes to Kubernetes since they were last touched. So thank you for helping identify those.
A: There are some flags in kubectl that have been deprecated, which means the Bash scripts need to be updated, and other things, especially dependent API versions, that we need to update here. So I'm going to share my screen. Here we go.

A: Okay, so fundamentally, what I want to get across the most (let me make these windows more beautiful, right now) is that if you want to run the end-to-end tests, there are kind of two variations here that I'm going to show you. One is to run them against kind clusters that are running the sort of demo implementation of the MCS controller that's in the mcs-api repo, and then there's the idea of running them against your implementation, or an implementation of interest.
A
So
for
the
first
point
there
are
two
relevant
shell
scripts.
I
want
to
show
you
in
here.
So
one
is
this
one
e
to
e
test,
which
itself
will
call
this
other
script
called
up,
which
I'll
show
you
in
a
second
which
fundamentally
creates
two
kind
cluster
clusters
that
will
be
installed
with
the
demo
implementation
of
MCS.
That's
in
this
repo
and
then
go
test
the
e2e
test
package.
A: So over here in up there's some stuff; here I've commented this out, but under normal circumstances, here we go, it will create the clusters, do some other setup to your local environment, connect the two clusters together so that they have a flat network, set up some RBAC and the service account for the MCS API controller, and actually deploy the MCS API controller in both clusters.
A: So that's what this up.sh is doing. For people who have gone into the mcs-api repo before, you'll recognize this, and also recognize it as part of the demo scripts as well. So this whole thing that sets up your kind clusters for you with the demo implementation of MCS is itself called in this e2e-test.sh, which you could just run all together.
A
If
you
have
no
kind
of
clusters
and
you're
ready
to
go,
I
already
have
my
kind
clusters
set
up
in
here
and
I
have
two
Cube
configs
that
are
referencing
them.
My
C1
Cube
config
and
my
C2
Cube
config
and
I'm
setting
these
as
these
variables
Cube
config
one
and
cubeconfig
2,
because
these
will
be
used
by
the
each
e-test
package
to
determine
who's
cluster
one
and
who's
cluster.
Two.
A: So let me pop over there really quick, just to show you that in this e2e test package there are these two files. e2e_suite_test.go is kind of where the setup happens; it's the entry point for the package, and here you can see that it's going to grab those KUBECONFIG1 and KUBECONFIG2 environment variables, and that's how it's going to decide whose cluster we're working on. So I'm going to go ahead and run it.
A: So, for example, the end-to-end tests are working against the v1beta1 API for EndpointSlice, but that's an example of something we need to update in the tests, because past 1.24 it's v1. But yeah, so this will run for a little while here, and then we'll see our classic go test output.
A: So all of this is what I've been mentioning: you could either start from up.sh or from the whole wrapper, e2e-test.sh, which both provisions your kind clusters and runs the end-to-end tests, or you can set your kubeconfigs if you already have your kind clusters set up.
A
If
you
have
some
other
clusters
that
aren't
kind
clusters
that
you
would
like
to
run
these
end-to-end
tests
against
same
idea,
you
can
use
these
Cube
config
environment
variables
to
configure
how
what
clusters
your
antenna
tests
are
running
against
so
I'm
going
to
come
over
here
and
show
you
the
thrill
of
how
non-compliant
gke
MCS
is
right
now,
because
I
have
some
HK
clusters
already
running
that
have
MCS
configured
and
I.
Have
them
in
these
two
Cube
configs,
but
you'll
see
if
I
run
the
end-to-end
tests
against
here.
Eventually.
A: There we go. Eventually it will get all mad, because it's saying, "I don't know where serviceimports.multicluster.x-k8s.io is," and that's because GKE is running a mirror of the ServiceImport and ServiceExport CRDs in a different API group. So already GKE is showcasing its non-compliance here against what we want to use as our conformance tests. But this is more to show you that you can have any other arbitrary kubeconfig environment variables set and run these tests against them.
A: If any MCS controller implementation is configured against those clusters, then the ideal situation is that these tests all pass, showcasing that the implementation you're testing against is conformant. Great, so that is kind of the general gist. I wanted to get across a little bit more, maybe a really brief dive again into these two files.
A: I already mentioned it a little bit, but inside this e2e test package we have this file, e2e_suite_test.go, that basically parses some flags and environment variables and then calls the tests that are over in this connectivity_test.go file. So here's where (again, I'm on Nick's branch here) it's recently been updated with more requirements that were missing for test one.
A: There's a lot of manifest-looking stuff here at the beginning, but fundamentally the way the test works is that there's a service deployed in one cluster, backed by this hello deployment over here, and then, scrolling down some more, in the other cluster we later deploy this request pod, which is going to try to request from that deployment in the first cluster. Both on master and on this branch right now, they're configured so the hello deployment in the first cluster outputs some information and the request pod parses that response to make sure it's what we expect. In the master branch right now, what that means is the hello deployment literally outputs a string that says "hello", and the request pod makes sure the response says "hello".
A: The improvement Nick has on this branch is that the hello deployment, a.k.a. the thing in the exporting cluster, will report its pod IP, the requester side will receive that pod IP, and then the test will check that only the pod IPs we expect, the expected endpoints, were the ones that responded. So that's one of the specific updates in this branch that are part of making our MCS API e2e tests compliant with how the KEP is defined.
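A: The shape of that endpoint check can be illustrated with a small, dependency-free sketch. The response format ("hello from <pod IP>") and the function name here are assumptions for illustration, not the actual test code on the branch.

```go
package main

import (
	"fmt"
	"strings"
)

// respondersMatch reports whether the set of pod IPs seen in the
// responses is exactly the set of expected endpoint IPs, i.e. only
// the exported pods answered on the clusterset IP.
func respondersMatch(responses, expected []string) bool {
	seen := make(map[string]bool)
	for _, r := range responses {
		// Assume each response line ends with the serving pod's IP,
		// e.g. "hello from 10.244.1.5".
		fields := strings.Fields(r)
		if len(fields) > 0 {
			seen[fields[len(fields)-1]] = true
		}
	}
	if len(seen) != len(expected) {
		return false
	}
	for _, ip := range expected {
		if !seen[ip] {
			return false
		}
	}
	return true
}

func main() {
	responses := []string{"hello from 10.244.1.5", "hello from 10.244.2.7"}
	fmt.Println(respondersMatch(responses, []string{"10.244.1.5", "10.244.2.7"})) // true
	fmt.Println(respondersMatch(responses, []string{"10.244.1.5"}))               // false
}
```

The design point is that checking the exact set of responders, rather than just the string "hello", catches an implementation that routes to unexported pods.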
A: Everything down here is a lot of boilerplate setup: create a namespace dedicated just to this conformance test; set up, in both cluster one and cluster two, everything that's necessary, at least in terms of namespaces; and then create our hello deployment in the exporting cluster, which in this case is cluster two. And then way down here somewhere, after all that setup, there are these two checks.
A: The checks, for UDP and TCP, actually connect across these clusters: they request the VIP, the clusterset IP, and then, as mentioned, confirm that the pods that responded on that clusterset IP are all the pod IPs we expect. So that's what's going on in here.
A: So that's what I wanted to showcase. There are still some more tests to implement, and then, as mentioned, there are some fixes to make the end-to-end tests generally more hip with the new versions of Kubernetes, so I'm definitely open to dividing and conquering more of this work if you're interested. But I also want to double-click on that shout-out to Nick for taking a stab at it and helping improve the tests and get us closer to our long-awaited MCS beta status.
C: Awesome, that's the show. Thanks, Laura and Nick; really nice to see that.
A: Yeah, and I think if other folks here are representing their MCS implementation, your usage of these conformance tests, and also reports of any major issues you see in them, would be super helpful as well. So if you have some time, and have some clusters running your implementation of MCS lying around, and can run these end-to-end tests against them, that would be awesome; either open an issue, or, you know, send a message in the Slack.
D: It's me, hi; I'm the one that raised my hand. Okay, so I'm really glad to see the development continuing. I think that if the only thing standing between GKE's MCS implementation and conformance is the API group name, then I would love for this test suite to be able to show people that. I believe the reason for the difference in API group is around GKE's policy on which APIs they expose to users and what their versions are. And, you know, I think that is a valid strategy a vendor may want to take. It's just the group name, right? The resources aren't named differently.
A: Yeah, so I definitely agree that if this was a little bit more resilient, able to report more information without freaking out about the API group, that would be helpful. I mean, obviously it's helpful for GKE, but it might also be helpful for other upcoming or other vendor-specific implementations.
A: I definitely think we can open an issue about that and figure out how we want to proceed. I mean, right now, the reason it's throwing this is that it's trying to query the API as written in the MCS API, the type that's actually in this repo, so I think we'll need to do something to make that more resilient without having to make a mirror of everybody else's API group, I guess.
D: I think you can probably contrive a way to do it with the unstructured client, but I doubt there's a way to get any typed client generated off of those CRDs coerced into using a different API group.

A: Gotcha.
C: Yeah, I think that makes sense. I actually wonder if we shouldn't just parameterize the GVR, the group-version-resource, entirely. I'm just thinking, too: if we had a v2 (which hopefully we won't, but if we did, or as we progress) it might be useful to be able to say which version you're testing against.
A: I mean, maybe people who have more experience with, not deprecations, but version upgrades over time can weigh in. Even the point that these tests were originally written against EndpointSlice v1beta1 and, I guess, the alpha version of MCS: they're not in k/k together, so MCS is still an optional CRD, and this controller did work against clusters, at the time, that were on the beta API. So I think I would like to know from the chairs whether it's valuable, from a conformance perspective, for these e2e tests to still be backward compatible to a certain version of Kubernetes.
C: I would think that generally we want to be as backward compatible as possible, because one of the benefits of doing this out of tree was to not be tied to version releases. But I don't think we necessarily need to go out of our way to support things that have been deprecated for a long period of time. So, you know, we definitely shouldn't cut corners if we can avoid it, and I'd like to support fairly far back.
C: But let's not go crazy here. I think if a version has been in the wild for, like, a year or...
D: Yeah, I think we're pretty much on the same page. Do remember that the e2e binary itself can be released and versioned coincident with the spec. So I think there are two different axes in what we've talked about: one is a time axis, forward and backward, and another is a productization, or vendor-behavior, type of axis.
D: I think where my priority is, is that I would want to make sure the e2e suite is a useful tool for vendors, saying, at this point in time, how close to conformance is your implementation; and in time, as we move forward, we can cut versions of that. But I'm less worried, for example, about being backward compatible all the way back to early EndpointSlice, which is relatively ancient in the lifetime of MCS. So I think that's where I stand, yeah.
A: I think the nuance here is that the version skew in the end-to-end tests is not from the MCS API, but from something in core.
A
It's
because
it's
endpoint
slice
and
then
but
I
do
agree
that
conformance
tests
themselves,
don't
need,
like
implementations,
are,
of
course
free,
but
particularly
because
we
chose
to
implement
this
out
of
tree
to
be
backwards
compatible
through
many
variations
of
endpoint
slice
or
any
other
dependent
API
through
in
the
past.
As
long
as
crds
were
part
of
kubernetes
for
as
long
as
CRTs
were
part
of
kubernetes,
but
I
do
think,
there's
a
lower
bar
for
backwards.
Compatibility
against
white
kubernetes
clusters,
you'll
test
against
for
end-to-end
conformance.
D: Okay, so we should think about whether it should, and I think it probably would be wise to do that. For example, there's a natural one that's included, right: you wouldn't be able to run any MCS implementation in a version of Kubernetes that didn't have all of the CRD features that are required. Yeah, so I...
D: This probably won't come up, but if there are other attributes of the way the tests are written that are going to be affected by the details of the control plane, we should think about how to surface those additionally, yeah.
C: Yeah, I would look at it like, I think all of that is a really good point, to me, to also save effort. I think EndpointSlice is the newest dependency, and, well, GKE also works with Endpoints, but I don't know that we want to have to deal with that for conformance, right. And 1.21 is certainly mature at this point, and that's where EndpointSlice went stable, I believe. So, you know, yeah, when figuring this out...
C: I think that would be reasonable. Like, go ahead and implement something that supports older versions, for sure, and I think a few of the implementations do. But in terms of conformance, you know, that's still significantly mature at this point for us to call that a baseline, given that MCS is not yet a GA API. I think that, to me, feels reasonable in terms of backward compatibility.
A: Okay, great. I will proceed with 1.21 as the expected cutoff, since our freshest dependency was v1 then, and it's quite mature. We'll also look into encoding the minimum version required into the MCS API, though that may still be as far back as 1.18; or we could decide something somewhere in the middle, or we could decide it's also 1.21, but I'm not sure that we want to do that.
A: I don't get the sense that we want to do that. And then the other takeaway was that we're cool with parameterizing the API group, and you gave some ideas of how to do that, or even just the entire GVR.
A: Cool. Well, again, help is always appreciated, and holler at me if you want to get involved.
C: Well, thank you all; have an excellent Tuesday.
D: Yeah, Jeremy, let's talk about KubeFed. So Jeremy and I have had an open item that we needed to follow up with the group on, and here's what I think Jeremy and I are aligned on doing: let's give it six weeks from today, and at the end of that six weeks we'll cut a tombstone commit for KubeFed and archive the repo.
D: You know, as we've said before, archival is not deletion. People can work with that code, they can use it, etc. But the figure we have in mind, to bring this thread on deprecation to a close, is that we'll do that at the end of six weeks from today. I think that's what we discussed, right, Jeremy?
C: Yeah, I'll dust off an old draft email and send that out to the group today to start the six-week clock. And a reminder: KubeFed is archived, but it's not deleted. Anyone who wants to use it still can; fork it, run with it as you will. But the official SIG repo will be gone, or archived, not gone, sorry.
C: All right, with that, now we're actually done. So thanks, everyone, have a great Tuesday, and we'll see you in a couple of weeks.