From YouTube: Kubernetes Federation WG sync 20180321
Description
See this page for more information: https://github.com/kubernetes/community/blob/master/sig-multicluster/README.md
A
So we had a couple of expectations when we finished Monday's meeting, two pointers. One was a generic pointer: we basically need to converge on the building-blocks API; that's what is needed or expected from today's meeting. The other point I remember is that, while we were finishing the last meeting, I think Paul, you suggested that maybe Maru or somebody would prepare the differences between what the prototype has and the documentation, yeah.
A
My intention is that these are sort of open. The whole reason we are actually doing v2, why we moved to this space, was that we are trying to figure out a multi-cluster API which might be a fit, or suitable, for multiple implementations, because of some inherent issues we saw with v1 which were difficult to solve, or not solvable, at that point in time. So the same sort of applies to the current implementation.
A
So I think there are five that I listed. One that still remains is version skew. The other one is which API, which version of the Kubernetes API, Federation should use. There have been discussions around this: if there were a generic way, Federation could understand that, for example, a Kubernetes cluster supports three different versions of the apps API, and then Federation should also be able to understand all three of those versions of the apps API. But there is no concrete solution for that yet; I mean, it's either...
C
The way the controllers are built, I think we'll basically use the latest version that we know about. So if Federation has v1 of, say, Secrets at the time that it's built and released, that's how it's going to talk to the underlying clusters. If it has only v1alpha1, that's how it's going to talk to the clusters, and when you upgrade you have the possibility of changing the version that's supported. But to me this is kind of like...
C
It's not really something that we have a lot of wiggle room over, because of the way that typing works and the way that client libraries work. We can only use what we know about. Rather than supporting multiple versions, we just use the latest version we know about; I don't see any value in supporting previous versions. If we know about v1, there's no point in talking to the server in a v1alpha1 mode, or beta1, or whatever. Does that make sense?
A
Yeah, I sort of agree with what you are saying. Actually, our previous discussions also sort of concluded this; that's what I have written in the notes section. But there are some gotchas with this as well, with respect to version skew, because users are free to federate clusters of different versions. Now that's...
C
Not true; we can't allow them to use any version. It can only be forward versions, and the reasons for that have been discussed in depth previously. So you're correct that they could potentially have, say, a 1.9 Federation and 1.10, 1.11, 1.12 clusters, however many forward versions of Kubernetes we're supporting, and, as you say, there is the potential that maybe it's v1beta1...
C
...in one cluster and v1 in another cluster. But essentially Federation is still gated by the maximum version it knew about when it was built, so v1beta1 is what it's talking to all the clusters with. The fact that one of the clusters might support a newer version is something that we essentially ignore: we're just going to talk v1beta1, and we have to rely on Kubernetes backwards-compatibility guarantees.
B
If you try to federate an API, and you say that you use a Federation that knows about, say, 1.10, and you try to use that against a cluster that's on 1.3, you should not have an expectation that that is going to work. And by the same token, I think we should be very careful about qualifying that, when you use a cluster that is ahead of what your Federation knows about, you're getting into territory where things may not work as you expect. So, for example, Kubernetes APIs are supposed to be backward compatible, but it's not uncommon at all to add validations that were missed in an old version; I've done this several times, for example. And when you add a validation, you basically make resources that would fail the validation now invalid. And so, in the interest of keeping unexpected validation errors to a minimum...
C
So as long as we're clear: forward version, okay; backwards version, not okay. And then users, if they want to have different versions between member clusters, just have to understand there's a certain amount of risk involved. It certainly would be something that, you know, we're supporting in the sense that we're going to try to propagate these things, and you'd have a reasonable degree of success.
C
But the concern was that backwards compatibility works to a point. If you're at a given version of Federation and you're talking to a previous version of Kubernetes, you may have new fields that define new behaviors, and users are setting those fields. And if those resources are applied to an older version of Kubernetes, the expected behavior won't necessarily occur, because those fields don't exist and therefore it's undefined.
A
But it's quite a rare case if you think about it logically. What will happen, if we continue to do active development, is that my suggestion is to sort of follow a particular release of Kubernetes. So, for example, when 1.10 is cut, Federation would know: okay, probably the latest API available for apps in 1.10 is apps/v1, so let's use this particular version for apps resources.
C
Right, no, sorry, I was talking about the Kubernetes version. Think about it like this: I have a v1beta1, you know, replica set or something, and in Kubernetes 1.11 it'll appear one way, and in 1.12 they'll add a field, and the addition of that field will change behavior, and there'll be an intelligent default. But effectively, if you don't provide that field, you're getting that default behavior, which may or may not be what you want, but at least the default provides it.
B
The explanation is pretty clear. It might be worth, though, since I wouldn't say that these things we're discussing are obvious, right? They're not super obvious even to us on this call as subject-matter experts. So it probably is worth decomposing what the different types of skew are, and...
A
And this, as Maru I think is well aware, has been discussed time and again, and the only thing I am trying to reach is a conclusive mechanism to address this, or we say that the only way of addressing this is that we keep forward compatibility only, and that's how Federation in open source will go. Yeah.
C
Go ahead; sorry, I was just going to say: what I would hope is you could put the conclusion down, because I think at least some of these issues we've decided on. But I do agree with Paul that there is maybe insufficient documentation as to why those decisions were reached. That way we can have clarity as to how things are going to be, and then we can make sure that we backfill and bring people up to speed as to why that is the case.
A
Paul, I was actually thinking of whatever I have put here as open problems. They can be added as an FAQ kind of thing, for example a question or a problem, and then possibly suggestions or solutions for it, in whatever conclusive doc we use for the API. So that then finishes that content. You can go ahead and do that, and...
C
So my suggestion is, like I said: the conclusion on version skew is forward versions of Kubernetes, that is, the Kubernetes version has to be greater than or equal to the Federation version. And then, as far as the version which Federation should write a particular resource as: the maximum version it knows about. Since we're only doing forward versioning of the release, we're guaranteed that every member cluster will support, at a minimum, the maximum version of Kubernetes that Federation knew about when it was built and released.
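The skew rule just stated can be sketched as a small check. This is a minimal illustration with hypothetical helper names (the actual controllers are written in Go, and this is not their code):

```python
def parse_version(v):
    """Parse a Kubernetes release string like "1.10" into a comparable tuple."""
    major, minor = v.split(".")
    return (int(major), int(minor))

def skew_allowed(federation_version, cluster_version):
    """Forward skew only: a member cluster must be at or ahead of the
    Kubernetes version the Federation control plane was built against."""
    return parse_version(cluster_version) >= parse_version(federation_version)
```

Under this rule a 1.9 Federation may join 1.9, 1.10, or 1.11 member clusters, but a 1.10 Federation must refuse a 1.3 cluster.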
C
So these are really tied: if you don't solve one of these, you can't really make a decision about the other. But if you say forward compatibility, then you're kind of forced into deciding that you support the maximum version only. You can't support versions you don't know about, and why would you support older versions?
C
I guess part of this is that, in a previous world where the lines between the Kubernetes and Federation APIs were blurred, it would be possible to create a replica set in multiple versions: you could create it in v1, or you could create it in v1beta1, and then you'd be like, well, what version do I federate?
C
Well, in the case of federated resources it's going to be a little bit more explicit, because we're going to support whichever version we know about. We're not going to tie the version of the federated resource to the actual version we're going to propagate; it can vary over time. If I create a federated resource, and at the time I created it I only knew about v1beta1, and then in a future release you upgrade to one that supports v1 of that, you know, replica-set resource, we're going to federate v1 from then on.
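The "federate at the maximum version we know about" behavior described here can be sketched as follows. The preference list and function name are illustrative assumptions, not actual API machinery:

```python
# Hypothetical newest-to-oldest ordering of the versions Federation's
# client libraries were built with for a given group (e.g. apps).
KNOWN_PREFERENCE = ["v1", "v1beta2", "v1beta1", "v1alpha1"]

def propagation_version(cluster_supported):
    """Return the newest version Federation knows about that the member
    cluster also serves. Per the discussion, this can change on upgrade:
    a resource first propagated as v1beta1 is propagated as v1 once a
    Federation release that knows about v1 is deployed."""
    for v in KNOWN_PREFERENCE:
        if v in cluster_supported:
            return v
    raise ValueError("cluster serves no version Federation knows about")
```

For example, against a cluster serving only beta versions of the group, the newest known beta is chosen; once v1 is both known and served, v1 wins.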
C
I think that's true in the case of push reconciliation, and where people want different behavior we would point them at the possibility of pull reconciliation, because then they'd be able to define it themselves. I mean, a pull reconciler could be as comprehensive as they want: it could have full admin rights, but it could also have limited rights to selected namespaces in that particular cluster. It's kind of kicking the can down the road, but I think that's what we need to make progress.
A
On defaulting and validation: I just tried to list them, and, as I mentioned, my intention was to probably create an FAQ kind of list at the end of that API documentation, which can be the authoritative location for these problems. So defaulting and validation is also there, for which I think Maru has suggested some solution currently that can be used; I saw that in the documentation you have listed around that API documentation.
C
I think we're probably going to have to develop strategies that are more in the form of monitoring and documentation, like educating users as to how to detect problems that occur in a distributed system. In the same way that you can create a pod with an image that is not pullable, and that's valid, it won't validate the image name, but then, once you actually try to run the pod, you know, the kubelet will go: oh, it just doesn't seem to work.
C
Darker
will
complain
ever
and
the
user
needs
to
understand
the
implications
that
like
just
because
I,
submit
something.
The
API
doesn't
necessarily
mean
that
it's
correct
and
my
workload
will
run
and
they
have
to
be
able
to
detect
when
problems
occur.
So
I
think
we
kind
of
faced
the
same
challenge
with
Federation
in
educating
users.
C
I mean, the way that at least the push reconciler in the prototype works is based on how v1 works, where there's a cluster controller that does health checks, and this could be a health check too. It could even be something that, you know, we could configurably disable, so if people really wanted to do that they could, but by default we would just say: sorry, this cluster is not available, because it is not of the correct version.
C
You're talking about the defaulting, so that's sort of a good thing. Yes, that's more about version accounting, so that when I'm propagating your resource, I'm recording the resource versions that we used to generate that resource, and the resulting resource version after a successful propagation, an add or update. So yes, that is implemented. Okay, and a lot of this is in the context of push reconciliation; in the case of pull reconciliation it would obviously be different.
C
You're right, so the problem is that you can't do a direct comparison. I think there are two problems: defaulting is one thing, validation is another, so I would separate those out and have a separate item for validation and one for defaulting. For defaulting, the problem is that it's not possible to compare a resource created in the Federation API with a resource created in a Kubernetes API.
C
Oh,
let
me
update
that,
and
it
would,
you
know,
make
a
call
in
the
next
reconciliation
loop
you'd
be
like
oh
still,
not
the
same,
and
we
just
endlessly
try
to
update
it
so,
rather
than
include
all
the
defaulting
or
try
to
include
all
the
defaulting.
The
suggested
approach
is
to
track
the
versions
of
the
resources
involved,
so
a
version
of
template
version
of
overrides
match
it
to
the
version
of
the
resource
that
we
add
or
updated,
and
then
the
next
reconciliation
will
loop,
we'll
go.
Oh
the
resource
versions
haven't
changed.
C
Right, so if that occurs, the resource version of that resource will change, the recorded resource version in the controller will be different, and so the controller will know: oh, I have to add or update; it is different. Essentially we're substituting an equality check with a version check: rather than saying "are these different", I'm just saying "are the versions that I know about different?"
A
What about the rest of the stack? So let me give an example. For example, a replica set is created, and in addition a federated replica set, so it has a template. It omits everything except a few fields, for example the total replicas or something like that. Okay, this is created by the controller now into cluster 1, and cluster 1 applies defaults and all, and...
C
The only thing we track are the versions. In etcd, the resource version is something that is incremented or changed every single time you make a change to the object. So if I take you through it: I have a replica-set template, and potentially a replica-set override resource, and then I generate the representation I want to appear in a given cluster. I...
C
Add
it
to
that
cluster
I
get
the
resource
back
as
part
of
the
ad
or
the
create
call,
and
then
I
record
its
resource
version,
along
with
the
resource
version
of
the
template
and
the
override
that
were
used
to
generate
it.
Now,
if
any
one
of
these
change,
if
the
template
changes,
if
the
override
changes
or
if,
if
the
resource
in
the
cluster
changes,
even
if
it's
an
innocuous
change
effectively
and
no
law
like
if
their
form
doesn't
change,
the
controller
will
detect
that
change,
because
the
next
time
it
sees
it'll
go.
C
The
version
of
the
resource
is
different
from
the
one
that
I
recorded
from
the
last
successful,
add
or
update.
So
I
need
to
perform
an
update
like
it's
in
the
same
way
that
I
there
used
to
be
like
an
equivalency
check.
Are
these
resources
equal
now?
What
I'm,
doing
or
saying
are
the
are?
The
versions
of
the
resources
involved
different
from
what
I
recorded?
If
they
are,
then
an
update
is
applied.
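The version-accounting scheme described above can be sketched like this (hypothetical class and field names; a simplification of what a push reconciler would record, not the actual controller code):

```python
class PropagatedVersions:
    """Resource versions recorded after the last successful add/update
    of a resource into one member cluster."""
    def __init__(self, template_rv, override_rv, cluster_rv):
        self.template_rv = template_rv
        self.override_rv = override_rv
        self.cluster_rv = cluster_rv

def needs_update(recorded, template_rv, override_rv, cluster_rv):
    """Substitute a version check for an equality check: instead of
    deep-comparing desired vs. observed objects (which breaks once the
    member cluster applies defaults Federation doesn't know about),
    compare the resource versions recorded at the last successful write."""
    if recorded is None:  # never propagated to this cluster yet
        return True
    return (recorded.template_rv != template_rv
            or recorded.override_rv != override_rv
            or recorded.cluster_rv != cluster_rv)
```

Any change to the template, the override, or the in-cluster resource bumps the corresponding resource version, so the next reconcile pass triggers an update without comparing defaulted fields.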
C
It doesn't matter what the fields are: whenever any field changes, Kubernetes updates the resource version; that's something that comes from etcd, it's like a sequence number. So if I change any field in the cluster, even one that doesn't appear in the template or the override, the resource version of that resource is changed, and that will be detected by the controller. So we're really...
A
Rely
I
do
that
this
is
detected
by
the
controller
I'm
talking
about
the
next
step,
like
controller
detect,
something
has
changed
now.
How
does
the
determine
that?
What
should
I
write
so
he
update
whatever
the
replicas
that
it
created
using
the
override
and
the
template
again
as
the
original
correcting.
C
How do you detect that? I mean, some of it should be solved by, I mean, detecting the problem. If, say, a replica set didn't get deployed, I think a reasonable expectation is that for any given application there's enough monitoring that you go: oh, my application isn't actually running in that cluster. And then the secondary, you know, response would be: why is that? Let's go look at the propagation logs. Oh look, I couldn't propagate because, you know, I was out of quota.
B
I also think it's important for propagation to do the right things, and when I say right things, I mean I don't think propagation can, like, solve these issues for you. But one of the things that, if I remember correctly, is in the design document is that propagation should have status, so that there is a place that you can look at for a particular propagation scheme.
C
And I believe we already have that: if there's any sort of error within a reconciler, events are generated and associated with that resource. So if someone were to, you know, get that resource and take a look at it, just like when you have a pod in a restart loop or something and you see a bunch of events, we would have something very similar.
B
So I want to tease out two different facets here, very hard to tell apart, but fundamentally different, at least to me. One of them is: as the propagation mechanism, I tried to put something into a cluster and I couldn't; I was prevented from doing so by, for example, the admission controller enforcing my quota in that cluster, or the quota for the service account that I'm using, so the API server wouldn't even accept the resource. I couldn't put it there, but...
B
Like I said, I'm more sharing the questions that are in my mind than any answers, since I don't have any answers for this yet. But it's something that we should think about, and I think you're right that the right thing to do at ten clusters may not be the right thing at all for a thousand clusters, but...
C
That
said,
I
mean
the
UX
of
having
it
on
a
given
resource
is
probably
terrible,
but
I
I
mean
events
I'm,
presuming
like
I'm,
assuming
this
isn't
a
problem
unique
to
us
and
that
there's
it's
there's
a
way
to
slice
and
dice
events
in
the
same
way
that
you
would
say
you
know,
grep
through
or
filter
like
a
syslog
or
any
other
form
of
like
you
know,
logging.
So
my
perception
of
how
you
would
use
it
maybe
didn't
line
out,
but
I'm,
not
sure
the
mechanism
for
recording
the
problem
would
be
different.
B
So
here's
here's
another
thing
that
it
just
came
to
mind
week.
We
could
have
a
resource
that
actually
represents
an
error.
That
is
not
an
event
like
one
thing
about
events.
Is
they
have
a
TTL
so
like
they
only
live
for
so
long
in
the
API
server
before
they're,
just
garbage
collected
and
I
mean
I,
it's
not
a
problem,
it's
so
so.
Here's
here's!
What's
in
my
head,
say
that
say
that
we
do
have
a
thousand
clusters
and
I
think
it's
probably
arguable.
B
Although
I
don't
have
any
experience
to
back
this
up
that
if
you
had
a
thousand
clusters,
you're,
probably
less
less
interested
in
knowing
about
success
for
everything
than
you,
our
failure.
I
wonder
I!
Wonder
if
there's
a
way
that
we
can
like
call
out
things
that
need
user
attention,
that
shouldn't
have
a
TTL
like
events,
do
I.
C
And I guess I'm assuming the TTL is a concern because, if I have an error, I might miss it before it disappears. But I would say that, the way that at least a push reconciler works, if there's some sort of quota error, that error will reappear every five seconds, every minute; we'll probably have a back-off, but it will recur. So it's not like: oh, it hit the TTL and I'll lose that error.
B
I guess, at a qualitative level, I'm thinking to myself that this isn't something we'll probably solve in this call. I think we'll likely need to pay attention to this as we move forward, and I guess it's worth actually constructing scenarios. So, for example, we could artificially construct a scenario where you had an exhausted quota for Secrets, and you pointed the prototype at a cluster where your Secret quota was exhausted, and see what happened, and ask ourselves...
C
Mean
I'm,
I
I
think
I
have
a
pretty
good
idea
of
that.
Just
because,
like
I
said
Federation
to
be
one
isn't
too
different,
and
if
there
was
some
sort
of
problem
in
propagating
to
a
given
cluster,
the
reconciler
would
be
recording
hairs
on
every
reconcile
pass
on
that
resource
and
but
I
I
really,
don't
think.
That's
how
you
would
detect
things.
You
don't
think.
Oh
yeah,
you
want
to
go.
Look
at
the
Federation
logs
as
your
first
line
of
defense.
It
would
be
I
have
an
application
that
is
depending
on
that
secret.
C
That
is
now
blinking
rad
on
my
monitoring
dashboard
and
then
I
will
go.
Why
is
that
the
case?
And
then
maybe
you
know,
one
of
the
secondary
steps
I
would
I
would
take
would
be
go.
Look
at
the
org
that
applications
resources,
so
maybe
I,
mean
I,
think
there's
kind
of
two
levels
to
this.
One
is:
am
I
generating
enough
detail
those
events
or
resources
or
whatever,
so
that
someone
could
diagnose
the
problem
and
then
having
sort
of
layers
above
that
they're
like
I,
want
to
see.
C
You
know
what
are
the
errors
for
everything
in
this
namespace
I
think
that's
something
you
can
do
with
events
by
default,
but
then
maybe
a
you
know.
Another
use
case
is
I
want
to
see
all
the
errors
for
the
application
defined
by
this
selector
like
maybe
whether
that's
something
that's
easy
to
do
or
not,
but
like
I
can
imagine
wanting
that
sort
of
user
experience
where
it's
like
I
have
an
application.
F
If I understood the questions correctly, we were very happy with what is being proposed by Paul. Sometimes I get to a point where the statuses of the clusters could be something configurable, or something that can be dynamically changed according to the implementation. So, if I want to drill down to have more knowledge of one particular cluster, I will implement something that will expose that information.
F
No, that actually makes sense, but our vision is something that we already spoke about internally; it's not that different from what you said previously, sorry. So when you have a pod that doesn't work correctly, what you do, what we normally do, is log on to each single node, go to the journal, and start understanding what is happening. In our vision, Federation should work similarly: when you have an application that does not work correctly, you have to go to the cluster and try to understand what is happening.
C
Yeah, that would be great. And I think, to reframe a little bit, because as I've been listening to you I've been realizing: the fundamental problem is, how does a user detect when a resource is missing from a cluster? Like, it was intended to be propagated, and it wasn't propagated for whatever reason. Your application might fail because the secret is missing, or a pod wasn't deployed, or whatever, and the root cause is: something is missing.
C
How
do
you
detect
that,
like
you,
probably
can't
detect
that
at
the
cluster
level,
unless
you
know
exactly
what
is
making
out
that
application,
and
so
it
would
only
be
like
this
application
isn't
running,
you
know
it
would
be
like.
Are
there
any
errors
related
to
propagation
for
the
resources
of
that
application,
which
to
me
implies
like
knowing
like
having
a
sense
in
the
Federation
side
like
what
like
having
an
application
grouping,
I,
guess:
I,
don't
know
if
that's
useful
or
if
there's
other
ways
to
do
it,
but
if
I
just
focus
on
that.
F
Events are good stuff to troubleshoot with eventually, but they are very bad for trying to automate stuff, and events are very short-lived in the cluster. So we don't do that, or at least we shouldn't do that. Now you have several clusters, and I cannot guarantee 100% what everyone is doing, but at least we are trying to avoid taking decisions according to events and automating stuff on top of events, because it's...
C
It's weird, because I guess the term "event" kind of implies something that's somehow an indication of state that, you know, is consistent, and I really think an event stream in Kubernetes is really just a bunch of, well, it's just logging. So, yeah.
C
I'd definitely appreciate more insight, if you have time to devote to that before the next session. I think this is sort of an active area of investigation. Even in Federation v1, there hasn't been a lot of investigation as to how to, like, give useful feedback to the user, or give them visibility into the system. So any help is very much appreciated.
A
The other is really the bridge, where a user might have an active CD or a control plane running, and they just want to ask a couple of things or whatever, and a new version comes in. They want one place to envision, which might support different Kubernetes API versions, so that's probably something that would be a good thing. But maybe, you know...