From YouTube: 2021-07-06 Community Meeting
A
All right, welcome to the kcp community meeting, July 6th, 2021. We skipped last week, so there's a bit more to go over this week. There's been some progress in PRs. GitHub changed the way they display a link to a PR; it's weird, I hate it, they write out the whole thing. Unrelated: some CI improvements, finding lint errors, running more tests, basically improving CI, finding things, and then fixing the things that CI identifies. David sent a PR this morning.
A
That is really nice, because it basically runs the demo and makes sure that the demo succeeds and produces the correct output.

I think it's infinity times better than the previous end-to-end tests we had, because we basically did not have end-to-end tests before, and I look forward to improving the end-to-end test story even more as we go through this and grow and add scenarios that we want to cover. David, do you want to talk about that any more, or is that a good overview?
B
No, I think it's okay. I think you're completely right that, in the end, it would not be a replacement for end-to-end tests; it's not sufficient, and, as you noted, it's a bit too bash-heavy as well. But at least it seems quite interesting to, you know, ensure that the demos that we use to showcase stuff, at conferences or for people to run interactively, are not broken by any change.

So to me that's more something that is complementary to what should, in the future, be the end-to-end tests.
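The check being discussed here runs the demo and verifies its output. A minimal sketch of that kind of test, assuming a hypothetical demo script at contrib/demo.sh and an illustrative marker string (neither is the actual kcp layout):

```go
package e2e

import (
	"os/exec"
	"strings"
	"testing"
)

// TestDemoRuns executes the demo script and verifies it exits cleanly and
// prints an expected marker. The script path and marker are assumptions for
// illustration; the real repository layout may differ.
func TestDemoRuns(t *testing.T) {
	out, err := exec.Command("bash", "contrib/demo.sh").CombinedOutput()
	if err != nil {
		t.Fatalf("demo failed: %v\noutput:\n%s", err, out)
	}
	if !strings.Contains(string(out), "demo completed") {
		t.Fatalf("demo did not produce the expected output:\n%s", out)
	}
}
```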
A
Yeah, absolutely. The demos we have now are currently the only end-to-end experience we have, and so we should run them in CI, but definitely non-bash tests will be great in the future as well. And I sent a PR last week that David had some feedback on, about using a multi-error package: when you do CRD negotiation, there can be dozens, hundreds, potentially millions of things wrong, and we shouldn't just fail at the first one. The worst thing is when a million things are wrong and it tells you, "here's the first thing I found wrong."
A
You fix it, and then it says here's another thing I found wrong, and then another thing, and then another thing. You'd rather just see, like, a thousand errors at once and then fix each one. And that is the case already: we already have that using the Kubernetes aggregate error type, I forget exactly what it's called. But yeah, the feedback for that PR was that we should use a different error package. I think there's still room for improvement on this; I think, David, your feedback is correct.
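For reference, this is roughly the pattern being discussed: a minimal sketch using the apimachinery aggregate error utilities (the validation check itself is made up purely for illustration):

```go
package negotiation

import (
	"fmt"

	utilerrors "k8s.io/apimachinery/pkg/util/errors"
)

// validateAll collects every problem it finds instead of returning on the
// first one, then reports them as a single aggregate error.
func validateAll(fields map[string]string) error {
	var errs []error
	for name, value := range fields {
		if value == "" {
			// Hypothetical check, purely for illustration.
			errs = append(errs, fmt.Errorf("field %q must not be empty", name))
		}
	}
	// NewAggregate returns nil when errs is empty, so callers can treat the
	// result like any other error.
	return utilerrors.NewAggregate(errs)
}
```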
A
We should come up with something even better, but I thought this was at least a slight improvement over what we had, something we can figure out. Unfortunately, there's a lot of fighting with Go's type system and Go's standard built-in support for this stuff.

I wish the language just had this kind of thing. I mean, I wish the language had a lot of things, but this is at least, I think, better than what we had before, and we'll keep working on that. There's also, I forgot to mention, the PR to rebalance across clusters as clusters add and delete and move and change, which is also still open.
A
I should make sure that it's all good after all the CI improvements, and that I haven't broken something in there, but that's still good to go. Over the last two weeks we've also talked to a lot of folks, both on Slack and in meetings, about improvements for multi-cluster networking, how this all works with Submariner, and networking across clusters in general. I don't know if anybody has any updates.

I would guess Joaquin is probably the person most likely to have an update there? If not, that's fine, but I'm just curious.
C
Well, honestly, about multi-cluster I don't really have an update. We are working on defining how to properly propagate the services et cetera, but nothing specific today. And about the global load balancer, I've been working with it, and basically I hit some of those questions that you have here for discussion, which is: in what cases should controllers run against kcp versus physical clusters, for example. That's one big question that we need to answer.
A
Yeah, thanks, I think that's it. That's a good segue into the discussion that I wanted to talk about. Over the last week or so I've been trying to write things down in preparation for the next thing: the live stream walkthrough of kcp tomorrow at 11 am, to try to refine and update the elevator pitch of what kcp is and how it works, down to the bare minimum. As part of that, I wanted to try to come up with a good, short description of when a controller should run against kcp versus a bunch of them running against physical clusters.
A
CRD negotiation is fantastic, and honestly borders on magic, but it is only necessary when we expect those CRDs coming into kcp to go down to physical clusters, to possibly be incompatible, and to possibly need negotiation in the first place. If we ran everything against kcp, that would sidestep the CRD negotiation issue and incompatible controller behavior.
A
But then, you know, we don't remove problems, we just move the problem over to kcp, where now we have performance bottlenecks, resiliency and single-point-of-failure issues, and any number of other things.
A
I don't think there is an answer; I don't think we're going to get to an answer of "you should always run against kcp" or "you should always run against physical clusters." But the thing I've been thinking of is concisely and correctly describing: if you are this kind of thing, prefer running against kcp, and if you are this kind of thing, prefer running against the physical cluster, and possibly both: possibly you will need to run some components in physical clusters and some components against kcp.
A
I was talking to Clayton a bit earlier today and he had a good insight. I had been thinking that Knative, for instance, could just run against kcp: you know, create pods and send those down to the clusters, with the clusters being just regular, vanilla Kubernetes clusters without CRDs or controllers.
A
But then the Knative service-to-pod translation controller would run in kcp, and you end up having a single point of failure in kcp.
A
Be able to modify them, but the data should still flow through. In short, I think the answer is: it's complicated.
A
I think we all know that it's going to be complicated, but I'd like to have a better idea of identifying behaviors of controllers, and types of controllers, to say: you should prefer to run against kcp, or you should prefer to run against physical clusters, and maybe split your controller up. You know, if your controller does eight things, if it has eight reconcile loops, these three should be over here and these five should be over here. That definitely ties into the networking stuff that we're talking about, about giving kcp things like services and load balancers.
A
Should those end up getting passed down to the clusters, or should they be reconciled against kcp? I don't know, but we will find out, I think, as we keep going. David, I don't think you and I have talked too much about this. Do you have any ideas?
B
Yeah, some ideas, yeah. I mainly have the feeling as well that the border between those two layers is somehow fuzzy, or moves according to requirements and, mainly, the topology and the type of things that have to run on both layers. So that's mainly my feeling too, but I would still find it quite interesting if we more clearly defined the type of constraints or the type of structures.
B
That would mostly point you, you know, to the kcp layer or the physical cluster layer. So having some sort of advice for users is quite important to me. Maybe just one point I was thinking about, regarding CRD negotiation or, more generally, the consistency checking of APIs.
B
It seems that it spans wider than only physical clusters. If I'm not mistaken, the fact that an API comes from a physical cluster or from another logical cluster is just, you know, an implementation detail, so I assume that it's not something that we've envisioned or implemented.
B
Concretely, in our current use cases. But if I'm not mistaken, the idea of logical clusters is also to have quite small logical clusters with dedicated workloads inside, and then have them discussing, or interacting, between several logical clusters. In such cases, I assume we would have some use cases where we would import APIs from several logical clusters that somehow interact together. So maybe that's also another area where CRD negotiation or, let's say, API consistency, comes in.
A
Yeah, that's a good point. Because we have separated physical clusters and logical clusters, and we've done this on purpose, so that people can logically talk about domains, or...

A
No, I think it's worth pointing out, because, right, it's not just saying that if we can tell people "physical clusters or kcp", then we will have obviated the need for CRD negotiation. We'll still have the CRD negotiation problem, which is good, because we have a solution for it; I would like to not have to involve it as much, but we'll still have to have it. The reason that I got into this line of thinking is:
A
It's all out the window and there's no way to describe it, right? There's fundamentally no way to tell whether two controllers will have the same behavior, for whatever that means, with each other. The test case that I had been using in my head was: controller version...
A
Version one sets some field to false if it's unset, and version two of the same controller, for the same type, unsets it if it's false. So if these two controllers are running against the same type, they're going to battle each other and continually set and unset this field forever. That has nothing to do with CRD type negotiation: the type and the field are the same. There's a way to detect that this field is going back and forth, and to raise your hand and say, "hey, something's happening, operator," but there's no way to detect it and stop it automatically, at scale, assuming this behavior across thousands of fields for thousands of types across thousands of clusters. This could be really, really bad, but there's no way to stop it.
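As a rough illustration of the kind of detection being described (not an existing kcp component), here is a minimal sketch of a detector that flags a field whose observed values keep alternating, assuming the caller feeds it each observed value on update events:

```go
package conflict

// flapDetector flags a field whose observed value keeps bouncing between
// values, the "two controllers fighting" pattern described above.
// This is an illustrative sketch, not an existing kcp component.
type flapDetector struct {
	history   []string // most recent observed values, oldest first
	maxLen    int      // how many observations to keep
	threshold int      // how many value changes count as flapping
}

func newFlapDetector() *flapDetector {
	return &flapDetector{maxLen: 10, threshold: 4}
}

// Observe records a new value for the field and reports whether the recent
// history looks like two controllers overwriting each other.
func (d *flapDetector) Observe(value string) bool {
	d.history = append(d.history, value)
	if len(d.history) > d.maxLen {
		d.history = d.history[1:]
	}
	changes := 0
	for i := 1; i < len(d.history); i++ {
		if d.history[i] != d.history[i-1] {
			changes++
		}
	}
	return changes >= d.threshold
}
```

Something like this could run per object and field and only raise an alert for a human operator, which matches the point above that stopping the fight automatically is the hard part.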
B
Yeah, so such a use case is typically something that we can describe in the constraints about putting your controllers on the physical layer or the kcp layer: when you have them on the physical layer, then you have the risk of doing some incompatible processing on these shared APIs that will conflict when brought up to the kcp level, but then it's useful for other stuff. So mainly yes, I think trying to categorize the various use cases is important, because such an action as you're describing, one that would lead to a problem, would probably only be required in some dedicated use cases that we could precisely assign to the kcp layer.
A
And that issue is not technically limited to kcp: you can already run two controllers that fight each other inside a single cluster. There's nothing stopping you from having two battling robots, but the impact of that behavior is much lower; it's isolated to that cluster, and you can probably more easily tell that it's happening and stop one of them, as opposed to, you know, "one of my hundred clusters keeps setting this field and another one of my hundred clusters keeps unsetting it." What do I do?
A
How do I stop that? There also tends to be one version of one controller for one type, in general: nothing stops you from having two controllers over the same type, but in practice people create a type and then create a controller for that type.
A
So it's really version skew of the same controller code base on the same type that we're worrying about. I don't know if there's anything we can do to stop it, but we should at least, and Clayton's idea was this, have something that detects a back-and-forth, robots-fighting scenario, and that is useful even in a single cluster; that is a useful controller to have.
B
This case is a very simple one, but you can have more complex interactions between fields of a single resource, driven by several controllers, which would be much harder to detect, I assume.
A
Yeah, it doesn't have to be A to B to A; it can be A to B to C to A, or A to B to B, or A to B to C to B to A. In general, something is weird, maybe just by volume of updates: this object keeps getting updated, are you sure you want to keep making updates to it? That's odd.
B
And maybe a first step could be to detect all the controllers that watch a given resource; I mean, detect resources, well, detect APIs of exactly the same version that are managed by several controllers. Then at least you would be notified of that, even if it doesn't stop it. Being able to do that matters on a very big deployment where you have thousands of physical clusters connected to a kcp.
A
Yeah, and it would be relatively easy to detect, especially if the owner references are set: you can just scan everything and say, hey, this object seems to say it has two controllers.

Are you sure? That's weird, not necessarily impossible, but certainly something that raises suspicion. Anyway...
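A minimal sketch of that kind of scan, assuming the usual apimachinery types; "two controllers" here simply means more than one owner reference with controller set to true:

```go
package conflict

import (
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// hasMultipleControllerOwners reports whether an object claims more than one
// controlling owner, which is legal but suspicious and worth flagging.
func hasMultipleControllerOwners(obj metav1.Object) bool {
	controllers := 0
	for _, ref := range obj.GetOwnerReferences() {
		if ref.Controller != nil && *ref.Controller {
			controllers++
		}
	}
	return controllers > 1
}
```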
A
Of the things I had been thinking of, I need to come up with more concrete examples of types of operators that should run against kcp and types of operators that should run against physical clusters, and probably even finer-grained types of operations that you would want to happen at kcp and types of operations you would want to happen elsewhere. Like the case above: the Knative autoscaler operation should happen in physical clusters, but the Knative service-to-pod translation controller could prefer to happen at the kcp level.

Anyway, does anybody else have questions or thoughts on that topic? That's been the thing burrowing through my brain over the last few days.
C
I think there's also... go ahead. I'm sorry, I just wanted to ask if you have some notes or documentation on why you went with deployment leaves, you know, the deployment controller that creates several leaf deployments, instead of, for example, creating the actual deployment object directly in a cluster. Is it because you want to actually represent those deployments, or...?
A
So, first of all, I don't have notes or a rationale document for it, mainly because I don't have much of a rationale for it. The way that it is written is not, you know, written in stone. If we decide we want to create those things differently, we would still create them in the logical clusters and then have them synced to the physical clusters.
A
We wouldn't reach out to the physical clusters and create them there, but we could do smarter things by creating them in other logical clusters instead of all in the same logical cluster that the original deployment came into. I wouldn't say it's written in stone, so I don't want to, like...
A
It is convenient, though: if I create a deployment in kcp and I want to see what it created, right now I can just do a kubectl get deployments and it'll show me all of them in one place, as opposed to having to search across all the logical clusters they might be assigned to. That's not a reason why it should be that way forever, but it's certainly convenient, right?
B
Yeah, and maybe I'd also add that at the beginning, thinking of the demo and the various demo use cases, it was quite interesting to distinguish between, let's say, transparent syncing, which the syncer does whatever the type of object is, and the controllers dedicated to a given object: typically the deployment splitter, which changes the object that you have in kcp to derive some sub-objects for physical clusters.
B
These are semantically two distinct things, but obviously in the future, and I think we already discussed this with Clayton, the syncer could be somehow intelligent, or have some sort of hooks that you would be able to hook into to change things.

It could do some transformations while syncing; that means that what we currently do in really two distinct steps, you know, syncing and splitting, could in fact be done during the syncing, especially in the case where the syncer lives in the physical cluster.
B
So the pull mode, in which case the physical cluster would just watch for objects in kcp, and then, with some logic that is hooked into the syncer, possibly specific logic that people could add, it would directly create the right objects at the right place in the physical cluster. It's a completely open area, but obviously, as a start, it was much easier to completely distinguish the two aspects: generic syncing on one side and specific actions like splitting on the other.
A
An alternative way that splitting and syncing could work, if it were one thing, would be: a user gives a deployment of 15 replicas and somehow, big question mark, the three physical clusters attached have syncers that search for unsynced things, or unsplit deployments, and they say, "oh, a new deployment has arrived, I will take five of its replicas, create five replicas locally, and somehow update the status on that single deployment object that I'm watching." That's certainly possible.
A
Instead of creating leaf deployments, it means annotating the single deployment with how it should be split, and syncers can just watch that single object and act on it. It could be a single phase, or two phases. The nice thing is that if we decide we want to play with that route, it wouldn't be terribly hard to do: I think we could do it with the deployment splitter, instead of creating leaf deployments, by annotating that deployment and having syncers watch it.
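As a sketch of what that annotation approach might look like: the annotation key and the even-split policy below are hypothetical, purely for illustration of the idea being discussed.

```go
package split

import (
	"encoding/json"
	"fmt"
)

// splitAnnotation is a hypothetical annotation key; the real name, if this
// route were ever taken, would be decided in kcp.
const splitAnnotation = "example.dev/replica-split"

// splitReplicas divides the desired replica count across the given clusters
// as evenly as possible and returns the annotation a splitter could write
// onto the deployment, e.g. {"cluster-a":5,"cluster-b":5,"cluster-c":5}.
func splitReplicas(total int, clusters []string) (map[string]string, error) {
	if len(clusters) == 0 {
		return nil, fmt.Errorf("no clusters to split across")
	}
	assignment := map[string]int{}
	for i, c := range clusters {
		share := total / len(clusters)
		if i < total%len(clusters) {
			share++ // distribute the remainder one replica at a time
		}
		assignment[c] = share
	}
	raw, err := json.Marshal(assignment)
	if err != nil {
		return nil, err
	}
	return map[string]string{splitAnnotation: string(raw)}, nil
}
```

Each syncer would then read only its own entry from that annotation, which is the "watch that single object" part of the idea above.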
A
I guess the nice thing now is that the leaf deployments are each labeled with the thing each syncer is watching, so we would need an efficient way to tell syncers "here is what you should watch" if we didn't have individual leaf objects. There's probably...
A
...a route worth exploring for services or load balancers or other types, and a good area for investigation is the optimal way to split a thing, or schedule a thing, across clusters and have things watch them. Another thing that I don't personally think of enough as a tool for syncing and splitting is this: let's say a deployment comes into kcp with 15 replicas.
A
There's nothing in the rules that says we have to create leaf deployment objects for those. We could create our own CRD type, you know, a "split deployment" type, with controllers in the cloud, and the syncer watches for split deployments and does something with them. We're not required to use the same type coming into kcp as going out, or coming into the splitter as going out of it. We just need to make sure that, on the other end, when it hits the actual physical cluster, the behavior is the same.
A
You never even have to actually create a deployment on the physical cluster; you could just do the regular thing with those. That's maybe not a useful tool, but it's a tool we haven't used yet, and I'd be curious to explore more how that works. So, for services, for example: if a service comes into kcp, we could create "foo service" objects and pass them down to the cluster, and the foo service object gets reconciled in some way to do something.
B
Yeah, and you would even be able to have variants of this logic to create virtual services, if you want to switch to Istio or anything else. It's completely open, the way we translate the objects that initially live on the kcp layer. Yes.
C
I was playing with exactly that: getting a basic ingress and then creating, into the physical clusters, the objects themselves, just to configure the ingress gateway in each one of those clusters. But I was curious about this deployment leaf thing, how to split those.

Whether I need to talk directly to the physical clusters or not, how to get the actual status from those objects back into the kcp object, whether we could extend the syncer, and all of that. That would be amazing.
A
Yeah, so the syncer should be the go-between between kcp and the physical cluster. kcp should never talk to the physical cluster API, or vice versa. The syncer pulls specs from kcp and potentially modifies them. Right now it doesn't do any modification, but it could potentially modify them in some way: setting fields, unsetting fields, adding fields, or changing the type; doing literally anything is possible. Then it creates the object in the local physical API server and watches that object it created, or any of the hundreds of objects it created. Maybe it creates bunches of things and then summarizes that status and passes it back up to kcp. There is a lot of unexplored possibility for how the syncer does that. Right now it's very, very simple.
A
It directly passes objects to the API server one-to-one, without modification, and I think there's a fertile area of development there: what kinds of modifications can we make in the meantime? Can we take a deployment and create five other CRD types from those deployments? Who knows. So yeah, I'm definitely curious to see where your investigation goes, because I think you are attacking a problem more complex than deployments, and you are probably going to hit those needs.
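To summarize the flow described above as code, here is a rough, simplified sketch of a one-object sync pass using unstructured objects. The store abstraction and the transform and summarizeStatus hooks stand in for the "potentially modifies them" and "summarizes that status" steps; they are placeholders, not the real syncer's API.

```go
package syncer

import (
	"context"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)

// store is a tiny abstraction over "read/write an object somewhere", so the
// same sketch covers both the kcp side and the physical cluster side.
type store interface {
	Get(ctx context.Context, name string) (*unstructured.Unstructured, error)
	Apply(ctx context.Context, obj *unstructured.Unstructured) error
}

// syncOne pulls one object's spec from kcp, optionally transforms it, applies
// it downstream, then reads the downstream status, summarizes it, and writes
// the summary back upstream. All hooks are illustrative.
func syncOne(
	ctx context.Context,
	upstream, downstream store,
	name string,
	transform func(*unstructured.Unstructured) *unstructured.Unstructured,
	summarizeStatus func(*unstructured.Unstructured) map[string]interface{},
) error {
	desired, err := upstream.Get(ctx, name)
	if err != nil {
		return err
	}
	if err := downstream.Apply(ctx, transform(desired)); err != nil {
		return err
	}
	observed, err := downstream.Get(ctx, name)
	if err != nil {
		return err
	}
	// Copy the summarized status onto the upstream object and push it back.
	desired.Object["status"] = summarizeStatus(observed)
	return upstream.Apply(ctx, desired)
}
```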
A
Yeah, that's kind of all I had: that discussion, and updates on progress in PRs. I have slides for the live stream walkthrough tomorrow. If people want to see them, I can share them publicly in the Slack before then, and if not you can come see them tomorrow at 11am.
A
It's recorded, so you can see it anytime, but yeah, I'm really looking forward to it. Does anybody else have anything burrowing holes in their head that they'd like to talk about?
C
Yeah, there's this other topic about moving the controllers to the Knative libraries. I don't know if...
A
Those frameworks depend on stuff in Kubernetes later than 1.18, and because we pinned to our fork of Kubernetes, which is a fork from 1.18, it was dependency hell to get it all working together. So I spent about a day trying to get it to work, failed catastrophically, and went back to writing controllers the regular way, like a caveman. But David, in his CRD negotiation stuff, had some good idioms and patterns that we can follow.

I'd like to extend those to some of the splitting stuff I'm working on, but yeah, I think using Knative stuff will be a dead end until we either update our fork to be on a later Kubernetes or, in the long term, not have a fork at all.
B
Yeah, maybe I would raise the question. I think, Joaquin, you asked me some time ago, and you also mentioned it in the kcp prototype channel, about how to manage the Go client.
B
You want to work with objects that you create in kcp, but whose API in fact came from a physical cluster, because now we're importing those. Stop me if I'm not describing that correctly, but what I understood is that we now import APIs from physical clusters as CRDs, and so when you want to interact with those in kcp and you would rather not use the dynamic client, then you end up trying to use client-go.
C
Yeah, it was that one. I just wanted to create a controller for ingress, for example. In this case, the client-go that we have in kcp, the fork, only supports, I think, ingress v1beta1, I don't remember exactly, but I wanted the latest version and it's not available, and that shows up when I try to import those types from client-go.

Of course, there is a replace in the go.mod. Just a disclaimer, as you can see I'm not an expert in the Kubernetes libraries or anything, so I'm a little bit lost with that, but I'm having problems getting those definitions from the latest version of client-go into a controller that uses kcp objects.
B
Yeah, and after thinking about it, I didn't really know what to answer; I mean, I was not able to give a definite answer, but it seems that it raises a wider question about how we do this. You know, we have API imports, and in some cases we can even calculate the least common denominator for some APIs if you have several physical clusters. But then what do we have on the client side to access those APIs? How do we relate that to the client side?
C
Totally. At least having some small guidelines, or an example that covers this use case, would be amazing, because I'm a little bit lost. I'm trying now with the dynamic client and I'm learning how to use it and everything, but you know, it would help.
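For anyone following along, a minimal sketch of the dynamic-client approach being tried here, listing Ingresses by group/version/resource without needing typed client-go structs (the kubeconfig path is a placeholder and the error handling is kept to the bare minimum):

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Load whatever kubeconfig points at kcp (or a physical cluster).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	dyn, err := dynamic.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// The dynamic client only needs the group/version/resource, so it works
	// even when the typed clientset pinned in the fork lacks this version.
	gvr := schema.GroupVersionResource{Group: "networking.k8s.io", Version: "v1", Resource: "ingresses"}
	list, err := dyn.Resource(gvr).Namespace("default").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, item := range list.Items {
		fmt.Println(item.GetName())
	}
}
```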
B
So maybe, as a first step, the controller that you want to build could be built as a distinct project. For example, the deployment splitter can be run completely out of process from kcp; you just have to point it at the right kubeconfig. So, short term, at least for you not to be blocked, it would be possible for you to just have your controller as a distinct Go project.
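That out-of-process pattern is the standard "controller pointed at a kubeconfig" setup. A minimal sketch, with the flag name and the version check chosen only for illustration:

```go
package main

import (
	"flag"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// A separate Go module can pin whatever client-go version it likes and
	// simply be pointed at kcp via --kubeconfig, instead of living in the
	// kcp repo and inheriting its 1.18 fork.
	kubeconfig := flag.String("kubeconfig", "", "path to a kubeconfig pointing at kcp")
	flag.Parse()

	cfg, err := clientcmd.BuildConfigFromFlags("", *kubeconfig)
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	version, err := client.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Println("connected to", version.GitVersion)
}
```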
C
Listing the clusters, or any information I need, it's only in there, yeah, so you need to have...
A
I understand. I think one way, and we've talked about this in other contexts, one way to solve this is to re-atomize the repos and have a kcp repo that is just kcp, with no cluster controller, no deployment splitter, nothing else in it, and have a cluster controller and types repo that you can depend on, one that's not based on, not tainted with, the fork of Kubernetes that we use, and a deployment splitter...
A
...that depends on the cluster controller, because it needs the cluster type, but isn't otherwise dependent on the fork. That is a bit of a daunting accounting task, and mechanically moving things around is sort of annoying, but if it means that people can write controllers against kcp, then that's what we're here for: we're trying to make that happen.
A
Maybe we can solve it with different Go modules inside the same repo, but that feels gross too, because multiple Go modules inside of a repo are also gross. Yeah, I would be interested and willing to meet outside of this community meeting and hack on this and see what we can do to fix this and unblock you.
A
It shouldn't require you to have specific definitions of things, or to be pinned to this old fork, but because of the way we have laid out the repo, it does, and that's problematic. So short term, I'm willing to help you offline with this and try to get you unstuck, and if there's no way to get unstuck without breaking apart the repos, that's something that's on the table too, I think. David, does that sound right?
B
API consistency, well, API negotiation, might be... I don't know if it's only a cluster thing, because I don't know how, in the future, we would consider importing APIs from other logical clusters. It mainly depends on what the use cases would be in this direction, but...
A
Yeah, sorry, either way, wherever that code lives shouldn't affect Joaquin's use case, or my use case for deployment splitting, right? It shouldn't, yeah.
B
It shouldn't matter, yeah. I somehow agree with you about separating things, not having kcp depend strongly, and indirectly, on a given version of the Kubernetes APIs that would, in the end, transitively constrain everyone to use an old Kubernetes version of client-go. That's really a problem, it seems to me.
A
You have much more experience with our fork of Kubernetes than I do; how big a job do you think that is?
B
I think it's maybe two days of work, or three, I don't know, or even possibly less. But I know, it seems to me that I saw, that there are some areas touched around that that changed a bit, but I don't think that should be terrible, really. So I was mainly thinking about taking that on as one of the next things.
A
It's like a medium-term solution to the problem. It's not as grandiose a solution as breaking everything apart again, and if we do it regularly, each update of the fork will be easier. We'll need to update the fork anyway: when we come to proposing KEPs, we want to make sure that the KEP we propose is based on, and updated for, the latest Kubernetes, but...
A
So yeah, short term I'll reach out to you and try to figure out a time to go through this, and see if we can figure out what dependencies we need to pin, or whatever. Updating the fork would be helpful and is something we need to do eventually anyway; not a high priority, but something. I'm glad that you think it's a couple of days and not terrible. I have not...
A
...I don't have as much context on what is different between 1.18 and 1.22 for our needs. And long term, I think we do want to split things apart: as we contribute more of kcp upstream, kcp disappears as a thing, right, it just becomes "regular Kubernetes can be used in this way," and all of the other stuff around it should be its own repo; the cluster controller should be its own repo, et cetera. So yeah, does that sound good?
C
Well, in my time zone, for today, you...
A
I'll reach out to you and figure out when and how to help you. Yeah, that's perfect, okay, thank you. Yeah, no problem! Thank you for taking a look and doing this; you are the third person on earth to try this, after David and myself. So thank you. All right, with that, unless there are any late-breaking topics, I think we can end 10 minutes early.
A
All right, have a good week, everyone. Tune in for the live stream tomorrow and see me talk about kcp with slides; I'll also share those slides. All right.