From YouTube: Community Meeting October 5, 2021
A
Hello and welcome to the kcp community meeting, October 5th, 2021. The agenda is up here. David had some items already on the agenda, and I will add some while he's talking, to talk about after David. Take it away.
B
Hello. So, maybe, yeah, it's mainly about two points. The first one is the status of the kcp ingress work. In this regard I would probably just let you say a few words, because we mainly worked together on this.
C
Thank you. So, basically, we've been trying to join the kcp ingress controller with the workspaces controller. We have also been preparing the kcp ingress to work locally for the demos we're planning to do. Basically, the current status is that it's still using the splitter pattern, so we are still creating a secondary object based on the root ingress object.
C
We will discuss this later, as it introduces some issues, some challenges about who owns what and the concept of ownership. Right now, as we need it for local development and local demos, we included an Envoy xDS server, so it can configure an Envoy instance that runs locally on your laptop, and it will parse a small subset of the functionality of the v1 Ingress spec and configure that Envoy.
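As a rough illustration of the splitter pattern described here, the following is a minimal sketch of deriving a per-cluster "leaf" Ingress from the root Ingress. The function, the placement label key, and the naming scheme are assumptions for illustration only, not the actual kcp-ingress implementation.

```go
package splitter

import (
	"fmt"

	networkingv1 "k8s.io/api/networking/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// clusterLabel is a hypothetical label key used to pin the leaf object to
// one physical cluster; the real controller may use a different mechanism.
const clusterLabel = "kcp.dev/cluster"

// deriveLeafIngress copies the root Ingress into a secondary ("leaf")
// object that is scheduled onto a single physical cluster.
func deriveLeafIngress(root *networkingv1.Ingress, cluster string) *networkingv1.Ingress {
	leaf := root.DeepCopy()
	leaf.ObjectMeta = metav1.ObjectMeta{
		Name:      fmt.Sprintf("%s--%s", root.Name, cluster),
		Namespace: root.Namespace,
		Labels:    map[string]string{clusterLabel: cluster},
		// Track the root object so status can later be aggregated back onto it.
		Annotations: map[string]string{"kcp.dev/owned-by": root.Name},
	}
	// Status is owned by the physical cluster; the leaf starts with an empty one.
	leaf.Status = networkingv1.IngressStatus{}
	return leaf
}
```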
C
Well, basically, that's the current state. Honestly, it's kind of working, as you will see now in the movie, but then there are some next steps that I'm starting to take a look at, which is integrating with the external-dns controller, for more of a cloud approach — you know, to deploy that in AWS and get Route 53 to work with kcp ingress — and, of course, proper testing and improving the code and everything that, you know, we should always be doing.
B
Thank you very much. So there is a demo; in fact, I posted the link also here in the issue: the DevWorkspace running on top of kcp, with the kcp ingress controller. So the main difference — I don't know, it's around ten minutes, because I just recorded it also for anyone who did not see the first demo, so they are able to understand.
B
So I don't know — maybe we don't want to go through that fully here, I don't know, or we have time — but anyway, the result of this is really that we have a single URL to access the workspace, regardless of the physical cluster it's living on. So the demo mainly shows switching the workspace from one physical cluster to the other.
B
The ingress follows this move and, finally, the ingress is created in the east cluster instead of west, and also linked to the Envoy proxy, so that everything is transparent. Mainly you just have, let's say, 20 seconds where your workspace is unavailable, and then after 20 seconds, in the same browser, your workspace is available again, just running on a distinct cluster.
B
So, I mean, let me just summarize it like that. I don't want to take too much time if we have some other topics here, but anyone can have a look at the demo. I will probably post it as well on the channels, and possibly also on the devtools side, because I assume it might be interesting for them to have a quick insight into the fact that things continue to progress and seem to be a bit more...
B
You know, concrete now that you have a single URL to access your workspace and are able to abstract away the relocation. Yes.
A
It's pretty amazing. I mean, obviously a 20-second outage is not something you would want, but if it means I don't care that the cluster behind here disappeared and moved over there, then that's pretty good. Yes.
B
Well, I mean, I said 20 seconds; I didn't really, you know, measure the time, but it's quite... you know, it's just the time for the new deployment to start, which is mainly, you know, pulling some images. That might, of course, depend on the network, but on the kcp side it's somehow...
B
You know, just immediate. And — David, right?
D
When there's state, you're talking about exactly the minimum time necessary to get the minimum PV and then have everything attached, which of course is the Knative problem of, you know, how long does it actually take to start a pod? So we'd have to think through that, yeah.
B
But I was about to say, yes, that of course it doesn't take into account any process for first starting the second one before stopping the first one, which is what we would do in real life. Right now it's just, you know, forcibly switching the cluster label — and it's already quite nice to see. That's great.
A
Yeah, if us-east went down and all I noticed was 20 seconds of an unresponsive ingress while it switched over to another cluster, I would be incredibly happy with that — 20 seconds, up to a minute, up to, you know, five minutes.
D
Tell Facebook they should be using kcp. And actually, Jason, that's a really interesting point. I think, like, when we're thinking about the meta-problem we're trying to get at, it's not necessarily that everyone all the time would use all the movement. But when movement is just a normal fact of life, you start putting yourself in a situation where all the problems around movement start to become obvious, at which point you solve them once and then you don't have to think about them.
A
Yeah, I mean, yeah, it's the same paradigm shift as when a node goes down, right? Kubernetes meant that you don't care when a node goes down, because it just automatically works over here. Now we're just leveling that up to the next level, yeah.
B
Yeah, I think as well, from a different angle, such a demo might resonate also for, you know, devtools or CodeReady Workspaces-interested people, because mainly that has been a problem — let's say, for example, with the sandbox, you have a number of workspaces that we have to spread among various clusters because of, you know, load, but what you want is to have a single URL to access them, without even having the knowledge that they are running on distinct clusters.
A
Yeah, and I've had similar conversations with folks in the Tekton context of, you know: if you run Tekton on top of kcp today, what do you really get out of it? Because you'll still have some — oh, excuse me — you'll still have some downtime while it reschedules everything over there. But especially in Tekton, like, as in CodeReady Workspaces, 10 seconds, 20 seconds, a minute of downtime is not going to really bother you. You're going to be happy that you didn't care that your cluster went down. So.
B
One thing that would make this demo, let's say, nearly complete, at least on the CodeReady Workspaces side, is PVs, because for now I'm just working with ephemeral workspaces, and so of course, if you move that, you just lose your work. But if it would be possible to set up something like, you know, the project that we discussed last week that mainly synchronizes PVs through rsync in the background, then that becomes really, really nice.
D
I think that's probably a... and I was thinking, as we were going here and talking, like, what are some of the phase three prototype demo goals, and definitely PV movement starts getting in there.
D
Maybe even — I was trying to think about this too, from the perspective of: what are the things that, if you could move them around quickly across different domains, or spread them across different domains cheaply, it would be easier? Like, we've definitely got a Knative or a Tekton job use case in phase three. Maybe there are some things outside of the Kube scope that we could also think about, right? Because if you can schedule — I don't know, maybe that falls into phase three, but I'm starting to get into questions of, like, if you could put... yeah.
D
I've got some ideas. I don't want to talk about them now, just because they're too out there, so we can come back to those later. Maybe we should start accruing a phase three list, though, with ideas that we generate in these meetings. Maybe we should just create a new issue for a phase three prototype and start adding notes there as comments.
A
To surface that — I don't know how, like, technically it is... it is relatively easy to move objects back and forth; you know, technically that's, you know, only a hard problem. The extra-hard problem is going to be putting cost controls in place, or having some way to even signal, you know, this is the cost, in general terms, of keeping these two things in sync, yeah.
D
And this gets a little bit into, like, conditions and the idea that objects have costs associated with them. We really haven't dealt with movement costs. Like, Kube preemption has some of the basic models of, effectively, the cost to preempt, and some of movement — movement kind of looks like preemption if you squint hard enough — but yeah, surfacing that and summarizing it. I would agree: anything that's not free you don't do by default, unless — I mean, creating service load balancers to run a Kube cluster, or creating a service with a load balancer...
D
We just need to think about an example of a pattern where we materialize the cost so that somebody else can take it into account, whether it's a human or a machine. Honestly, with PVs, like, a status field on a PV that tells you how much data there is — usage — is a little weird.
D
Maybe this is just a fancy way of saying that PV usage is representing some factor that we're just not taking into account, whether it's one like gravity or latency. Like, we haven't talked about speed-of-light modeling, and there's plenty of prior art there. But some of the assumptions of Kube definitely depend on homogeneity within the cluster and close proximity; we're breaking both of those, and then we need to model them, absolutely.
D
Types of movement, and where you move it to, would factor into that, but in theory it's a little bit like speed of light and latency — like, the cost to go down a pipe between two physical clusters is roughly proportional to speed of light, bandwidth and cost, yeah. Maybe there's a fundamental, like, weight times cost times movement — certainly, like, ingress routing rate: if you gain three milliseconds of latency moving someone across, or 30 milliseconds of latency moving someone from cluster A to B.
A
Yeah, so the missing link in my head was: if somebody is going to enable copying — that, you know, "make sure this PV is also available in this cluster as a backup" — before they turn that on, they would want to see how much data is in there, which roughly corresponds to how much cost it would be to move it there and back.
D
And roughly, if you're doing active-passive replication, there are two things, which is: how far are you behind in time, and how far are you behind in bytes? Whatever those translate to in your domain model is almost irrelevant. So it's something we can come back to. I think it's a phase three discussion, even if we maybe don't prototype it in phase three — phase three might be enough, the prototype might be enough, to trigger the problem, where we say: man...
D
We really should have a concept that, you know, brings into clarity spread decisions across clusters, and inter-cluster latency and inter-cluster bandwidth and inter-cluster cost. Kube doesn't model within-AZ or across-AZ costs, but they're there. You know, maybe there are some analogs that we could look at. Like, we have talked about — we're trying to help you model resiliency and failure domains with these chunks of homogeneous capability. Maybe a workload capacity pool, which is what we're talking about — the underlying object for a location — is really just a failure domain. Maybe that's what we should call it.
A
Yeah, I'm also trying to think of a test for that phase three thing. So, going back to a couple of points ago: right now, rescheduling basically only happens — or is only really planned to happen right now — when a cluster disappears, when a cluster dies or is unavailable, and that's like a "reschedule me" signal of weight 100, where you cannot be here, you must leave. And what we want is for there to be reschedule signals of weights between zero and 100, where you could say, you know...
A
...someone raises their prices, you should move; or, you know, this is acting slower than normal, you should move, but not urgently. You know, the cost of this being here has increased and might take you over a threshold that you care about and might trigger a move. I'm trying to think of simpler cases that we could simulate.
A
Is that simpler than detecting latency, which is pretty hard, and detecting cost and having some model of actual, like, you know, dollar cost? Is there something we could do for a test? I mean, we could just say, like, you know, cluster west's pricing cost now equals a thousand dollars a second, and watch your deployment move away, because, you know, too rich for my blood, or whatever.
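To make the reschedule-signal idea above a bit more concrete, here is a purely hypothetical sketch. None of these types or field names exist in kcp; they only illustrate a signal whose weight ranges from 0 (stay) to 100 (you must leave), compared against a tolerance the workload owner declares.

```go
package scheduling

// RescheduleSignal is a hypothetical signal a location (physical cluster)
// could report: weight 100 means "you cannot stay here", lower weights
// express softer pressure such as price increases or degraded performance.
type RescheduleSignal struct {
	Reason string // e.g. "cluster-unavailable", "price-increase", "slow"
	Weight int    // 0..100
}

// Placement is a hypothetical per-workload policy: how much pressure the
// owner is willing to tolerate before a move is triggered.
type Placement struct {
	StressTolerance int // e.g. "three units of stress"
}

// shouldMove returns true when the accumulated pressure from a location
// exceeds what the workload owner said they would put up with.
func shouldMove(p Placement, signals []RescheduleSignal) bool {
	total := 0
	for _, s := range signals {
		if s.Weight >= 100 {
			return true // hard eviction: the cluster is gone or unusable
		}
		total += s.Weight
	}
	return total > p.StressTolerance
}
```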
D
Yeah, costs don't change all that often, I mean, maybe. Like, I think this is a fundamental use-case problem of: why does someone — what does someone expect or want out of their application decisions? Like, what's the fundamental question we're trying to talk about? We're kind of circling around a couple.
D
One of them is, you know: treat individual failure domains — model individual failure domains — in a way that allows you to mostly ignore them, or to think about them as fungible, which is not the case today for Kube, it's not the case today for cloud regions, it's not the case for clouds, it's not the case for private clouds or on-premise data centers. Another one — I mean, and Jason, I think you're kind of circling around it — is, like, modeling the cost of resiliency, or the trade-offs.
D
I think it could be something important, or something that... I don't know if we can reduce it that trivially, but it's worth asking, it's worth, I think, investigating, which would be: how much does active-active cost me? And what do we mean by cost? It could be price, it could be latency, it could be user experience, it could be opportunity cost, right — running something active-active means running twice as much, which means you're running less.
D
I don't know that, like... ultimately, we've said a couple of things, like: we want to expose the true costs in a way that makes them — so you can plan around them. We want to expose resiliency so you can plan for it, right? If your dependency is only running in us-east zone one, there's zero purpose in running it outside of that; there's zero value in making it globally redundant.
D
...of that. Or, let's see, maybe the other way — it could be the other way around. Certainly a lot of people are transitioning from, like — if you watch the VM migration stories, people move from traditional workloads and VMs to containers; sometimes they go straight to functions. When you move to a function, you're also typically taking advantage of new capabilities like object storage or, you know, serverless DBs — a database accessed as an operational service, from the end user's perspective.
D
Maybe there are some other trade-offs there that we could talk about, like going from, you know, mean time to recovery as your resiliency strategy — which is the Kube default for a ReplicaSet, which is how quickly Kube can detect that the node failed and create a copy of it — then there's active-active or active-passive, depending on the use case, then there's sharded, which is how most people move up data stores. Like, everything today you have to plan for ahead of time, which is... there's an argument, like...
D
...an account or a GCP project — those are part of the factor of who can access what, but generally your services and your applications are composed of multiple of those components. So you're already kind of assuming someone sets those up securely. Kind of what we're talking about is stitching together that graph of all the connected components and trying to be an assister for that. So yeah, maybe prototype three needs to have something, I think, along these lines. It could be the resiliency trade-off, it could be the cost trade-off.
D
It could be the, you know... We need to do some more searching, and ask what problems people are facing that this abstraction — or what new abstractions — would help people bypass. Like Tekton: the easy one is, like, Tekton, Knative and CodeReady — we've already identified three very concrete use cases, which is: I really just don't want to care about the cluster.
A
Yeah, I think the evolution from — so phase two is just: be able to move when the cluster dies, right? Like, I actually watch things pick up and move when the cluster underneath them completely dies. The diff to phase three could just be, in addition to volume stuff — which I think, ignoring volume stuff for now — have it take into account some quantitative value and say, like: I am willing to put up with three units of stress, whether that's cost or latency or something, three...
A
...you know, stress bucks. And so then assign a value to the cost of it, and then show, you know: I've lowered the cost, I've 10x'd the cost of this cluster, or 0.1x'd the cost of this cluster, and watch it, you know, move — not because the clusters are down, but because the clusters are cheaper or more expensive.
D
I think you kind of hit it on the head, Jason. Like, it's the big lie with Kubernetes — which isn't a lie. It was... I don't know, everyone — I know we explicitly stated — I remember talking to Brendan, you know, very early, where he's like: a Kube cluster should only ever be one failure zone. And he's right, because a lot of the assumptions the clusters make depend on that — but the things that gave us... a lot less to say.
D
Let's take failure zones out of the picture. So demo two, or prototype two, is saying: how do we make the cluster — which is itself the thing that provides that — not a failure; or, we're not fated with the cluster. And then the second step is: is there the next example of either assigning a cost or assigning a trade-off, so that you're not fated to other types of problems, or fated to other failure domains?
A
I think the phase three will also be the smallest possible, like, toe into the huge arena of these cost evaluations, right? Like, step one is to be able to report these things to people; step two is to allow people to make policies based on the data we give them; and then automating it comes after that, right? So we might just say, like: alert — your thing just got more expensive, please click this button to save money. And then...
D
It's a great point, like, because Kube — even though Kube has now grown some of the "take resources into account throughout the life cycle of a cluster", it doesn't really, right? Like, most distributions don't turn on the descheduler by default, most clusters don't do automatic draining. You know, GCP finally got close — GKE has support for ephemeral nodes, but there are still, like, a bunch of problems with it that...
D
...for a stream on the life cycle of: what does it mean when a node fails before it has a chance to check in? Who's responsible for cleaning up that life cycle? So we're kind of adding the closure of the loop even seven years into Kube — and so, eight years? Is it eight years already? It's getting close. The closure of the loop on resources may just never happen.
D
The magic auto-scaling is something that's like a one percent kind of problem, whereas standardizing all your deployments, and being able to take advantage of that if you wanted, is a 99 percent problem.
A
Yeah, there's another scheduling input case that I think we could also address in phase three, which is available resources. So right now, if a cluster is available — you know, the API server is accessible — but there are no available resources to schedule anything, kcp, and even, like, the namespace scheduler, will gladly put stuff there that will never actually schedule.
A
So we need some pushback from the cluster that says, like: I'm here, and I'm not gone forever, but please don't give me more work right now.
A
Yeah, I'll just suggest some of these, yeah. I will do that in these notes and then I'll move it into an issue as we work on that. David, you also had a syncing corner cases item, I understand.
B
Yeah, yeah — maybe just... there was one point in the previous bullet discussion about the main challenge: status syncing and status ownership. You just mentioned it, Joaquim, previously.
B
Maybe you want to give more insights — go ahead, go ahead. Yeah, mainly in the case where it made sense, for the current workspaces or dev workspaces, we mainly just have to create an ingress on only one cluster, because it's not a case where we want, you know, load balancing or any stuff like that. We just want to have one cluster assigned to the workspace, with the workspace living on only one cluster. And so mainly, the current way was that...
B
We had to still create a derived ingress from the main ingress that is created by the workspace controller, and on this derived ingress then set the cluster label, so that finally, when it is synced back to kcp from the physical cluster, you get in the ingress status the host name or the IP that was set by the physical cluster, and then from this... this...
B
Yes, and mainly the problem that we saw is that it's quite painful to create derived objects, because this was mainly just messing with the initial, you know, external controller — the workspace controller — which mainly just did not expect a second ingress. And on the other hand, we wanted to maybe just set the cluster label on the main ingress.
B
But then you have a problem of ownership of the status, because the status will be set back by the syncer as being the one set by the underlying physical cluster. But then, from this status — which is mainly the status as viewed by the physical cluster layer — you want the kcp ingress controller to derive, you know, a sort of abstract status, the final status, which mainly will be the host that points to the Envoy proxy.
B
So mainly we have, in fact, two distinct statuses in the same, let's say, abstract resource, which is the ingress that was created by the consumer — the consumer controller, which is the DevWorkspace controller.
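A minimal sketch of the two-status problem described here, using hypothetical names (this is one possible shape, not what kcp actually does): the syncer-owned status reported by each physical cluster is kept separately, while the user-visible status on the kcp ingress is the abstract one pointing at the stable Envoy host.

```go
package ingressstatus

// syncerStatus is a hypothetical record of what a physical cluster reported
// for its leaf ingress; it is owned by the kcp syncer, not by the consumer
// (DevWorkspace) controller. One option discussed here is to keep it out of
// the normal status, e.g. in a kcp-only subresource or annotation.
type syncerStatus struct {
	Cluster string
	Host    string // host name or IP as set by the physical cluster
}

// userStatus is the abstract status the consumer controller and end users
// see: it points at the stable Envoy entry point rather than at any
// particular cluster.
type userStatus struct {
	Host string
}

// deriveUserStatus is what a kcp-side ingress controller could do: read the
// syncer-owned per-cluster statuses and publish only the abstract host.
func deriveUserStatus(globalEnvoyHost string, _ []syncerStatus) userStatus {
	return userStatus{Host: globalEnvoyHost}
}
```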
B
You think it should be visible to the end user? Because, I mean, to me that mainly relates to what we discussed last week. Typically, even if you add an annotation on the initial object that was created by the consumer controller — the DevWorkspace controller — typically this will, you know, this might create a new event in the consumer controller, because you changed the object — I mean, in a context that is initially owned by the consumer controller. And so, obviously, this might not be harmful for the controller if it does nothing when an annotation changes, but, I mean, theoretically you just changed the flow of the consumer controller, of the external controller, and it seems, you know, it will be a source of problems.
D
But even in the ingress case, you know — certainly, like, every time I've dealt with large-scale load balancing setups, having CNAMEs that point to specific subsets of the workload has been valuable, even if you have, like, a global load balancer.
D
So I could see there being scenarios where you might be interested in the CNAME of the underlying ingress in each chunk. I don't think that's the most important thing — I mean, it could point to a modeling problem. You know, there's an assumption — like, OpenShift Routes supported multiplicity of status; does Gateway assume there's one, or does it support multiple? Mostly, there are multiple possible names that you could be exposed under, or multiple possible ingress controllers that could expose you.
B
Yeah, I don't know — to be fair, I'm more concerned about, you know, the transparency of the whole pattern, the multi-cluster pattern — the transparent multi-cluster platform. I mean that if we mainly change the object in a way that is visible for internal purposes, in a way that is visible from outside, it seems that this somehow breaks the transparency, because this might conflict with the expectations of the external controller that is not aware of that, yeah, fundamentally.
D
An ingress can have multiple names, some of which are, like — today ingress exposes zero names, and what I was getting at before, like OpenShift Routes, you could have N names — zero to N names; with Gateway it's zero to one; with HTTPRoute and Contour it's zero to one. Those are kind of — that's almost a modeling question, which is: if you can have an ingress with multiple names, is the current shape of the API appropriate for representing that? And what do you... this is basically what you're asking: what...
D
...do you do when the modeling of a resource you would like to use transparently is not actually transparent? Yeah. And that might be things like — I mean, at some level, the syncer... nothing about the syncer necessarily says that the schema at the high level has to be the same as at the low level, although ideally the deviation is small, right? Like, we planned for some level of deviation between the high-level representation and the low level — you know, one percent to five percent. Is ingress something where a five percent deviation...?
B
And could it make sense to have something quite generic for this, with something like, you know, an additional subresource that we expose only to kcp components — the syncer and such — and that allows, you know, storing back the status as seen by, as sent back by, the physical cluster, but that would not be seen by normal clients as the status? And then some other controller, like the kcp ingress controller, which is also part of the overall kcp machinery, would be able to just do whatever it wants with it.
D
I mean, it could be — and this is an open question — it could actually be that the ingress controller is an example of where the ingress controller might point at the control plane, but it might also look at individual clusters, or have a component that's on individual clusters as well, and so it's not actually the syncer's responsibility to make that transformation.
D
It's something specific to that — we haven't really talked about that. So, as part of — so a couple of folks were asking, like, what topologies for controllers and patterns, and, like, since you kind of have the high-level control plane as a source of truth, and then you want to bring the low level down and have it be resilient, right? You want your sub control planes — whether those are clusters, or more kcp instances, or maybe, in the further future, something that runs on a node... If it doesn't, what would be our approach? And we should work through the example and say, like: okay, these are the fields we have to copy back; what would an end user expect, what would be the most resilient, and what would require the fewest number of deviations from the pattern? Maybe that's our rubric for what a good model is.
A
Yeah, yeah. So I remember having the conversation about modifying types and stuffing other information — like stuffing kcp information into them, adding fields or adding subresources — and I thought we all decided that was madness. And we could do it, like, we can modify the type however we want, we can do whatever we want with it, but it would require too much.
A
It's not technically difficult. It's, like, difficult for developers to get to that information, including us, because if you add a field to something and it's not in the Go type, then, you know, it's gone for you.
A
Pretty sure someone's not going to come for it, but that doesn't seem as hard as the, like, practical issue of having to modify each type and each Go struct to be...
B
And why not have some sort of virtual resource that would mainly just be available from the kcp side — you know, a virtual resource, "location status", that points to or is related to any resource existing at the kcp layer, and that defines the status? But then it's just a virtual resource. That means that in kcp we just intercept that and store the status sent back from the physical cluster in some way. I have no idea here, so...
D
That's an interesting point, David, because, like, what we're effectively saying is: there is a set of data that is owned by the syncer that multiple syncers may need to coordinate on. So what we were mostly talking...
D
...like, I've handed off life cycle responsibility in a kind of transactional way, and then, in our case, syncer two picks it up. In the Kube case, when a node goes into a terminal state, technically, like, the pod garbage collection controller takes over ownership, and at that point it's, like, a hand-off in time and space — so, like, you're handing off ownership.
D
This is a different problem, almost, which is: you are summarizing information from a syncer — or a part of a program running with the syncer, or another controller loop, or an adapter or strategy or whatever — that needs some higher-level coordination or summary, because it's part of a higher-level application model. I think what we should probably do is use this as a working example and walk through the options in detail and talk about what the trade-offs are, because this will form one of the patterns — which is, it's broader than a syncer pattern.
D
So, David — I mean, that's you and Joaquim, and I can help, and Jason if he wants to jump in — and we can go through some of the same... we can work through some of the same flow.
A
Yeah, I think this type of problem is going to exist for every resource, roughly. Like, for a split deployment, we want to be able to say this many replicas are ready over here and this many are ready over here. We need to stuff that information somewhere, right? Yeah, and once again, maybe...
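As a purely illustrative sketch of the split-deployment case just mentioned (hypothetical types, not kcp code): per-cluster replica counts live somewhere syncer-owned, and the user-facing deployment status is the sum.

```go
package splitdeploy

// clusterReplicas is a hypothetical, syncer-owned record of how many
// replicas of a split deployment are ready in each physical cluster.
type clusterReplicas struct {
	Cluster string
	Ready   int32
}

// aggregateReady sums the per-cluster counts into the single number the end
// user sees on the root deployment status ("10 of my 20 replicas are
// ready"), while the per-cluster breakdown stays available to kcp-side
// controllers that need it.
func aggregateReady(perCluster []clusterReplicas) int32 {
	var total int32
	for _, c := range perCluster {
		total += c.Ready
	}
	return total
}
```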
D
This is effectively where Federation v1 really struggled, which was: it's hard to do this. We have more tools — I think we said this in a lot of the earlier meetings — like, we have more tools at our disposal. The default option should be to not make it visible to the end user unless we have to, probably, because I think that's one of the sources of complexity in KubeFed, where KubeFed found itself having to change the definition of an object. We're trying not to change the definition of fields.
A
So I completely agree about not changing the definition of a field or the semantics, like, of the data, because that seems bad. I do want to push back on hiding things from the user by default. Like, I think, if I'm a user who creates a deployment or an ingress and it gets split across two clusters, there is no reason I shouldn't be able to figure out... like, don't...
A
Don't put that in the status where I just want to look at it like it's a regular single-cluster deployment — but I also don't want to have to ask super nicely, or go through some side door, to be able to get it. Sure.
A
No matter what you're aggregating and summarizing in that deployment status — absolutely right, I want to use the same dashboards, the same CLI, the same everything, to see that 10 of my 20 replicas are ready. But then, when I want to go see how many are over here and how many are over here, I don't want to have to...
D
Well, but maybe — maybe that's what I was actually saying — is: maybe "how many are over here and how many are over there" is actually the wrong problem in some cases, and we have to work through each example. But, like, just for deployment...
D
Maybe split types of deployments with different replicas is the wrong problem to solve with a single deployment object, because there's nothing that prevents you from creating two deployment objects, setting their scales individually, and then setting an affinity rule that prevents them from being scheduled on the same capacity pool. That may be a more effective mechanism — or introducing a new object — because, again, at the end of the day, if we do our jobs right, types become more fungible; people should feel like they can use the right tool for the job.
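A rough sketch of the alternative just described: two separate deployments kept apart by pod anti-affinity on a capacity-pool topology key. The topology key and labels are hypothetical; this only illustrates the shape such a rule could take.

```go
package placement

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// capacityPoolKey is a hypothetical topology key identifying the capacity
// pool (location / failure domain) a workload lands in.
const capacityPoolKey = "kcp.dev/capacity-pool"

// keepApartFrom returns an affinity rule for one deployment's pod template
// that prevents its pods from sharing a capacity pool with pods carrying
// the given app label (i.e. the sibling deployment).
func keepApartFrom(siblingApp string) *corev1.Affinity {
	return &corev1.Affinity{
		PodAntiAffinity: &corev1.PodAntiAffinity{
			RequiredDuringSchedulingIgnoredDuringExecution: []corev1.PodAffinityTerm{{
				LabelSelector: &metav1.LabelSelector{
					MatchLabels: map[string]string{"app": siblingApp},
				},
				TopologyKey: capacityPoolKey,
			}},
		},
	}
}
```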
D
Getting someone to a point where they're like: cool, I see the limitations, now I'll go use the Karmada globally distributed deployment object, which does let me do all this craziness, because I get this benefit — maybe that's kind of what... that's just what I meant, is, like: use the friction that we see as guidance to say, if we can't stick within the bounds, go only a little bit outside the bounds, stop, and then help someone get to that better option.
D
If we can imagine what the better option is — like, we want to be able to compose types more effectively. Like, part of the goal is: I can take a service and a deployment and compose that with service dependencies across globally distributed use cases and get the right outcome. I want to be able to compose a SaaS service, or a lambda, with a, you know, Kube-native app running in an AWS zone, composed with a Kafka topic — you know, a Kafka ingress topic that's fed across, like, three geographic regions.
D
What would that composition need? And so, like, today you could probably argue composing deployments and other types of deployments is too hard. If we fix that — or we make it so that you can use a deployment, or a global deployment, or a global sharded deployment, just as easily, you know, from Tekton or from tools — maybe we've succeeded by going around the problem and saying: well, composability is the real goal.
D
The question, though, is, like: how many — what percentage of all deployments created in all Kube clusters would be candidates for a mixed split versus an even split? And so, like, if you say you can do an even split transparently, then effectively what you've done is you've said: well, I've solved it — 99 percent will be fine with even, and one percent want two. Hey, one percent...
D
You have a pattern that can work for you; we're just not focused on anything other than the 99 percent of workloads, yeah.
D
Splitting — but even auto-scaling would be, like, if you've got two different auto-scalers... Maybe, to take a step back, even: does HPA summarize status effectively? What other things would summarize? You know, with HPA there's always going to be a problem that mapping one to N doesn't work in Kube very well, right? Like, we've already talked about adding indexed jobs because of problems like that; StatefulSet is adding fields; StatefulSet has — we've discussed adding subfields that break up the rules.
D
If we can get those rules in the base types, that means the use case is amenable to it, right? Like, would someone who's using a deployment today on a single Kube cluster, who's trying to do canaries, actually prefer to have a deployment strategy that ends up creating a replica set of canaries and drives it like that? That's hard to do through Kube today; it's not extensible. But there's the argument — there's the counter-argument — which would be, like: well, if you really want to go fix it, if that's really the problem, go switch it.
D
If all you had to do to add a new global deployment type was, you know, just fire up this simple integration against your kcp server, and you got magic geo-distributed deployments which can compose nicely, there might be a lot of people who switch to it, who'd be perfectly happy to use that instead of a raw deployment. The switch cost just has to be low, and that means keeping the concepts close enough, but they don't necessarily have to be identical.
A
Right, that sort of — that reminds me of hints of Knative, where, like, Knative...
D
If we can — that is maybe investing... maybe not part of prototype three, or maybe it is; maybe there are just two tracks in prototype three, which is: we're trying to help people bring together a large set of APIs. What would be the first example? And the first example might just be a more advanced case of what happens when a Kube primitive that makes sense in a single cluster no longer makes sense in an existing cluster. Maybe ingress isn't the best option for it, but maybe there's a global ingress object.
D
David, do y'all feel like you could work through this? And then maybe I can give you an example of kind of the style we were using — you guys have seen the stepping through the application doc — if we can create an example of that, for how we coordinate, and draw out the problems and the different approaches.
A
So we only have five minutes left, but I just lost it — what was I going to talk about? Oh, the... having to require, like, opt-in for global deployment would also disrupt some of the controller use cases we're talking about, where we want to say, like: when you install Tekton on kcp...
A
...it will detect it's a controller, it will take your regular deployment — this goes to David's transparency point — like, if we get a deployment object and all we know how to do is global deployments, do we upgrade them transparently? Like, do we get transparency back by just upgrading them: deleting your deployment, creating a global deployment instead, and going from there?
D
I will be at KubeCon. If there's anybody who will also be at KubeCon and is on this call or listening to this, feel free to reach out via the kcp channels. We might — I might actually suggest, if there are people who are in person, I may do something informal. We don't have any lightning talks planned. I plan...
D
...to do the hallway track and go around to people who have interesting use cases, who've talked to me. So I may — I'll send something out to the kcp-dev list about, you know, come find me at KubeCon, or we'll try to find a meetup, if anyone's interested in chatting about different use cases and things that we may not be focused on but people would be interested in.
A
Wednesday to Friday, and I'm giving a talk Thursday. So also, if anyone is hearing this and wants to grab me, you have Wednesday to Friday. I will, for now, I guess, assume we are going to have a kcp meeting next week, although I might cancel it because — oh my god — KubeCon preparation and stress and crazy. So, yeah.