From YouTube: 2021-06-08 Community Meeting
B
So this is the June 8th meeting. This is recorded. There's a couple of topics. Jason's going to be a little bit late; his topic was what we brainstormed yesterday about some next steps for the prototype and what we're aiming for, and he was going to cover that and the lead-in to it. So, Devin, you had a question about external etcd and kind. Do you want to go over that?
C
Yeah, I was just curious if that's something that would be an acceptable PR to put up, for just being able to connect to an external etcd cluster. And as a follow-up, I was curious if there's any known reason that kind would not work, if we were to try to do that.
B
So, on the first question, I feel like in the last meeting we definitely agreed that having a couple of flags that allow you to bypass the embedded etcd and connect to an external one, like the minimal set of flags, was something we all thought would be reasonable. Okay, good. For kind, there's no reason it couldn't work. I'm a little hesitant because we've kind of said a couple of times that we think you should be able to, and there's no reason right now it wouldn't; it's not a direct primary prototype use case, but we would absolutely take it. And I actually have this going on right now: I've been fleshing out a little bit more of the API server use cases, and I've recorded, you know, that the minimal API server should be interfaces that let you start a kube-apiserver and inject your code, things I have written.
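[Editor's note: a minimal Go sketch of the flag idea discussed above. The flag names (--etcd-servers, --etcd-data-dir) are invented for illustration and are not kcp's actual flags; the etcd client (go.etcd.io/etcd/client/v3) and embedded-server (go.etcd.io/etcd/server/v3/embed) APIs are the real ones. The point is that the rest of the server sees the same etcd client either way.]

```go
package main

import (
	"flag"
	"log"
	"strings"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
	"go.etcd.io/etcd/server/v3/embed"
)

func main() {
	// Hypothetical flags for illustration; not kcp's actual flag names.
	endpoints := flag.String("etcd-servers", "", "comma-separated external etcd endpoints; empty = use embedded etcd")
	dataDir := flag.String("etcd-data-dir", "data", "data directory for the embedded etcd")
	flag.Parse()

	if *endpoints == "" {
		// Default path: run an embedded etcd and point the client at it.
		cfg := embed.NewConfig()
		cfg.Dir = *dataDir
		e, err := embed.StartEtcd(cfg)
		if err != nil {
			log.Fatalf("start embedded etcd: %v", err)
		}
		defer e.Close()
		<-e.Server.ReadyNotify()
		*endpoints = "http://localhost:2379" // embed's default client URL
	}

	// Either way, everything downstream only ever sees an etcd client.
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   strings.Split(*endpoints, ","),
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		log.Fatalf("connect to etcd: %v", err)
	}
	defer cli.Close()
	log.Printf("using etcd at %v", cli.Endpoints())
}
```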
B
Actually
I
don't
have
it
pushed
as
a
pr
yet,
but
I
was
kind
of
writing
down
some
of
the
examples
folks
have
given
them
things
they'd
like
to
stub
so
today
to
stub
the
cube
api
server.
You
gotta
cut
a
bunch
of
stuff
out
and
you
can
start
layering
it
that
layering
is
pretty
complex.
It's
somewhere
between
yeah,
I
probably
say
to
realistically
build
a
cube
api
server
that
does
a
bunch
of
the
stuff.
B
...you've got four or five different levels of abstraction of interface injection, and then you're cobbling together a bunch of stuff that doesn't really want to cooperate. I'm starting to think about how we would go down this path in kube and what the cut lines would look like. We've done a couple of experiments like this in the past, but effectively, the list I have is: with moderate boilerplate, 50 to 100 lines of code, I can start a kube-compliant API server.
B
I
can
add,
or
I
can
bring
in
my
own
custom
types,
which
would
just
be
like
you
know,
standard
built-in
cube
types
similar
to
what
you
do
in
an
aggregated
api
server.
So
that
means
we're
using
like
the
registry
and
bulls
registering
it
into
api.
Installer
I'd
be
able
to
do
just
crds,
so
I
could
cut
all
everything
but
crds
out.
B
I'd be able to aggregate API servers. I'd be able to reuse all the quota, rate control, priority and fairness stuff; that's pretty straightforward, but some of it is coupled to the fact that it is a kube-apiserver. So if you wanted to hard code some of those, or have a different implementation, you'd effectively be dealing with, I don't want to say the rat's nest, but it's the rat's nest of the kube-apiserver start. So: clean cut lines where you could layer types, make a choice, and figure out the cut lines if you wanted to replace some of the deeper interfaces. For deeper interfaces, the example I was thinking of was storage. If you wanted to replace that, the kube interface that exists today for it is the etcd storage interface. It's designed to work with etcd, but it's kind of been shown that you can do it with others; I've seen a couple of prototypes, and I've spent time looking at what it would take.
B
You
don't
actually
have
to
support
watch
technically
on
the
server
side
in
order
to
make
an
informer
work.
You
just
have
to
have
the
right
approximate
behavior,
so
there's
some
there's
some
things
that
we
could
do
there
to
improve,
but
yeah
the
storage
interface
would
be
one
possible
stub.
Our
back.
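[Editor's note: an illustrative sketch of the "right approximate behavior" remark: an informer only needs a list plus a stream of add/modify/delete events, so a storage backend with no native watch can fake one by re-listing and diffing. This is plain Go for illustration, not kcp or apiserver code, and it ignores resource-version semantics.]

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// Event is the minimal shape of a watch event an informer needs.
type Event struct {
	Type string // ADDED, MODIFIED, DELETED
	Key  string
	Obj  string
}

// pollWatch approximates a server-side watch for a backend that can only
// list: it re-lists on an interval, diffs against the previous snapshot,
// and emits synthetic events.
func pollWatch(list func() map[string]string, every time.Duration, out chan<- Event) {
	prev := map[string]string{}
	for {
		cur := list()
		for k, v := range cur {
			if old, ok := prev[k]; !ok {
				out <- Event{"ADDED", k, v}
			} else if old != v {
				out <- Event{"MODIFIED", k, v}
			}
		}
		for k, v := range prev {
			if _, ok := cur[k]; !ok {
				out <- Event{"DELETED", k, v}
			}
		}
		prev = cur
		time.Sleep(every)
	}
}

func main() {
	var mu sync.Mutex
	state := map[string]string{"configmap/a": "v1"}

	// list returns a consistent snapshot of the backend under a lock.
	list := func() map[string]string {
		mu.Lock()
		defer mu.Unlock()
		snap := make(map[string]string, len(state))
		for k, v := range state {
			snap[k] = v
		}
		return snap
	}

	events := make(chan Event, 8)
	go pollWatch(list, 20*time.Millisecond, events)

	fmt.Println(<-events) // {ADDED configmap/a v1}
	mu.Lock()
	state["configmap/a"] = "v2"
	mu.Unlock()
	fmt.Println(<-events) // {MODIFIED configmap/a v2}
}
```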
B
This
is
good
because
I'm
not
reviewing
my
dock
as
I'm
going
I'll
put
this
up
later.
Maybe
I
didn't
that's
at
the
top.
Our
back.
B
Emission
control,
potentially
http
middleware,
the
core
cube
controllers,
like
there's
a
couple
of
them
that
do
things
like
register
themselves
in
ncb
with
least
objects
so
that
you
can
get
you
know,
service
implementation
and
know
what
the
api
servers
are.
That's
almost
like
an
optional
bit
today.
That's
all
deeply,
coupled
with
package
master,
so
just
trying
to
get
like
enough
to
where
I
can
start
sketching
out,
custom
requirements
and
then
the
doc
would
then
I'd
start
to
propose.
B
Maybe
some
interfaces
but
yeah
like
today,
dev
and
I'd,
say
you
could
stub
it
out
if
you
wanted
to
in
your
own
fork,
I
don't
know
we
want
to
merge
it
yet,
but
if
some,
if
there's
enough
people
who
wanted
to
play
around
with
it-
and
we
wanted
to
use
it
as
an
example
of
what
the
interfaces
would
need
to
be,
I
think
that's
worth
it.
I
do
think
all
right.
B
I
think
that
that's
not
completely
out
of
bounds,
I
probably
want
to
say
like
it,
should
have
a
purpose,
that's
connected
to
one
of
the
the
the
goals,
and
that
could
be
like
the
high
scale,
logical
clusters,
for
instance,
which
we've
talked
about
and
it'd
be
good
like
if,
if
somebody
wants
to
go,
do
that
prototyping
and
try
that
out
to
come
up
with
a
really
concrete
use
case?
B
That
is
the
reason
for
it
and
tie
it
to
like
one
of
the
investigations,
and
then
we
can
like
use
that
as
the
avenue
for
exploring
it
like.
I
know
you
and
I
were
talking
about
the.
What,
if
I
wanted
to
have
a
one
million
ncd
objects,
would
you
be
able
to
use
the
controller
pattern
against
it?
What
kind
of
controllers
would
you
run
against
it
or
you
wouldn't
want
to
run
controllers
with
it,
but
you
still
want
the
declarative
story.
B
What
does
what
does
that
do
for
the
model?
That's
like
a
really
good
meta
discussion
that
I
think
you
know
as
we
get
crisper
about
what
use
cases
like
a
kcp
like
thing
or
control
planes
might
want.
That's
gonna
be
a
good
one.
There's
another
open
question.
This
was
brought
up
yesterday,
actually
by
somebody
talking
about
the
same
topic
was.
B
If
we
have,
if
you,
if
you
have
like
a
kcp
kind
of
like
control
plane,
but
you
want
a
system
of
record
that
integrates
well,
maybe
we're
thinking
about
it
wrong
in
that
the
kcp
layer
isn't
necessarily
the
is
the
source
of
truth.
But
it's
not
the
high
scale
layer,
the
high
scale
layer
as
a
service
and
instead
of
making
you
know
like
instead
of
us
figuring
out
how
we
go.
B
Make
controllers
work
against
super
high
scale,
cube
api
servers
like
10
million
or
100
million
objects.
We
instead
flip
it
around
and
say
here's
how
you
can
get
a
controller
from
a
whole
bunch
of
different
control
planes
and
here's
like
the
controller
pattern
that
you'd
use,
where
you
have
a
system
of
record
for
millions
of
objects
or
something
where
you're
not
pulling
the
whole.
Like
so
each
say
you
have
a
hundred
thousand
little
control
planes
sharding
different
parts
of
your
problem
domain.
B
Could
you
pull
all
those
together
and
use
that
as
an
input
for
a
controller
that
also
is
talking
to
different
backend
apis?
That
might
actually
have
millions
or
tens
of
millions
of
objects
without
having
to
pull
all
of
them
in
memory
so
like
that
could
be,
I
would
probably
say,
that's
under
logical
clusters,
because
logical
clusters
is
the
only
one
right
now
that
talks
about
sharding,
really
that's
yeah,
so
we
talk
about
sharding
in
there.
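[Editor's note: an illustrative sketch of the fan-in idea just described: a controller consuming keys from many small control planes through one merged stream, carrying the shard of origin with each key rather than holding every object in memory. All types and names here are invented for illustration; this is not kcp code.]

```go
package main

import (
	"fmt"
	"sync"
)

// Event pairs an object key with the control plane (shard) it came from, so
// a worker can route reads and writes back to the right shard.
type Event struct {
	Shard string
	Key   string
}

// fanIn merges the key streams of all shards into a single channel that a
// controller's workers can consume.
func fanIn(shards map[string]<-chan string) <-chan Event {
	out := make(chan Event)
	var wg sync.WaitGroup
	for name, ch := range shards {
		wg.Add(1)
		go func(name string, ch <-chan string) {
			defer wg.Done()
			for key := range ch {
				out <- Event{Shard: name, Key: key}
			}
		}(name, ch)
	}
	go func() { wg.Wait(); close(out) }()
	return out
}

func main() {
	a := make(chan string, 1)
	b := make(chan string, 1)
	a <- "ns1/obj1"
	b <- "ns2/obj2"
	close(a)
	close(b)

	for ev := range fanIn(map[string]<-chan string{"shard-a": a, "shard-b": b}) {
		fmt.Printf("reconcile %s from %s\n", ev.Key, ev.Shard)
	}
}
```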
B
So
maybe,
if
you
want
to,
if
you
go
down
that
path,
if
you
do
it,
if
you
think
about
it,
trying
to
figure
out
a
way
to
frame
the
problem,
you're
trying
to
solve
in
in
one
of
the
investigations
as
a
symbol
would
be
really
helpful.
C
Okay, thank you, yeah. It was mostly something that we wanted to play with; we're going to try to set aside a little time to experiment. I don't know how we'll find enough time, but we're going to try. So that's good, thank you. That doc sounds interesting, yeah.
B
It keeps coming up. There's nothing really that talks about sharding use cases, of how you'd move data, and I think it comes down to what types of APIs make sense as declarative, config-based APIs and what types don't, and even a list of some examples in both columns. So: why are my services on a kube cluster a list? Why are my deployments a list? Because most people have a small number of deployments.
B
Conversely,
the
example
we
were
using
before
was
like
identity
management
and
a
company.
I
have
millions
of
group
user
bindings.
It's
not
really
a
great
model
to
put
into
a
cube
thing,
but
you
can
you
just
know
that
you'll
hit
scale
limits
at
some
point
either.
We
would
say:
oh
well,
this
just
isn't
a
problem
that
fits
because
no
one's
doing
declarative
user
mapping
they're
doing
that
through
ldap
or
through.
B
And yeah, Devin, if you want to tackle the external etcd thing, I think everybody's in favor of it. Okay, cool. Okay, then the next topic. Jason, now you've shown up, do you want to cover the summary that you were going to give of our chat yesterday?
D
Yeah, yeah, apologies in advance if you've already talked about any of this. We talked yesterday about sort of next steps, next demos. I think I have decided, rather than try to go very general right away...
D
I
think
it
would
be
useful
to
get
deployments
working
and
moving
demon
sets
working
and
moving,
maybe
one
or
two
or
three
other
types
to
get
sort
of
one
types
that
we
know
will
have
different
different
multi-cluster
scheduling
characteristics
and
then
from
there
try
to
generalize
to
say
like
oh,
this
is
the
pattern
we
would
want
to
use
for
this
type
of
cod
thing,
or
this
is
the
type
of
thing
we
want
to
be
the
default
strategy.
D
When
somebody
doesn't
tell
us
one
really
useful
thing
that
came
up
in
that
talk
that
we
had
yesterday
was
so
I
had
I've
been
hitting
a
wall
trying
to
get
the
dependency
that
the
detected
dependencies
of
objects
so
like.
If
I
have
a
pod,
instead
of
just
randomly
scheduling
it
schedule
it
to
where
its
service
account
is
scheduled
to
where
its
volumes
are,
or
at
least
schedule
them
all
together
that
got
really
complex.
D
Talking
about
our
back
rolls
and
roll
bindings
and
cluster
rolls
and
cluster
bindings
cluster
role
bindings,
especially
because
cluster
roles
and
cluster
role
bindings
are
cluster-wide,
as
you
might
expect
from
the
name
and
that
got
kind
of
messy
to
think
about.
So
instead,
I
think
we
decided
we're
just
going
to
say
for
now
at
least
cluster
roles
and
cluster
role.
D
Bindings
are
not
in
scope
that
probably
cuts
out
a
lot
of
well,
a
certain
number
of
controller
use
cases
for
now,
like
you,
wouldn't
be
able
to
give
kcp
a
controller
workload
and
have
it
transparently,
multi-clusteredly
scheduled
if
it
expects
to
be
able
to
talk
back
to
that
kcp
and
talk
to
things
across
namespaces.
D
That's
not
going
to
work,
but
I
think
that's
a
useful
d
scoping
to
get
to
be
able
to
unblock
us
to
be
able
to
make
other
progress,
and
then
we
can
come
back
to
how
to
write
a
controller
against
kcp
a
a
multi-name
space,
a
cross,
a
cluster.
What
is
a
cluster
anymore,
but
a
cross
namespace
wide
controller
against
kcp
probably
belongs
running
directly
against
that
kcp
from
external
to
it.
D
Instead
of
giving
the
deployment
describing
that
controller
to
kcp
having
it
schedule
it
to
a
cluster
or
a
location,
whatever
we
call
it
and
then
having
it
talk
back
to
kcp
to
try
to
schedule
other
stuff,
I
I
don't
think
that's
going
to
be
the
path
we
go
forward
with,
so
that
the
scoping
and
prioritization
should
unblock
some
progress,
which
will
be
good,
and
I
think
we
have
some
vague
sort
of
seeds,
of
ideas,
of
how
to
write
controllers
or
how
to
run.
B
And I'm going to note these at the end of the transparent multi-cluster doc as possible simplifications. The other one that I was thinking of is this.
B
The other thing was scheduling policy. Let's think about the simplest possible policy, which is a global default, maybe a per-logical-cluster one, and then we know that we might want one per namespace. We could imagine there being an equivalent of that for the namespace, which, in the example use case we were giving, would be: a Tekton pipeline might have one namespace that's targeting your dev clusters, one namespace targeting your stage clusters, and one namespace targeting your two prod clusters.
D
Yeah, and even the simplification of saying, let's have whole namespaces go to certain clusters... it sort of isn't as flashy and exciting as the demo of giving it a deployment and having it split across two clusters, but it would be useful in a lot of cases and relatively easy to build.
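[Editor's note: a toy Go sketch of the "whole namespace goes to one cluster" default strategy described above, with a global default plus per-namespace overrides for the dev/stage/prod targeting example. The Policy shape is hypothetical, not a kcp API.]

```go
package main

import "fmt"

// Policy is an invented shape: a global default cluster plus optional
// per-namespace targets.
type Policy struct {
	Default   string            // cluster used when nothing more specific matches
	Namespace map[string]string // namespace -> target cluster
}

// placeNamespace keeps all of a namespace's objects co-located: everything in
// the namespace lands on one cluster, which sidesteps per-object dependency
// detection (service accounts, volumes) entirely.
func placeNamespace(p Policy, ns string) string {
	if c, ok := p.Namespace[ns]; ok {
		return c
	}
	return p.Default
}

func main() {
	p := Policy{
		Default: "cluster-1",
		Namespace: map[string]string{
			"pipeline-dev":   "dev-cluster",
			"pipeline-stage": "stage-cluster",
			"pipeline-prod":  "prod-cluster",
		},
	}
	for _, ns := range []string{"pipeline-dev", "random-app"} {
		fmt.Printf("namespace %q -> %s\n", ns, placeNamespace(p, ns))
	}
}
```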
E
I have a couple of questions on that, to continue with the Tekton example. In the case of Tekton, I understand how and where the pods and deployments are scheduled, but where is the actual CRD?
D
No, so when you install Tekton on the kcp, when you tell kcp about Tekton: Tekton is just some CRDs, a controller that knows what to do with them, and webhooks (asterisk: webhooks, everyone hates webhooks). So when installing Tekton you would say, kcp, here are the CRDs I want you to know about, and it could either pass those on, you know, store them itself and pass them down to some clusters, or it could just hold on to them itself.
D
I
think
we
don't
need
it
to
go
down
to
the
cluster,
because
the
controller
running
against
that
kcp
would
watch
for
new
task
runs
when
it
sees
a
new
task
run,
it
would
create
the
equivalent
pod.
It
would
tell
that
kcp,
here's
a
pod
and
kcp
says
oh,
I
know
I
know
what
to
do
with
pods.
I
send
pods
down
to
a
random
cluster
where
it
executes
and
tells
me
it's
status.
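[Editor's note: a simplified Go sketch of the flow just described: a controller watches kcp for new TaskRuns and hands plain pods back to kcp, which owns placement. TaskRun, Pod, and Client here are invented stand-ins, not Tekton's or Kubernetes' real types.]

```go
package main

import "fmt"

type TaskRun struct{ Name, Image string }
type Pod struct{ Name, Image string }

// Client is the one API surface the controller needs: it reads and writes
// kcp only, never the physical clusters.
type Client interface {
	CreatePod(Pod) error
}

// reconcile translates a TaskRun into its equivalent Pod and gives it back
// to kcp; kcp's scheduler/syncer then moves the Pod to some underlying
// cluster and reports status back up.
func reconcile(c Client, tr TaskRun) error {
	return c.CreatePod(Pod{Name: tr.Name + "-pod", Image: tr.Image})
}

// printClient is a stand-in client that just logs the write.
type printClient struct{}

func (printClient) CreatePod(p Pod) error {
	fmt.Printf("create pod %s (image %s) in kcp\n", p.Name, p.Image)
	return nil
}

func main() {
	_ = reconcile(printClient{}, TaskRun{Name: "build-1", Image: "golang:1.16"})
}
```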
B
Today, most people assume that the inputs and the outputs are the same, and they only have one kubeconfig. The right outcome, probably for almost everyone down the road... like, you know, the service load balancer as a controller has inputs from the cluster and an output to a cloud API. So we already have controllers that talk to two different types of APIs. Another way of thinking about it would be: where the controller runs is irrelevant.
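[Editor's note: a Go sketch of the "inputs and outputs need not be the same cluster" point: the controller takes one client for where it reads intent and another for where it writes results, so which cluster is which becomes wiring rather than controller logic. The interfaces are invented for illustration.]

```go
package main

import "fmt"

// Reader is where the controller's inputs come from (one control plane).
type Reader interface{ ListDeployments() []string }

// Writer is where its outputs go (possibly a different control plane).
type Writer interface{ CreateReplicaSet(name string) }

// syncOnce reads high-level intent from one place and realizes it somewhere
// else; pointing both at the same cluster recovers today's behavior.
func syncOnce(in Reader, out Writer) {
	for _, d := range in.ListDeployments() {
		out.CreateReplicaSet(d + "-rs")
	}
}

type fakeIn struct{}

func (fakeIn) ListDeployments() []string { return []string{"web", "api"} }

type fakeOut struct{ cluster string }

func (f fakeOut) CreateReplicaSet(name string) {
	fmt.Printf("create %s on %s\n", name, f.cluster)
}

func main() {
	syncOnce(fakeIn{}, fakeOut{cluster: "cluster-east"})
}
```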
B
The
controller's
inputs
come
from
one
cluster
or
setup
clusters
and
the
outputs
go
to
a
set
of
clusters
or,
and
all
that
and
the
the
mapping
there
is
mostly
a
client
problem
done
right
and
so
like
the
sinker
is
an
example
of
that,
but
another
example:
a
deployment
controller,
there's
no
reason.
The
deployment
controller
has
to
target
the
same
cluster
if
the
other
assumptions
that
a
deployment
like
when
it
creates
replica
sets
as
an
example.
B
Even
though
I
don't
think
we'd
do
this,
you
could
have
a
deployment
controller
looking
at
deployments
on
one
cluster
creating
represents
on
another.
That's
not
really
interesting
today,
because
the
clusters
are
the
same,
but
the
moment
you
could
say
well,
I'm
actually
taking
my
high
level
intent
here,
realizing
his
lower
intent.
Suddenly,
like
the
mental
model
where
controller
runs
is
irrelevant,
but
the
two
different
like
inputs
and
outputs
being
different,
starts
being.
D
...really powerful. So in Tekton's specific case, its input is TaskRuns and its output is pods, but just to be clear, the controller talks to kcp for both of those; it's not responsible for... this is a question phrased as a statement, sorry if that's confusing, I want to make sure we're all on the same page. So Tekton running against kcp would hear about new TaskRun objects coming in, would translate each into a pod, and would tell kcp about that pod; and kcp, the scheduler and syncer magic, would be responsible for pulling it down to a cluster. It's not Tekton's controller's responsibility to identify the clusters and send them pods and do the...
B
Is
making
it
so
that
other
objects
don't
have
to
know
about
clusters
most
of
the
time
so
like
batch
frameworks
that
want
to
run
like
if
you
have
a
batch
definition,
that's
run
10
billion
of
something
good
luck
running
that
on
a
single
cluster.
B
But
for
instance,
you
can
imagine
moving
that
up
to
the
definition
of
the
kcp
level,
and
then
your
batch
controllers,
saying
things
like
run
a
hundred
thousand
jobs
on
this
cluster.
Oh
the
way
that
I
do
that
is,
I
create
maybe
a
job
object
that
we
copy
down,
or
I
create
a
second
crd,
which
is
like
my
sharded
task
run
and
then
a
custom
controller
then
goes
and
says:
oh
well,
I
can
reuse
some
of
the
same
input.
Qcp
knows
about
but
I'll
leverage
kcp,
where
it's
good
and
then
I'll
do
the
rest
myself.
A
Right, in fact, the border between application-level objects and physical-cluster-level objects can move quite a bit. You can just define your own level of what you want to keep at the application level inside kcp and what you want to send to the underlying physical cluster layer. I mean, everyone's going to have an application layer.
B
Right
and
so
like
what
that's
the
best
part
about
this
is
that
we
can
imagine
like
the
simplistic
which
is
just
transparent,
cube
objects.
There's
the
next
step,
which
is
you
start,
focusing
only
on
the
high
level
objects
and
there's
the
next
level,
which
is
you
say?
Oh,
I
could
design
objects
that
are
explicitly
designed
to
work
as
like
intent,
which
is
the
cube
model,
and
I
don't
really
have
to
get
too
worried
about
multi-cluster.
How
could
I
use
that
yeah?
That's
like
where
we're
kind
of
just
starting
to
really
scratch
the
idea,
bad.
D
You shouldn't have to know that you're talking to a kcp that is multi-cluster. If you know that you are, and you want to take advantage of that, you should be able to do something, but I definitely don't want there to have to be a multi-cluster version of the Tekton controller specifically. It should be able to run against kcp, and kcp makes it multi-cluster. That's the multiplier effect we're looking for: something with...
B
Transparent
multi-cluster,
that
is
an
actual
multiplier
versus
that
you
just
got
to
go,
do
more
work
to
get
more
work,
so
it
would
be
if
we
can
use,
we
can
leverage
existing
abstractions
pods
deployments
services
if
we
can
cheat
and
bend
the
rules,
logic
in
the
sinker
or
if
we
can
selectively
bring
in
a
small
set
of
levers,
widgets
policies
around
those
that
let
you
do
a
very
cheap
tweak
and
suddenly
you
get
more
powerful,
that's
what
we're
aiming
for
with
transparent
multi-cluster.
B
If
it
doesn't
multiply,
and
then
I
think,
like
the
thing
would
be
down
the
road
if
we
do,
if
we
succeed
really
well
at
getting
transparent
multi-cluster,
I
firmly
believe
people
just
write
controllers
against
the
kcp
level,
because
you'd
say
like
hey,
go,
get
me
all
of
the
objects
that
are
that
I
should
have
access
to
across
all
logical
clusters,
and
I
don't
really
care
which
logical
cluster
they
come
from.
I
just
need
to
do
a
little
bit
of
a
look
up
and
map
them
to
the
right
place
or
map.
B
What
devin
was
talking
about
my
global
api,
for
you
know
user
role
bindings
that
once
we
get
to
the
point
of
like
having
enough
use
cases,
those
other
use
cases
come
more
naturally,
because
then
you're
like
well.
If
I
have
5500
applications
and
I
need
to
go
check
the
security
rules
that
apply
to
the
network
policies
of
all
those
objects.
Oh,
I
could
write
a
controller
that
then
says
calculate
the
network
policy,
for
that
would
subdivide
these
calculate
the
visualize
that
come
up
with
a.
A
Yes, so there are two levels here as well, if I understand correctly. You spoke initially, Clayton, about having a controller work against one logical cluster. This would be the typical case, where it would be transparent at first; I mean, where existing controllers would be able to work exactly the same way with kcp, pointing to a single logical cluster.
A
But
then
you
have
additional
use
cases
like
being
using
one
controller
to
watch
every
logical,
cluster
or
number
of
logical
clusters,
and
then
we
start
adding
additional
use
case.
That
would
probably
require
you
know,
specific
implementation,
which
are
really
kcp,
related
implementations
or
kcp
specific,
but
but
for
the
typical
use
case,
where
you
just
point
to
a
given
logical
cluster,
I
don't
see
why
controllers
could
not
just
point
to
kcp
yeah.
B
And
I
think
it'll
be
like
conceptual
stuff
right.
So,
like
an
example,
I
was
trying
to
think
of
examples
that
I'd
write
down
so
like
in
openshift.
There's
the
machine
api,
the
machine
approver
controller,
which
looks
at
node
objects
and
compares
them
to
machine
objects.
Those
are
busting
cluster
scope,
but
it
only
makes
sense
because
the
output
of
it
is
approving
a
csr.
B
So,
like
you
know,
you
create
a
machine
which
results
in
a
vm
being
created
that
comes
up.
Cubelet
regis
creates
the
node
object,
creates
a
creates,
a
csr
request
and
then
the
approver
says
like
okay,
I
see
that
you're
a
node.
I
know
that
you
were
created
by
this
machine,
the
ip
you're
coming
from
matches,
and
also
this
you
know,
whatever
data
I
carried
through,
that
I
can
check
in
the
system
it
matches.
That
controller
then
says:
okay
go
see.
B
Yet this is the one where it's like the transparent pass-through, or the merging. I'll get the constraint added to the doc: I'll say something like 95, or 75, percent of application controllers, when targeted at kcp, just work, and I'll use this one as an example, which will help constrain where we go with a second or third prototype step. But we don't want to do it...
D
I guess... did I miss the part where we talked about service and traffic routing?
B
Yeah,
so
that's
so
joaquin
jason-
and
I
were
talking
about
this-
you
had
done
the
demo
a
long
time
ago.
Two
clusters
both
have
ingress,
they
both
self-registered
and
we
just
showed
like
simple
traffic.
We
called
this
out
of
one
of
the
issues
I
don't
know
which
one
it
was.
I
think
you
opened
it
is
that
correct,
and
then
we
know
that
kind
of
the
first.
B
So
what
we
were
saying
yesterday
was
the
first
step
is:
could
you
move
a
deployment
from
cluster
to
cluster
and
then
the
next
step
would
be?
Could
the
traffic
flowing
to
that
from
outside
the
cluster
hit
that
and
so
that's
a
combo
of
dns
load,
balancing
per
cluster
and
grass
per
cluster
service,
etc?
So
that's
a
good
it's
a
good
problem
area
to
go
explore
because
a
we
know
it's
possible
to
do
b.
B
We
probably
want
to
use
existing
objects
as
much
as
possible
and
see.
Given
those
constraints.
Could
we
then
say,
like
here's,
a
minimum
path
to
a
prototype,
just
showing
the
concept
that
then
sets
up
a
working
group
of
further
investigation?
Probably
it
would
be
a
subset
of
transparent
multi-cluster,
which
would
be
now
you
can
move
a
workload
between
clusters.
B
Is
that
something
like?
How
much
have
you
thought
about
this
recently
and
like?
Are
there
other
new
implications
in
your
head?
Looking
or
new
things?
You've
thought
through
recently.
G
Not really. I mean, I was just writing in that issue the happy path: you know, it's just north-south traffic. And checking your comments, I saw the Kubernetes global load balancer project, which is interesting.
B
Yeah
and
the
the
history
of
this
was
cube,
fed
had
this
problem
and
then,
when
we
said
you
know
what
q
fed's
just
too
hard
like
we're
too
early
for
cube
fed,
we
couldn't
make
all
the
abstractions
work.
It
was
push
model.
It
had
root
access
on
all
the
clusters.
B
It
was
dealing
with
api
incompatibility,
the
the
set
of
folks
involved
with
sigma
multiverse
federation.
At
that
point,
reorganized
around
like
getting
traffic
too,
and
so
google
had
some
projects.
The
global
load.
Balancer
project
exists.
Some
people
were
using
external
dns,
so
it
was
kind
of
a.
I
feel
like
there's
a
research
phase.
B
I
really
would
hope
that
if
you
have
an
ingress
at
the
top
level
in
the
kcp,
we
should
be
able
to
carry
ingress
through
when
gateway
spec
comes
in.
We
should
be
able
to
do
gateway,
spec
carry
that
through.
If
you're
doing,
service,
mesh
or
services,
we
should
be
able
to
at
least
fake
out
enough
of
that.
So,
like
you
have
two,
this
is
the
problem.
B
We're
not
solving
with
the
second
prototype
will
be
in
the
third
prototype
phase
would
be
two
services
in
two
different
name:
spaces,
the
same
logical
cluster
or
two
services
in
a
logical
cluster.
B
They
declare
a
dependency
on
each
other
mechanism,
unknown.
They
both
get
placed
to
clusters,
and
then
somebody
needs
to
say.
If
you
hit
service
a
from,
if
service
a's
pods
hit
service
b
service
from
their
pod,
how
do
they
get
to
service
b?
How
do
we
fool
dns?
How
do
we
do
like?
Do
we
do
service
linkage
rewriting?
B
We were using the search path, so that if you wanted to talk to service foo, on the search path there would be a prefix that would route you to the other cluster, if everything was set up correctly.
B
Spoofing
man
in
the
middling
should
be
problematic
or
difficult
at
best
or
at
worst,
and
at
best
it
probably
logically,
like
you
want
to
have
some
level
of
whether
it's
submariner,
you
know
direct
service
to
service
reachability,
which
I
know
some
people
said
like.
So
this
is
kind
of
that
whole
like
tip
of
the
iceberg,
where
what
are
the
axioms
we
would
put
in
place.
B
My
thought
was
to
put
actions
in
place
around
what
we
want
the
user
to
think
about,
so
they
don't
have
to
change
their
assumptions
and
then
the
goal
of
all
the
technology
is
to
go.
Make
those
assumptions
work,
whether
we
layer
or
whether
we
go
and
take
huge
chunks
out
of
cuba,
we're
like
q,
sucks,
we're
going
to
fix,
cube
and,
like
I
think,
being
an
orchestrator
of
cube,
gives
us
a
few
advantages
like
we.
We
need
to
our
goal,
isn't
to
directly
manifest.
B
What's
in
the
kcp
level
in
the
underlying
clusters,
we
are
allowed
to
change
them
as
long
as
the
semantics
are
preserved,
so
the
sooner
we
get
to
like.
What's
the
actual
semantic
we
want
for
everyone,
like
jason,
I
think
that's,
the
the
our
back
example
is
a
semantic
which
is
the
semantic.
Is
you
have
access
to
a
service
account
that
lets
you
talk
to
the
control
plane
that
doesn't
mean
that
you're
going
to
be
able
to
talk
to
the
control
plane
of
the
cluster.
You
end
up
on
as
a
controller.
B
You
actually
do
need
to
tell
us
which
control
plane
you're
talking
to
so
what
we've
kind
of
said
is
we're
not
going
to
magically
sync
our
back,
because
the
mechanism
for
telling
us
who
you're
going
to
talk
to
requires
some
opt-in.
I
think
there's
something
similar
for
services
network
like
if
I
want
to
punch
through
all
the
firewalls
and
run
and
talk
to
an
ingress.
I
should
probably
reference
the
ingress.
How
do
I
do
that
on
a
cluster?
B
You
know?
No
one
really
does
that,
because
there's
no
feedback
loop
through
the
gateway
api
through
ingress
today,
like
routes
and
openshift,
has
it
where
you
can
go,
get
the
public
name
or
the
internal
name
does
gateway.
Have
that
I
thought
we
added
that
to
gateway
via
status.
B
A few people have added some of those, like the service binding and Submariner work; SIG Network's got the external service concept; services can also be pointed at things... So some variation of those could be it; we want to think about what the vehicle is. And the service mesh obviously has this, and that gets into questions of...
B
So
I
would
probably
say
we
may
want
to
file
this
as
just
a
sub
item
under
transparent
multi-cluster
for
now,
but
it
feels
like
it's
going
to
spin
out
into
its
own.
Like
it's
the
first
of
the
here's,
the
use
case,
here's
the
assumptions,
here's
the
deep
expectations
we
would
put
in
place
like
services.
Just
work
ingress
just
works,
that's
the
design
constraint.
Can
we
do
it?
If
we
can't
would
we
have
to
change
about
ingress
service
gateway
to
make
that
happen.
D
Yeah, one thing that stuck out to me when you were saying that: you reminded me that we talked about this yesterday too. Not just not syncing RBAC cluster roles down, but don't bother syncing any RBAC down, because if your pod's service account needs RBAC, we assume you will be talking to the API server, and if you're doing that, you should just talk from outside to kcp directly.
D
I
want
to
call
that
out
to
remind
myself
to
note
that
the
aside
from
getting
texting
to
work
against
kcp,
I
really
want
to
see
k
native
work
against
kcp,
and
I
think
all
of
this
networking
stuff
is
like
really
the
only
hard
part
good
news.
The
only
hard
part
it's
really
hard
is.
D
Yeah
yeah
sh
should
be
a
problem
should
be
no
problem
at
all,
but
I
definitely
think
that
the
k
native
controllers
should
talk
to
kcp.
They
they
do
the
same
thing.
That
techton
does
right
that
a
lot
of
controllers
do
they
take
their
crds,
move
them
around
and
then
hand
back.
You
know
regular
kubernetes
services
and
regular
kubernetes
deployments
and
regular
pods.
So
they
don't,
they
don't
do
magic,
except
in
translating
you
know,
crd
to
regular
kubernetes
type.
B
Yeah, and I mean, there'll be some interesting assumptions, and we probably need a couple of these use cases anyway, because there are some assumptions that we may just not remember. I've been thinking mostly about the pod assumptions, like: a pod can make a DNS call, without any external dependency, to the short name of a service in its namespace. What would it take to fix that? You know, most people use either the local name or something.namespace.whatever.
B
What
would
it
take
to
fix
that?
What
would
it
take
to
make
service
reachability?
But
then
the
next
level
for
a
techton
or
a
k
native,
is
in
tecton's,
probably
a
little
bit
more
useful,
because
it's
a
different
kind
of
problem
of
what
are
the
implicit
dependencies
that
a
tecton
pod
has
on
a
cluster
that
are
distinct
from
kind
of
the
general
assumptions
that
we
would
need
to
be
thinking
about
for
other
types
of
workloads
like
hey
you've
got
a
controller
you
want
to
make
it
work
against
a
control
plane.
B
Oh,
here's
the
three
easy
assumptions
that
you
can
either
fix
and
then
just
stop
doing
that,
like
just
be
explicit
and
most
controllers,
can
be
more
explicit,
like
use
a
service
ralph
use
like
for
pods
and
environment
variables
is
like
take
an
environment,
variable
and
say
pod
spec
and
from
you
know,
end
source
service
value.
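[Editor's note: a Go sketch of the "be explicit" idea: the env var declares which service it depends on, so the control plane can resolve it for whatever cluster the pod lands on. A service-backed EnvVarSource like this is hypothetical; core Kubernetes has no such field today, and all types below are invented for illustration.]

```go
package main

import "fmt"

// ServiceRef is a hypothetical explicit dependency on a service.
type ServiceRef struct{ Namespace, Name string }

// EnvVar sketches a pod env var that is either a literal or derived from a
// declared service dependency.
type EnvVar struct {
	Name        string
	FromService *ServiceRef // hypothetical; not in the real core/v1 EnvVarSource
}

// resolve rewrites service-backed env vars to whatever address is correct
// for the cluster the pod actually landed on; because the dependency is
// declared, a syncer can see it and route or rewrite it.
func resolve(vars []EnvVar, lookup func(ServiceRef) string) map[string]string {
	out := map[string]string{}
	for _, v := range vars {
		if v.FromService != nil {
			out[v.Name] = lookup(*v.FromService)
		}
	}
	return out
}

func main() {
	vars := []EnvVar{{Name: "BACKEND_URL", FromService: &ServiceRef{"team-a", "backend"}}}
	env := resolve(vars, func(r ServiceRef) string {
		// Toy lookup: a real system would return the reachable address for
		// wherever the service was placed.
		return fmt.Sprintf("http://%s.%s.svc.cluster-west.example:8080", r.Name, r.Namespace)
	})
	fmt.Println(env)
}
```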
B
That
has
a
lot
of
advantages,
because
then
we
know
that
something's
referencing
it
there'll
be
some
others
that
we're
just
not
thinking
about
like
the
reference
to
an
ingress
which
came
up
like
four
or
five
years
ago,
we
spent
a
bunch
of
time
talking
about
how
pots
could
automatically
figure
out
the
full
name
of
an
ingress,
and
we
just
said
no.
It's
just
too
hard.
B
Openshift
went
a
little
bit
further
than
we
did
in
base
cube
because
we
had
the
data
available,
but
it
was
still
almost
too
hard
of
a
problem
to
really
solve
at
the
time.
Maybe
it's
time
for
us
to
go
back
and
say:
oh
well,
maybe
we
need
pods
to
support.
B
You
know
some
level
of
you
know:
data
injection
that
is
mostly
agnostic
to
the
the
pod,
but
like
a
controller
running
on
the
physical
clusters,
could
actually
go
materialize
if
it
had
to,
or
maybe
the
sinker
can
do
it.
B
And in practice, most teams are already doing some form of this. I mean, every team takes some kind of explicit dependency on another team in a microservices setup; it's just that everybody's mechanism for signaling that explicit dependency is different. Sometimes it's a security boundary; sometimes it's security through physical network access; sometimes it's being placed into the same geographic region where you can access the data; sometimes it's that you get a token that allows you to access it; sometimes it's that you rely on a kube primitive to do it, which does some of that.
B
All of those, we probably... there are things that could be tested or done in the community in these projects, and then we just leverage them; all we have to do is subtly tweak them to be a little bit more useful. Here are the five mechanisms we detect, because those five mechanisms actually work; everything else is: use one of those five mechanisms. Yeah, although I do kind of like the idea of forcing people to at least declare some interdependency: all of our magic happens inside a single context.
B
If
you
want
to
cross
contexts
forcing
people
to
explicitly
declare
across
context,
connectivity
has
a
lot
of
advantages
because
you
get
a
place
that
you
can
hang
other
meaning
off
of
now
it
can't
be
too
generic
or
it
falls
into
the
like.
Why
would
I
set
this
up
trap?
It
just
has
to
be
just
useful
enough
that
you're
like.
I
would
do
this
all
the
time,
because
it's
so
useful
because
it
provides.
D
That's where the default fallback, of just scheduling a whole namespace onto a cluster, is useful, because you get something for free: you get non-HA resiliency for free, but if you just give us a little bit more information, we can make this really, really great. Minimizing the amount of information you have to give us to do the really great thing is ideal, but we should be able to do some amount of magic for free. Oh, I was thinking of something else while you were talking about the data injection: Tekton also doesn't care as much, because of its relatively asynchronous, you know, batch workload model. So Tekton is going to be even easier to support for kcp, because we can just put it all in one cluster, and if that cluster goes away, your tasks won't run for a couple of seconds while we reschedule you to another cluster; but it's not like you're not serving traffic.
B
That
is
the
magic
of
cube,
which
is
the
99
of
the
value
of
cube,
is
within
a
few
seconds.
The
right
thing
happens
and
then
most
times
you
don't
think
about
it.
That
is
our
job,
like
your
problem.
Multi-Cluster
feel
like
that
and
not
feel
like
a
burden.
So
I
do
think
that,
like
I
think
this
is
capturing
philosophy.
Maybe.
B
Yeah,
maybe
we
don't
have
a
explicitly
calling
this
out,
but
it's
like
the.
What
is
the
magic
we
want
is
the
magic
because
it
just
works
most
of
the
time
and
the
mindset
is
yeah
like
the
the
retry-ness,
the
failure
like
any
time
we
see
one
of
those.
I
think
we
should
hang
on
to
it,
be
like
that's.
Actually,
the
thing
that
makes
techton
tecton
could
lean
more
into
that.
What's
the
right,
retry
behavior,
how
do
you
encode
that
in
the
tecton
apis
in
tecton's
case
it
already
is
in
some
other
cases?
B
You
know
people
write
objects
that
don't
think
about
retry,
okay,
let's,
let's
get
you
into
a
world
where
you
think
about
that,
then
that
just
takes
a
whole
class
of
problems
off
the
table
and
an
admin
can
come
and
be
like
hey.
I
go,
cancel
this
job
and
like
destroy
this
whole
cluster,
and
everything
just
works
is
our
like.
Is
our
magical.
D
Yeah, and the goal being that if you are already architecting your app to take advantage of multi-node clusters, it should not require any, or much, new effort to support multi-cluster things with kcp. It's certainly possible to write a multi-service architecture where actually everything has to be on the same node, where you've done it poorly, and in that case you will still end up on the same node of the same cluster with kcp.
D
But
if
you
decompose
things
so
that
they
can
work
across
nodes,
it
should
also
work
across
clusters.
H
I missed part of the conversation, so apologies if this is restating it, but in order to get to that outcome, you have to address sharing the pods that are scheduled behind a common load balancer that's accessible across the set of clusters, and perhaps also being able to guarantee that the clusters to which pods are scheduled have network connectivity between them, whether because they're publicly routable or because they have some private network, VPN mesh...
H
...you're going to require a much more white-box approach. I mean, what we built with Open Cluster Management very much assumes that you've got a white-box approach: that you can view the inventory of the fleet, that you can schedule work across members of the fleet, that you can establish network meshes; and then there are not a lot of assumptions made about the front-end load balancers. If kcp is meant to present an API that makes more of this part of the underlying fabric...
H
You've
got
to
address
exposing
the
global
load
balancer
for
pods
that
are
scheduled
across
multiple
clusters
and
some
level
of
guarantee,
or
at
least
the
ability
to
communicate.
If
there
is
a
requirement
that
pods
can
communicate.
Clearly,
stateless
services
maybe
won't
care,
but
if
you
ended
up
having
mongodb
replica
sets
and
you've
got
a
replica
in
cluster
one
and
replica
and
cluster
two
and
a
replica
on
cluster
three,
how
do
you
ensure
that
those
three
clusters
can
adequately
communicate
with
the
right
levels
of
latency,
anti-affinity
and
scheduling?
B
Yep,
I
I
michael,
I
think
you
hit
it
on
the
head.
It's
that
white
box
versus
black
box,
and
actually
one
of
the
arguments
is
even
cube.
Has
that
because
you
can
absolutely
go
build
horrific
and
by
horrific
I
mean
useful
and
productive
things
that
are
completely
node-aware
and
a
huge
amount
of
people
that
I
know
that
are
doing
the
complex
stuff
in
cuba
are
doing
stuff
like
that
right.
B
They
machines
have
meaning
they
have
physical
devices,
they
have
characteristics
and
then,
on
the
other
extreme
you
have
the
black
box
approach,
which
is
just
a
pool
workload
we'll
always
have
both
kcp
should
tilt
towards
the
black.
The
transparent
multi-cluster
is
tilting
towards
the
black
box
side,
and
the
ocm
approach
is
actually
complementary
to
that,
because
it's
focusing
on
the
white
box
devin's
problem,
you're
thinking
about
what
you
do
with
like
cluster
api,
and
this
is
stuff
that
was
brought
up
in
the
cluster
api
nested
is.
B
Is
there
a
black
box
approach
to
some
of
the
infrastructure
side?
That's
orthogonal
to
the
outside?
Probably,
but
it's
a
separable
problem
like
we
don't
we
don't
have
to
solve
both
of
them.
The
same
way,
we
don't
have
to
use
the
same
tools
for
them,
but
we
should
have
tools
that
can
be
used
in
multiple
ways
to
solve
different
black
box
problems
like
the
black
box.
B
That's
going
to
lead
very
naturally
to
things
like
cluster
pools
and
capacity
pools,
and
it
will
also
lead
to
policy
based
pools
that
still
need
to
be
implemented
by
a
real
white
box
approach,
because
somebody
at
the
end
of
the
day
has
to
say
like
how
do
I
take
a
cluster
and
make
sure
it
fits?
It
was
cool.
H
I
I
will,
I
will
maybe
poke
a
little
bit,
though
I
agree
with
the
majority
of
that.
Maybe
the
one
thing-
and
I'm
not
sure
if
this
is
the
point
where
we
are
disagreeing
or
just
to
clarify
the
surface
area,
for
how
a
consumer
interacts
with
a
black
box
versus
a
white
box,
multi-cluster
story.
The
fact
that
those
might
have
different
api
services
is
completely
on
point,
the
primitives
that
we
use
to
orchestrate
cluster
behavior
of
the
physical
clusters
right
of
what
the
logical
clusters
are
managing
as
much
as
possible.
H
I
think
it
is
a
desirable
characteristic
that
we
normalize
those
api
and
those
primitives,
because
when
we
operate
this,
we're
still
going
to
want
going
to
want
to
have
insight
into
its
actual
behavior
right.
The
operators
are
going
to
still
want
some
visibility
that
below
the
kcp
there
do
exist,
these
x
amount
of
physical
clusters
hypershift
or
otherwise,
and
that
the
those
physical
clusters
have
a
certain
characteristic
of
health
and
availability
and
versions
of
inventory,
etc.
H
So,
while
the
open
cluster
management
is
much
more
of
a
white
box
approach,
if
kcp
is
presenting
a
black
box
api
and
surface
area
for
consumers,
that's
great
if
we
can
look
to
normalize
either
by
making
adjustments
in
what
we
do
in
open
cluster
or
stimulating
or
influencing
how
kcp
evolves
to
continue
to
define
these
models.
H
Trying
to
get
those
to
converge,
I
think,
is
a
desirable
outcome
so
that
whether
I
care
about
that
api
of
what's
happening
in
open
cluster
or
I'm
simply
trying
to
run
an
infrastructure
that
is
operating
underneath
the
kcp
layer
that
I
can
do
that
in
a
consistent
way.
B
Yeah,
I
think
convergence
on
actuatable
things
is.
I
think
how
I'd
summarize
that,
second
bit,
it's
like
we
want
to
have
things
that
are
orthogonally
actuatable
and
we
want
to
have
tools
that
work
well
and
then
we
want
to
make
sure
that
you
don't
have
to
relearn
the
same
concepts
over
and
over.
So
even
when
you
are
doing
you
want
to
have
the
minimal
set
of
concepts
for
multi-cluster,
but
you
also
want
them
to
have
kind
of
a
strong
I
unfortunately
have
to
drop.
B
This
is
a
great
chat
today,
jason
david,
it
was.
If
you
want
to
do
any
follow-up,
I
would
have
to
leave
with
that.
So
sorry,.
D
Yep, the only other thing left on the agenda is TGIK, Thank Goodness It's Kubernetes, I guess, on Friday. It's a stream; they're gonna talk about kcp and dive into it, and see if they can use it or break it, or... I'm not exactly sure what to expect, but I'll be there, and hopefully it goes well. So if you're interested in watching, make some popcorn and come join us; if not, it's recorded, and you can watch it later. Also, thanks.
D
That's it, I think, and we'll see you on the Slack, or here next week, or in issues or pull requests or anything you want. Thanks for...