From YouTube: Community Meeting, April 12, 2022
A: Right — hello, everyone, welcome to the kcp community meeting, April 12, 2022. We have a couple of things on the agenda — or one thing: somebody mentioned these other projects that might be interesting to discuss. Although I don't see him here — if you are here, [inaudible].
B: That guy — I don't think he's on yet, but I could be a representative.
B: Yeah, so these two suggestions are potentially solutions for the problems that we were talking about when it comes to migrating persistent data from one cluster to another in a scheduled type of format.
B: I know that VolSync has about three mechanisms — there's pretty much a one-to-one and a one-to-many. I'm not as familiar with the COSI KEP, but these are potential solutions that would allow us to keep multiple sites in a replication pattern, to allow us to, you know, transition the workloads from one cluster to another.
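For a concrete picture of the one-to-one case mentioned above, here is a minimal sketch, assuming VolSync's `volsync.backube/v1alpha1` `ReplicationSource` API; the PVC name, schedule, and destination address are invented for illustration.

```go
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := dynamic.NewForConfigOrDie(cfg)

	// One-to-one replication: snapshot the source PVC every ten minutes
	// and rsync it to a single destination. PVC name, schedule, and
	// destination address are hypothetical.
	src := &unstructured.Unstructured{Object: map[string]interface{}{
		"apiVersion": "volsync.backube/v1alpha1",
		"kind":       "ReplicationSource",
		"metadata":   map[string]interface{}{"name": "app-data-source"},
		"spec": map[string]interface{}{
			"sourcePVC": "app-data",
			"trigger":   map[string]interface{}{"schedule": "*/10 * * * *"},
			"rsync": map[string]interface{}{
				"copyMethod": "Snapshot",
				"address":    "replication.us-west.example.com",
			},
		},
	}}

	gvr := schema.GroupVersionResource{Group: "volsync.backube", Version: "v1alpha1", Resource: "replicationsources"}
	if _, err := client.Resource(gvr).Namespace("default").Create(context.TODO(), src, metav1.CreateOptions{}); err != nil {
		panic(err)
	}
}
```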
A: Yeah, so this is related to sort of longer-term stuff we want to do with our multi-cluster scheduling: where, if you have a stateful application running in, you know, US East or something, and US East gets hit by a meteor, you need to move it to US West.
A
We
should
capture
that
capture
the
fact
that
it
is
a
stateful
workload
and
where
that
state
is,
and
keep
a
copy,
keep
a
backup
currently
constantly
in
sync
between
us
east,
where
it's
running
and
somewhere
else
where
it
should
be,
could
be
running
and
then
right.
So
so
keeping
it
constantly
in
sync
is
something
that
it
sounds
like
bolsync
might
be
able
to
help
us
with,
which
is
great,
because
I
feel
like
that's
a
lot
of
hard
work.
We
didn't
really
have
a
good
idea
for
how
to
do
so.
A: If somebody's already looking into it, then excellent — we won't have to. I think we'll still have to figure out how to express it in the scheduling inputs, like: run this here.
A: You know: run this in any of these three places, wherever the up-to-date one is, but then keep two copies in sync elsewhere — keep n copies in sync elsewhere — to feed into VolSync or whatever to actually do that. But it's definitely encouraging to hear that other folks are already looking into this, because I think that was something that was sort of intimidating for me, personally, to think of having to write ourselves. Sean asks: what about something like Ramen? What is Ramen, aside from a tasty noodle dish?
C: Ramen is ODF's metro and async DR solution that will do all the, like, data recovery operations across stuff, and it's baked in — at least, I could be wrong; I thought that Josh was on, so he might have a much better understanding of this — but I thought it was actually in ACM as well, and being used for spoke cluster DR.
D: That's right — well, it's not baked into ACM; you deploy it in ACM and you get it. But it's part of the Platform Plus piece, and it works with ACM's app delivery. And actually, as Ryan was talking about VolSync — they're integrating Ramen with VolSync now as well.
A: Yeah, cool. Jose, you have your hand up?
E: Yep, I'm also from ODF engineering, as it were. Yeah, by and large, Ramen is targeted initially to be the disaster recovery solution for ACM, right, and it's built on top of a number of specific features, upcoming and existing, including Ceph volume replication, integration with VolSync, and different strategies thereof, depending on whether it's regional disaster recovery or metro-wide disaster recovery — the metro DR actually being, I believe, synchronous volume replication, for example. But these are all dependent on the storage technology underneath. For something like kcp...
E: ...it may be more viable to look at object-storage-based solutions, which I think VolSync is capable of doing; but otherwise we would, at present, need to bring in the full load of ACM to really make use of it.
A: Yeah, I think this exposes the real question in terms of kcp: how will kcp make this someone else's problem — where someone else might be OCM, or, you know, some combination of OCM, VolSync, Ramen, et cetera — such that kcp's responsibility is not to do any of this, but to orchestrate and signal to other things how to do it: to tell OCM, I'm going to put this here, but also keep a backup there. It's good that, right now, other people are understanding the problem too.
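One hedged way to picture that "orchestrate and signal" split — every annotation key here is hypothetical, not an existing kcp or OCM API — is kcp stamping its scheduling intent onto the workload's namespace and leaving the replication work to an external controller:

```go
package placement

import "strings"

// Hypothetical annotation contract: kcp records where a stateful
// workload runs and where standby copies must be kept in sync; an
// external replicator (OCM, VolSync, Ramen, ...) acts on it.
const (
	// Primary placement chosen by the kcp scheduler.
	placementAnnotation = "scheduling.example.dev/placement" // e.g. "us-east"
	// Locations that must hold an up-to-date copy of the state.
	replicateToAnnotation = "scheduling.example.dev/replicate-to" // e.g. "us-west,eu-central"
)

// desiredAnnotations is what kcp would write on the namespace; a
// replication controller watches for these and reconciles VolSync
// sources/destinations to match.
func desiredAnnotations(primary string, standbys []string) map[string]string {
	out := map[string]string{placementAnnotation: primary}
	if len(standbys) > 0 {
		out[replicateToAnnotation] = strings.Join(standbys, ",")
	}
	return out
}
```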
A: Cool, but this is super interesting, I mean. I think this is something we haven't really deeply explored beyond the hand-waving phase of design. You know, definitely, I'd say that the phases for transparent multi-cluster are: schedule anything; then schedule stateless things without downtime — where you can move things around without downtime; and then, beyond that, stateful workloads are definitely next.
F: The syncer is moving towards being mostly purpose-built just to sync things; there's a scheduling component that will be — or is — split out, and will evolve so that it handles scheduling and the syncer handles syncing. Okay.
F: Where we are at this point is: we have the location APIs that are in design, and I know that we're going to be working on advancing some scheduling bits. So, trying to have the folks with the storage knowledge look at that, and then think through what sort of prototyping could be done to make some of the storage migrations happen.
A: Yeah — Josh? Josh, you had your hand up first.
D: I was just gonna say — just, you know, sitting here as we've talked about this in the last five minutes — I totally see, Sean, like, at the scheduling layer, something with, you know, Ramen being one way to put this together. And, keeping with some of the way, you know, we expect kcp to expose things to the user layer, there's a possibility that we almost see nothing about it: just like the splitter splits deployments, you know, your deployment — your stateful deployment — is, you know, split between clusters, and it fails over as needed, you know, using the Ramen pieces, or something based on the Ramen technology and one of the PV sync tools under the covers. Versus, then, there's always the management side, where you do expose those up to kcp, but that's more about the fleet and the admin roles in it. I think there's lots of opportunities to do neat things here.
A: Yeah.
E: Right — it's a matter of sort of defining these storage use cases that are needed, or are wanted, right? So, more or less, anything that's stored in etcd can, just by default, be replicated fairly simply — ConfigMaps, Secrets, etc.
E: And if you have, quote, stateless applications just accessing object storage, the management of credentials becomes the highest priority, for which the COSI standard might be viable. But also, I don't remember what state of discussion the COSI KEP is in, and whether and when it may actually merge into Kubernetes; so that would be something to keep in mind. And then, beyond that, it's a matter of figuring out how we want to manage PVs, right?
E: It's the PV part that always gets tricky, and one simple handwave solution is to have storage outside of our managed Kubernetes clusters, right — have some other Kubernetes cluster, or just some other thing entirely, dealing with storage, and just connect remotely that way. Because, yeah, trying to manage converged storage, as it were, via kcp is quite the rabbit hole, just offhand.
A: Yeah, yeah, cool. I guess, if anybody is listening to all of this and wants to go play and explore and see what they can make work: this is a bit of a longer-future thing that I think kcp is planning for, but that just means you have more time to play and explore and prototype stuff. It's definitely interesting to us; it's definitely something we will want and need, to be able to do this with stateful workloads. But yeah.
A: This looks really cool. Nick says: potential use case — blue/green changes to logical cluster configuration; picture pivoting between logical clusters, replicating a key space and volumes to n+1 logical clusters. Yeah, that's interesting.
A: Cool. Yeah, let me know, or reach out on the Slack, if you want to play with this more, and we can help point you in a direction. Otherwise, this looks really cool, yeah. I guess, real quick, an update on me: Friday is my last day at Red Hat, and so I will no longer be joining these meetings, but I'll still be around in the various — you know, the four dozen Slacks we're all members of. Feel free to reach out to me.
A: If you have any questions, ping me on kcp stuff if you want, and I'll help however I can. But otherwise, yeah, it's been great working with all of you — and we can move on to the next item, which is the 0.4 status check-in.
A: Good question: I'm not continuing this role on kcp. I would say probably Andy and Stefan continue to be the main point people for stuff in kcp arenas. Thank you for that affirmative ack.
A: Otherwise I feel like I'm just throwing responsibility at people, but yeah.
A: Thank you. Yeah, I mean, you never go far, and it's a small community, so if you see me at a KubeCon, throw something at me and we can say hi. All right — yeah, Andy, do you want to do the status check?
F: Yes. So, I was looking at the milestone; there are 59 issues and PRs currently in it. Several of them are things that just got thrown in when we shifted from 0.3 to 0.4, so I will probably take a pass, and a lot of these things will just get bumped, because they're not blockers for the milestone. But I am curious about folks who are working on things that are in the 0.4 demo script doc and/or the work packages doc.
F: Sean, go ahead.

C: I don't know if it was part of the demo script, but — using validating webhooks with APIBindings.
F: Where I am right now: I'm evolving APIBindings and APIExports based on what's in the demo script. So APIExports are going to get identities, and we need a virtual workspace so that controllers can connect to it and see only the things that they should see, based on their export identity.
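As a rough sketch of that shape — these are illustrative stand-ins, not the actual kcp types, which were still being designed at the time — an export carries an identity, and a binding names the export it consumes:

```go
package apis

// Illustrative only: simplified stand-ins for the APIExport/APIBinding
// shapes discussed above, not the real kcp API types.

// APIExport publishes a set of API resources under an identity; the
// identity hash is what lets a controller prove which export it serves
// and scopes what it is allowed to see through the virtual workspace.
type APIExport struct {
	Name            string
	ResourceSchemas []string // schemas this export serves
	IdentityHash    string   // derived from a private identity secret
}

// APIBinding makes an export's APIs usable inside a workspace.
type APIBinding struct {
	Name      string
	Workspace string // path of the workspace holding the export
	Export    string // name of the APIExport being bound
}
```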
F: So I'm going to work with David on getting that in place, and you and I, Sean, should sync up and talk about what that's going to look like for the validating webhooks, because I now have a better idea than when we last spoke about how that's going to look, in both etcd and in the APIs. So I'll be cautiously optimistic that we can get everything aligned at the right time and have it work out. — Sounds good to me. Like I said: orange flag.
F: Very good question, though. So, for the other things — let me get my demo doc up. I'm not gonna share; I'm just looking real quick. So: the location work. I know Stefan is the point on that, and he's out this week. Is anybody working with him on the location demo and the location APIs?
F: Or is that mostly just him? Mostly just him — okay, so we'll come back to that. Joakim: the advanced scheduling bit, how's that going?
H: Well, I'm planning to start that tomorrow, I would say, because I'm still working on the kind and end-to-end tests, and getting some — well, finishing some — tests that I owe you all from the previous prototype. So I expect to be able to start working on that this week.
I: Maybe — sorry — maybe I can add just one point. We discussed that this morning with Joakim, in fact, in the context of the, you know, work of future exploration about syncing strategies, which would obviously be for P5.
I had to do some changes on the syncer, especially to manage the, you know, removal-from-a-location flow, which is in fact quite the same as the provisioning one we wanted to set up for P4 — which means that the changes on the syncer would possibly be some sort of, you know, skeleton, or nearly the same code, as what we would use for P4. So I'm in the process, since I've come back, of, you know, preparing the basic syncer virtual workspace for P4 as well. I've just cherry-picked this commit, and I'm currently testing the syncer with the basic syncer virtual workspace.
So it might be that, in fact, we can sync, Joakim, and have that part — I mean the syncing part — done quite quickly. Then, of course, there is the scheduling part and the, you know, placement annotations and flow that have to be implemented on top of that. But for the syncing part, at least, we might be nearer than we thought.
F: Okay, cool, thank you. For the rest of the doc: there's an authorization one, but I don't see either of the folks on for that, so I'll follow up offline. And then the remaining two were the webhooks and bindings that Sean and I were talking about. In terms of demos, we've got the "towards the pull mode syncer" one — Maru, I know you're actively working on that; still feeling pretty good about the date for that?
J: I don't know if I have a hard-coded date on that, but we are making good progress. We're working on getting a plug-in — kind of a semi-automated plug-in deployment — and the intention is to actually use that directly in testing, so we'll be validating, against kind, the path the users will be using. And for the in-process syncer, which we're currently testing: we're going to basically generate the configuration, apply it to the cluster — even if it's a logical cluster — and then read it out to configure the in-process syncer. So we're not—
F: Okay, sounds good. And I don't see Antonio here for pod logs. CRDB research — Steve, that's still going well?
K: Yes — no, sorry, this morning is wild. Yeah, yeah.
K: So, the preliminary results that I posted last week look really good: we're within the same order of magnitude, basically, even when we're reading and writing with secondary indices. Right now I'm working through deploying HA databases for both and then looking at the impact there; and then I'll start doing, like, sort of the full suite of benchmarks, which is going to be with and without the watch cache. We're also going to be looking at performance as a function of index selectivity, and then we're going to be looking at some scaling numbers as well.
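For flavor, a minimal sketch of the selectivity benchmark described — assuming a CockroachDB endpoint speaking the Postgres wire protocol, the `lib/pq` driver, and a hypothetical `objects` table with a secondary index on `namespace`:

```go
package bench

import (
	"database/sql"
	"fmt"
	"testing"

	_ "github.com/lib/pq" // Postgres wire protocol; works for CockroachDB
)

// BenchmarkIndexedList measures list latency through a secondary index,
// varying how many distinct namespaces the queries spread across.
// Table and index are hypothetical:
//   CREATE TABLE objects (key STRING PRIMARY KEY, namespace STRING, value BYTES);
//   CREATE INDEX idx_namespace ON objects (namespace);
func BenchmarkIndexedList(b *testing.B) {
	db, err := sql.Open("postgres", "postgresql://root@localhost:26257/bench?sslmode=disable")
	if err != nil {
		b.Fatal(err)
	}
	defer db.Close()

	for _, namespaces := range []int{1, 10, 100} {
		b.Run(fmt.Sprintf("namespaces=%d", namespaces), func(b *testing.B) {
			for i := 0; i < b.N; i++ {
				rows, err := db.Query(
					`SELECT key, value FROM objects WHERE namespace = $1`,
					fmt.Sprintf("ns-%d", i%namespaces))
				if err != nil {
					b.Fatal(err)
				}
				for rows.Next() {
					// drain the result set so the query completes
				}
				rows.Close()
			}
		})
	}
}
```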
K: These benchmarks — which, I don't expect... I think the expectation is we're going to go forward with it, because it does simplify our life quite a bit on the scaling piece. And then I think the largest next inflection point is a presentation that's going to be coming soon to SIG API Machinery.
K: What are the approaches that might let us put this in-tree? Understanding whether or not that's going to be viable will give us really good insight into how much maintenance burden this is going to be for us — whether this is something that's going to live in a set of patches, or if it's, you know, just going to be consumed directly from the tree.
K: I think so. One of the things — when I first did the implementation, actually, with our kube fork: there's a medium-to-large amount of cruft in parts of kube that we're not actively using, and there's a lot of, like, exploratory work that was done about a year and a half ago that broke a bunch of stuff, and getting CRDB to work with that fork of kube took a lot of time. And so I think, before we choose to do a deployment like that, we might have to think about how we undo some of those.
K: Sorry, I'm very scatterbrained. I think — Kyle, to your question — I'd like to land the changes in the fork in a testable manner, and in order to do that we need to make that repo's hygiene a little bit better. So I think it's a little bit out still. Cool.
F: Okay — well, thank you for that update, Steve; that was definitely what I was looking for, so much appreciated. The last one on the list is the scoped client and informer generation: Varsha, Nick, and Fabian. I know we talked yesterday, but, for the group here, could you all give an update?
M: Yeah, absolutely. So, we've been working on getting the client re-generation working with the controller-tools framework; I think we're on a pretty good track with that. We're also, at the same time, looking into the changes that we need in the Kubernetes library upstream to support wrapping listers and informers, because there's some plumbing there that needs to be done so that we can pass the cluster-aware key function all the way through.
M: So Nick is investigating that right now, and we're just kind of plodding along getting the client wrappers generated at the moment. I don't anticipate that we'll have trouble hitting the end of the month, but we'll have a much better picture of all the work that will be required by the end of this week.
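A minimal sketch of the kind of cluster-aware key function being plumbed through; the `cluster|namespace/name` key format and the annotation used to recover the logical cluster are assumptions for illustration, not necessarily what the generated wrappers will do:

```go
package keys

import (
	"fmt"
	"strings"

	"k8s.io/apimachinery/pkg/api/meta"
)

// ClusterAwareKeyFunc keys an object by its logical cluster as well as
// its namespace and name, so one shared informer cache can safely hold
// objects from many logical clusters. The annotation name is a stand-in.
func ClusterAwareKeyFunc(obj interface{}) (string, error) {
	acc, err := meta.Accessor(obj)
	if err != nil {
		return "", err
	}
	cluster := acc.GetAnnotations()["example.dev/cluster"] // stand-in for the logical cluster name
	if acc.GetNamespace() != "" {
		return fmt.Sprintf("%s|%s/%s", cluster, acc.GetNamespace(), acc.GetName()), nil
	}
	return fmt.Sprintf("%s|%s", cluster, acc.GetName()), nil
}

// SplitClusterAwareKey reverses ClusterAwareKeyFunc.
func SplitClusterAwareKey(key string) (cluster, namespace, name string) {
	if i := strings.Index(key, "|"); i >= 0 {
		cluster, key = key[:i], key[i+1:]
	}
	if i := strings.Index(key, "/"); i >= 0 {
		return cluster, key[:i], key[i+1:]
	}
	return cluster, "", key
}
```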
F: Okay, thank you. If I missed anybody, apologies — it just wasn't in the doc. So please err on the side of over-communicating: if you're running into problems with your work, please come chat in Slack or wherever, and hopefully folks can help out. If it looks like something's going to take way longer and is going to miss the end of the month, please let us know. Otherwise, keep on hacking, and thanks for everybody's hard work.
A: Let me make sure before I say that... definitely no. There was a conversation in Slack, though, that I wanted to at least surface here, and maybe continue here. Andy, you and Maru were talking about scheduling a namespace to apply to a pcluster — hacking things around so that you could force a namespace to schedule to a particular pcluster. Is that a correct reading? Is that right?
F: Yeah. I was, I don't know, thinking about stuff over the weekend, for whatever reason, and this idea popped into my head: we don't have any security right now around the way that namespace scheduling and syncing happens, in various directions. So I think that, from kcp to a workload cluster, I don't think you can really override that, because the system does it based on the logical cluster name and the namespace name, so that's deterministic, and I don't think you can really hack that.
F: But that translates to data that I believe is stored in an annotation on the namespace in the workload cluster. So if you have permission to edit your namespace, and you change that annotation value to a different workload cluster or a different namespace, then you potentially could hijack things and try to land in — like, sync your status back to — a different location inside of kcp. And so I was just thinking that's one example of a potential attack vector or, you know, escalation, so to speak.
F: So we could — there is a hash, and we certainly can validate that; that's a really good idea. I think we had also discussed potentially changing the strategy for how we name the namespaces in the workload clusters, to not use a hash or to do something different, so I don't know that that's 100% set in stone. But yeah, that's a great idea.
A: Validate that hash on every sync: when kcp receives some update, check and make sure that it's for the correct namespace and workspace name and that the hash matches, right? Don't just trust the hash — also validate that it matches.
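A minimal sketch of that check, assuming — hypothetically — that the downstream namespace name is derived by hashing the logical-cluster name and upstream namespace, and that the syncer reports that name back on every status update; the hash construction here is illustrative, not kcp's actual scheme:

```go
package sync

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// downstreamNamespace derives the name used on the workload cluster
// from the logical cluster and upstream namespace. Real kcp may use a
// different encoding; this construction is illustrative.
func downstreamNamespace(logicalCluster, namespace string) string {
	sum := sha256.Sum224([]byte(logicalCluster + "/" + namespace))
	return "kcp-" + hex.EncodeToString(sum[:])[:12]
}

// validateUpdate is the "don't just trust the hash" check: recompute
// the expected downstream name from the claimed workspace/namespace and
// reject the status update if it doesn't match what the syncer sent.
func validateUpdate(claimedCluster, claimedNamespace, reportedDownstreamNS string) error {
	if expected := downstreamNamespace(claimedCluster, claimedNamespace); expected != reportedDownstreamNS {
		return fmt.Errorf("namespace hash mismatch: got %q, want %q", reportedDownstreamNS, expected)
	}
	return nil
}
```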
F: Yeah — I don't think it is; I don't know off the top of my head. The hash lives downstream in the workload cluster, so it gets translated back to a workspace and a namespace, and the syncer is just connecting to kcp saying: go to this workspace, go to this namespace. So I don't know that the hash really exists at that point, but it's worth exploring, yeah.
I: On the other hand, the approach of the syncer virtual workspace is to provide really only what the physical cluster has the right to see. So, I mean, if, for example, something was changed in the annotations on the physical side that would, you know, change the logical cluster or the namespace and stuff like that — yeah, in any case, there should be some checks in the virtual workspace.
F: If you want, like, true security boundaries, do you need to run syncers that are isolated from each other? Because I could imagine that within an org you could run a syncer, but maybe you don't want somebody in a namespace to be able to sync back to a different workspace in that org.
I: You know, consolidating workspaces on a single — think of a syncer virtual workspace endpoint — but, on the other hand, also having the ability to provide a single syncer with several URLs, several endpoints of syncer virtual workspaces, and having, you know, several sync clients on the downstream side. So we have the ability to do both, and then to adjust according to the use case, and especially according to the boundaries — you know, domains, be it security domains or shard domains or stuff like that. I mean, in the design at least, it's envisioned and open.
N: If I understand correctly, it's not just what the syncer can see — it's that it can modify things that other components, like the scheduler, will pick up, right? So it's not a constraint you can enforce with, like, a pinhole through the virtual workspace, because it's an annotation on something that it's perfectly legitimate for it to change.
A: Yeah — it doesn't have to be able to see it. It can modify that hash to point to something it can't see, but then kcp will start syncing stuff over there, where it shouldn't be able to see, I mean.
I: But to take back the idea of the hash: if we don't keep the hash only on the syncer, but we expect the syncer, when updating status through the virtual workspace, to also add the hash in some, you know, place — an annotation or anything else that the virtual workspace is waiting for — then that would enable doing this check. I mean, we could; it's just a question of convention that the virtual workspace expects the hash on any update.
I: Sorry — either just a hash, a value that is like a CRC check value, or you completely replace the namespace with the hashed one when talking to the virtual workspace. I think both options would work.
F: Yeah. We should file an issue so we don't forget to do that.
A: It's okay: this is recorded; if we ever need to refer back to this conversation, we'll just find minute 36 of the recording. It's perfect. Anyway — Nick has a question: was there ever an alternative to the syncer pattern, maybe a shared key space between logical and workload clusters for certain resources? Also an apology about it being obvious and nonsensical — I don't think there's any such thing.
F: Yeah — I'm sorry, I'm not trying to, like, second-guess what's already, like...
O: No offense, Andy. So the interesting thing is: you have a scheduler on the workload cluster, and you're trying to replicate resources, if I understand it correctly, for different views. So those resources exist — like, the deployment exists in the logical cluster key space as a separate entity from the deployment on the workload cluster, and the syncer handles that relationship, right?
O: Yeah, okay. So what if — I mean, has it been considered that, when you create a deployment in a logical cluster that's, let's say, configured against the workload cluster, instead of doing a replication there's just one resource in etcd, under a shared key, that both the logical cluster and the workload cluster can access? They're two different...
O: Right — like, kcp is smart enough, for certain resources, to do a pass-through for a given resource. Am I making sense here?
A: For certain resources — so I think we will do something like that for pods, for instance; or we've talked about doing something like that for pods, where the actual authoritative source of truth isn't kcp, but then, when you ask about the pod status, it forwards it down to the cluster. Or maybe not — who knows; that's not written in stone. I think another confounding difficulty here is that you're describing it in terms of kcp and one workload cluster.
A: So it's not a bad idea. I think we will probably do something like that for certain resources, but I think only resources like pods, which are — you know, pods have different semantics than deployments.
A: You can't have a pod running on two clusters, and it's pretty clear they're just a different sort of beast than deployments. But I think some of the virtual workspace stuff that David is doing may make it possible — that's where we sort of inject the smarts — where the workload cluster can say "update object in namespace <hash>", and the virtual workspace knows how to reverse that and say: oh, what you meant is, update this in workspace X and namespace Y. And that reminds — sorry, excuse me — Jason?
I: This reminds me a bit of the work that is being done on pod logs, for example, where we are, you know, setting up a bridge between the logical cluster — between kcp — and the physical cluster. And, if I remember correctly, there was the idea, for, you know, user experience, of seeing your logs, for example, related to a deployment, which can be spread across various clusters.
I: Typically we would use virtual workspaces: you would just have, you know, a new type of workspace, which is just a link, and where you can see the pods — when in fact the real pods are living in the workload clusters. But then you would have a kcp virtual workspace that would just gather the information, present it, and allow you to have the right URLs pointing to the logs.
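A hedged sketch of that bridge: an HTTP handler that recognizes a pod-log request on a hypothetical virtual-workspace path and reverse-proxies it to the physical cluster actually running the pod. The path scheme, the placement lookup, and the target URL are all invented; auth and TLS are omitted:

```go
package podlogs

import (
	"net/http"
	"net/http/httputil"
	"net/url"
	"strings"
)

// locate maps (workspace, namespace, pod) to the physical cluster API
// server actually hosting the pod; in a real system this would come
// from the scheduler's placement records. Entirely hypothetical here.
func locate(workspace, namespace, pod string) (*url.URL, error) {
	return url.Parse("https://pcluster-us-east.example.com:6443")
}

// LogProxy serves a (hypothetical) virtual-workspace path of the form
//   /services/podlogs/<workspace>/<namespace>/<pod>
// by reverse-proxying to the physical cluster's native pod-log endpoint.
func LogProxy(w http.ResponseWriter, r *http.Request) {
	parts := strings.Split(strings.Trim(r.URL.Path, "/"), "/")
	if len(parts) != 5 || parts[0] != "services" || parts[1] != "podlogs" {
		http.Error(w, "bad path", http.StatusBadRequest)
		return
	}
	workspace, namespace, pod := parts[2], parts[3], parts[4]

	target, err := locate(workspace, namespace, pod)
	if err != nil {
		http.Error(w, err.Error(), http.StatusBadGateway)
		return
	}

	// Rewrite to the standard Kubernetes pod-log endpoint downstream.
	r.URL.Path = "/api/v1/namespaces/" + namespace + "/pods/" + pod + "/log"
	httputil.NewSingleHostReverseProxy(target).ServeHTTP(w, r)
}
```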
A: Yeah — Nick, go ahead.
O: Just a quick follow-up question, if that's okay. Yeah, so — what is the...? Do you have to add a controller, or add a shim or network configuration, to your workload cluster to make it work properly with kcp?
A: I think the current target is that you would install the syncer deployment, and RBAC for it as necessary, and then, when it wakes up — when it starts for the first time — it connects to kcp and says: I'm the syncer for cluster X, give me work. And it's also a two-phase thing, because you need to be able to create this WorkloadCluster object in kcp to say: this workload cluster is named X.
A: Give it this work. That's a very, very high-level summary, which is probably 20 percent inaccurate, but yeah — I think no one is screaming at me that that's wrong, so let's assume it's mostly right.
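Loosely sketched, that two-phase flow looks like this; every name below (the `WorkloadCluster` shape, the announce step) is a stand-in for illustration, not actual kcp API surface:

```go
package syncer

// Illustrative two-phase bootstrap as described above; every name here
// is a stand-in, not real kcp API surface.

// Phase 1: an admin registers the cluster with kcp.
type WorkloadCluster struct {
	Name string // e.g. "cluster-x"
}

// Phase 2: the syncer, deployed on the physical cluster with the RBAC
// it needs, dials back to kcp and asks for the work assigned to it.
type Syncer struct {
	KCPServer   string // e.g. "https://kcp.example.com"
	ClusterName string // must match the registered WorkloadCluster
}

// Run would connect to kcp, identify itself as the syncer for
// ClusterName, and then watch for resources scheduled to it, syncing
// spec downstream and status back upstream.
func (s *Syncer) Run(stopCh <-chan struct{}) error {
	// 1. authenticate to s.KCPServer (credentials omitted in this sketch)
	// 2. announce: "I'm the syncer for cluster " + s.ClusterName
	// 3. watch assigned resources; sync down, report status back up
	<-stopCh
	return nil
}
```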
O: All right. And — I'm sorry, this is, like, my final question: what is the tolerance to relocating all... well, actually, this is not a good question, so I redact it. All right, thanks.
A: All right, yeah. I mean, don't think that any of your questions are not worth asking — feel free to ask, either here or on Slack or anywhere else. And if it's a question that we should have documentation for, it's a good opportunity to go write documentation. So, yeah — did we... we came away with some action item to describe the problem and potential solutions, right? Maybe we can solve this with virtual-workspace shenanigans, or at least be aware that there's a problem, right?
A: The normal thing we do with any remaining time is go through un-milestoned issues. Any business before we go and do all that? All right, in that case, feel free to drop if this is uninteresting to you, or if this is, you know, not how you want to spend the next 13 minutes. Try to — oh yeah, I should present this as well.
A: This is — am I presenting? There we go, there we go. This is un-milestoned issues. "Test two" sounds like fun.
F: I think the test makes sense; I don't think that we have time to do it, unless folks have spare time and aren't already assigned to stuff for 0.4. So I would put it in either TBD or 0.5.
A: Audit events — it's a link to Slack. Do you think this is a TBD or a 0.5?
F: I think it would be — if we're gonna start using audit logs, then we'll need this. So I'll be aggressive in adding things to 0.5, and then we can always kick them out later if we need to.
F: This one, I think Sean was going to work on, but he's got the API bindings and the validating webhook stuff first, so...
F: At least that was the debate in the PR, so I think we need to resolve that before we can really decide what to do.
I: The crd-puller, I mean, as a command line, is only useful if you want to have your APIs before joining your physical cluster, right. Then you can — you know, that's the only...
F: Yeah — anyway, yeah. Maybe we should ask the reporter if, or how, they're using it; or, if we got rid of it, would that be a huge burden?
I: Yeah — and to add to this, if I'm not mistaken, in the PR it's already working; the only thing is that, instead of having the, you know, resources-to-sync arg, you just pass, you know, command-line args without an arg name, right. So, I mean, the feature is already there; it's just that, in terms of the command-line UX, it's not 100% good, because it's not the same arg name as in other commands.
A: Yeah — if it's just a UX bug in this plumbing command line, I think I care a lot less about it. But for milestones: maybe TBD, just because we think we might close it anyway; or 0.4, so we get credit for closing it.
A: Okay, I will make a note to respond to that.
A: This one's from before — oh, I think you asked for clarification. That's probably not new flakes... 0.4? Or is this — oh, still occurring, yeah.
A: Kyle, do you think you want to do this? Or does somebody reading this, hearing this, want to do it? This might even be a good first issue.
A: I would say, Kyle: if you think you want to do it in the 0.4 time frame, I will point it for 0.4; otherwise, probably 0.5.
A: All right, let's just put it in TBD, to put it somewhere, yeah — so it doesn't keep coming up in this query.
J: Probably not, because that — yeah, that kind of requires... I was planning on doing a wholesale refactor of the syncer, and that was going to be part of it, but I don't think that's going to be a near-term task.
F: Do we still want to do this? I think when we get the work that Fabian and Varsha and Nick are doing on code generation and scoping, that will maybe unlock some work we can do here to try and fix some of that, but it's definitely not gonna happen in 0.4.
A: Also not showing up here — I will do this.
A: The app — yep, that's my card; it matches up. All right, with that we're at the top of the hour, but thank you, everyone, and I'll see you on the Slack later.