From YouTube: Community Meeting, November 29, 2022
A
All right, hey everybody. Today is November 29th. This is the kcp community meeting. If you are interested in adding anything to the agenda, the issue is 2428, up on the screen here, and, given that I just created this issue a couple of minutes ago, it does not have anything on the agenda. But I know David had mentioned, pre-recording, that he's got a demo that he can show. So, David, are you ready for that?
C
Okay, yeah, it's a bit on the fly, but...
B
So I would, yeah, well, sure: I would demo what I call coordination controllers and coordination controller helpers.
B
For those of you who have been on kcp for some time: there was initially something called the deployment splitter, and the goal of this was mainly that you put a deployment into a workspace and it would be sprayed among two sync targets, two physical clusters. Maybe half of the replicas would go to one sync target and the other replicas to the other sync target, and, sort of, you know, magically, every deployment on every physical cluster would be...
B
You know, started with that number of replicas, and then there would be some sort of summarization: mainly, the status of the deployment in kcp would contain the sum of all the replicas on the two sync targets.
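The split-and-summarize behavior described here can be sketched as two plain functions. This is a hypothetical illustration of the idea (the function names are invented, not the actual kcp code):

```go
package main

import "fmt"

// splitReplicas spreads total replicas across n sync targets,
// giving the remainder to the first targets (e.g. 11 over 2 -> 6 and 5).
func splitReplicas(total, n int) []int {
	out := make([]int, n)
	for i := range out {
		out[i] = total / n
		if i < total%n {
			out[i]++
		}
	}
	return out
}

// sumAvailableReplicas is the reverse direction: the status of the
// deployment in kcp holds the sum of what each sync target reports.
func sumAvailableReplicas(perTarget []int) int {
	sum := 0
	for _, r := range perTarget {
		sum += r
	}
	return sum
}

func main() {
	fmt.Println(splitReplicas(11, 2))              // [6 5]
	fmt.Println(sumAvailableReplicas([]int{6, 5})) // 11
}
```

The same pair of operations shows up later in the demo, where 11 replicas end up as 6 on the west cluster and 5 on the east cluster.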
B
That was something that was working, you know, with a very different implementation from what we have today, mainly really prototypish and hacky. And now a number of you know about the work on transformations, on maintaining syncer-specific views of an object, for every resource which is synced to one of our sync targets. Based on this work, it was possible to rebuild the deployment splitter as what we call a coordination controller.
B
So I will mainly just show you, very quickly, some code, and that would be, you know, sort of the model for coordination controllers in the future. And obviously we could have coordination controllers for PVCs, for example: PVCs and PVs. Prepare the PVC, link it to the right PV and move it to a distinct location.
B
For example, all these things that will take care of some resource: prepare the resource for syncing according to the right place, the right sync target, and possibly delay the syncing or, you know, change the status. Even for ingresses, for example: you would take the status of the Ingress as it is on the physical cluster and, obviously, change the status in the kcp object, because you don't want to leak the URLs of the physical cluster; you want to provide your own URLs.
B
For example, that's what the hybrid cloud gateway has been doing, also, in its own way, in the past. So all these types of controllers are coordination controllers: they prepare the kcp resources for syncing, and they get the result of the syncing to update the main resource as it was in kcp. And, finally, what we have in such a coordination controller is sort of two queues, or two reconciliations, in the two directions: one from the kcp object to this...
B
...what we could call the syncer view: what the syncer will see and will try to sync. That's mainly what I have in the "process upstream view" here. I get my deployment here, you know, on a given kcp workspace, and I get the syncing intents, mainly the various sync targets to which I would like to sync this object. And, based on that, I would prepare some diff annotation. For now it's experimental, but there would be, in the future, many more ways to define transformations on objects and resources to be synced.
B
I add an annotation to instruct that, in fact, you don't run the total number of replicas, but just a number of replicas which corresponds to the total number of replicas divided by the number of sync targets: mainly, just spread the replicas among the sync targets. And so let's just put the annotation, one annotation for every sync target that we want to sync to, and then just update the deployment.
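A minimal sketch of "one diff annotation per sync target" could look like the following. The annotation key prefix used here is invented for illustration; the real kcp annotation name differs:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// buildDiffAnnotations writes, for each sync target, an annotation that
// overrides spec.replicas with that target's share of the total.
func buildDiffAnnotations(total int, syncTargets []string) map[string]string {
	anns := map[string]string{}
	n := len(syncTargets)
	for i, target := range syncTargets {
		replicas := total / n
		if i < total%n {
			replicas++
		}
		patch, _ := json.Marshal(map[string]int{"replicas": replicas})
		// Hypothetical key: the real annotation name in kcp is different.
		anns["experimental.spec-diff.example.dev/"+target] = string(patch)
	}
	return anns
}

func main() {
	fmt.Println(buildDiffAnnotations(11, []string{"east", "west"}))
}
```

The virtual workspace would then apply each target's override on top of the upstream object, so each syncer sees its own replica count.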
B
That's typically the typical thing that a coordination controller would do: prepare transformations that will be applied, so that everything going through the virtual workspace will see its own version of the deployment, with its dedicated number of replicas. And the other way around is when I get an object from the syncer, that has been synced by the syncer.
B
By summing all the replicas that were reported by every syncer, by each syncer; and the same for conditions as well: I would merge the conditions of the various views.
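The upstream reconcile direction, merging what each syncer view reported, might be sketched roughly like this. The merge rule below (a condition is true only if every view reports it true) is a deliberate simplification; the real kcp merge logic may differ:

```go
package main

import "fmt"

// Condition is a simplified stand-in for a deployment condition.
type Condition struct {
	Type   string
	Status string // "True" or "False"
}

// mergeConditions merges the conditions reported by each syncer view:
// here, a condition type is true upstream only if every view that
// reports it reports "True".
func mergeConditions(views [][]Condition) map[string]bool {
	merged := map[string]bool{}
	seen := map[string]bool{}
	for _, view := range views {
		for _, c := range view {
			ok := c.Status == "True"
			if !seen[c.Type] {
				merged[c.Type] = ok
				seen[c.Type] = true
			} else {
				merged[c.Type] = merged[c.Type] && ok
			}
		}
	}
	return merged
}

func main() {
	east := []Condition{{"Available", "False"}, {"Progressing", "True"}}
	west := []Condition{{"Available", "True"}, {"Progressing", "True"}}
	// "Available" ends up false upstream because the east view reports False.
	fmt.Println(mergeConditions([][]Condition{east, west}))
}
```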
D
Well, how are these syncer views maintained? Are they stored somewhere?
B
Yes, they are stored. I mean, currently they are stored in annotations on the main kcp resource. That is mainly, in the TMC case, to have everything self-contained in one resource, which is obviously much easier to manage when you delete the resource: for example, you don't have to, you know, garbage collect some other objects that would be related. But in full generality it's just an implementation detail. You could, you know, store those things somewhere else, for, you know, scaling purposes or for any other purpose, but obviously the implementation would be...
B
You know, a bit more tricky. But, you know, we could even abstract this; the main mechanism would be the same. Does it answer it? Yeah? Thank you. So, mainly, with those two reconcile functions you can achieve what I'm going to show you now. So here, let me just create a user workspace. Okay.
B
Yes, by default. So now, just to show you...
B
So I have two sync targets here, in the location workspace.
B
If I show you, so, on those sync targets, mainly I'm pointing to the basic standard kube API export. If I come back, I have the virtual workspace for each of those...
B
...workspaces. I will bind this, the location workspace that contains my sync targets, and create a first... I bind with the name west and create a placement here, and that will only select the sync targets with the region west.
C
It's an old one, so...
B
Right. So I will bind another placement as well for the east location, and, yes, here I have this deployment coordinator running, the syncer for the east sync target, and the syncer for the west sync target.
B
And get deployments here. So for now there is only one deployment; it has been scaled up, and obviously, if we look here directly on the east kind cluster, I'll find my deployment.
B
Here, here you can see that the deployment has been scaled with one replica. So now let me, in kcp...
B
...it is reconciled on the physical clusters, the pods are started, and then it's reconciled back by the coordination controller to kcp. And now, if I...
B
If I look there, I have six replicas for this deployment here, on the west physical cluster, and if I go on the east physical cluster I have five replicas. So, I mean, they are really spread, and the deployment coordinator, of which I showed you the code, mainly summed up the available replicas and, in fact, everything that is in the status, and created the dedicated status in the upstream resource.
B
...a JSON map, and you have a number of fields. By default, for now, it's only the status, because it's only the status which is summarized and, you know, brought back, in fact brought back to kcp from the syncer. But in the future, I mean, all the logic is generic.
B
That means that, in the future, you would have the ability to override or customize, you know, per resource or per coordination controller, the various fields that you want to summarize, to bring back from the syncer to kcp on syncer views. So if we think of services, for example: typically the cluster IP, which is set when the service is created on the physical cluster. It would be very interesting to bring that back in the syncer view, so you would have a distinct field here.
B
You know, typically spec.clusterIP, something like that. And then, from this, the coordination controller could grab the various IPs of every service, you know, the IP that the service has on every sync target, and build its own IP, possibly through some sort of setup, Skipper for example, or anything like that.
B
We have those annotations here, with the experimental prefix, but that's really the default. In the future, the idea is that you would be able to define transformations, for example through CEL, that would be related, you know, linked, to the coordination controller. So every coordination controller could define its own transformations that would be applied by the virtual workspace. But for now we have something that, you know, basically works for the simple cases, just by letting the coordination controller...
B
A very good question, in fact. It's something I've been thinking about today: that, you know, I should probably rename it. The thing is that, you know, it has to be renamed. It's an implementation detail, sorry, but it also has to be renamed in the kube fork. I can explain that after; it's quite historical, in fact, a bit legacy. Initially it was a JSON patch.
B
The diff of the whole object, between the syncer view and the main view: that was the first try, you know, months ago, for this work. But obviously it didn't work, because it was not possible to, you know, maintain those views in a consistent way. And then we switched to a diff which is not really a diff, you know, per se, but mainly overridden fields: you have the name of the field, or, you know, the path of the field, and then it's still a diff.
B
You know, in an abstract way it gives what is different in the syncer view from the main object, but the way it's applied is mainly just that those fields will be overridden on top of the current value of the upstream object. Does it make sense? So probably we should, I mean, we could rename this annotation here and give it a better name, like, you know, "syncer view" or "syncer overridden fields", or... I'm, I'm...
B
Sorry, it has to be managed by the kube fork as well. A small detail is that we want to be able to update this annotation when you modify, when you update, both the spec and the status. So, for all the objects that have a status subresource, we have to enable the fact that this annotation can be changed even if you only update the status.
D
When I saw this the first time it kind of threw me off, because I thought you had some kind of main view and then you're just reporting... it's not the difference. The point of that being, maybe, saving storage space or something like that, right? Because maybe only one or two fields, or some few things, change. So you have, like, you know, your main thing, and you just report the diff, so you basically save storage space. That was my initial thought when I saw this, but it looks like it's the full status, right?
B
Well, for now, yes, but in fact, in the future, I'd say yes and no. There is no, I mean, there is no strong requirement that it would be the whole status. I mean, to make it simple, for now it is the status, but you could completely say that you want to bring back only status.availableReplicas, for example. You know, it would be part of what I mentioned previously:
B
The fact that every coordination controller, in the future, will be able to customize the way fields are summarized, are brought back from the syncer to the syncer view in kcp. This is what is called, in the code and in the design, the summarizing rules. That's the list of the paths to each field that you want the syncer to bring back from downstream to the syncer view upstream.
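The "summarizing rules" idea, a list of field paths to copy from the downstream object into the upstream syncer view, might be sketched roughly like this. The rule format and helper names are invented for illustration; the real kcp implementation may look different:

```go
package main

import (
	"fmt"
	"strings"
)

// summarize copies only the fields named in rules (dot-separated paths,
// e.g. "status" or "spec.clusterIP") from the downstream object into the
// view that is brought back upstream.
func summarize(downstream map[string]any, rules []string) map[string]any {
	view := map[string]any{}
	for _, rule := range rules {
		copyPath(downstream, view, strings.Split(rule, "."))
	}
	return view
}

// copyPath copies a single (possibly nested) field from src to dst.
func copyPath(src, dst map[string]any, path []string) {
	val, ok := src[path[0]]
	if !ok {
		return
	}
	if len(path) == 1 {
		dst[path[0]] = val
		return
	}
	srcChild, ok := val.(map[string]any)
	if !ok {
		return
	}
	dstChild, ok := dst[path[0]].(map[string]any)
	if !ok {
		dstChild = map[string]any{}
		dst[path[0]] = dstChild
	}
	copyPath(srcChild, dstChild, path[1:])
}

func main() {
	downstream := map[string]any{
		"spec":   map[string]any{"clusterIP": "10.0.0.7", "ports": []int{80}},
		"status": map[string]any{"availableReplicas": 5},
	}
	// The default TMC-style rule is just "status"; a service coordination
	// controller might additionally bring back "spec.clusterIP".
	fmt.Println(summarize(downstream, []string{"status", "spec.clusterIP"}))
}
```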
E
Thank you. So, yeah, that looks very good, David. I wanted to ask about the dependencies, like secrets and config maps.
B
Yes. So, I mean, if I understand the question correctly, because the sound was not very loud: the dependency between resources, in terms of syncing, is not something that is taken into account at this layer.
B
Here we are really at the single-resource, individual-resource level. But probably, at least in some of those use cases, that would be something managed by coordination controllers. If I take the example of storage, for example:
B
Typically, a coordination controller, one or two coordination controllers, would take care of following the link between the PV and the PVC, and syncing the PVs on the same sync target where the PVC is. So you would have a coordination controller which has the knowledge of, you know, the specific logic for the specific type of resource, and those coordination controllers would drive how the resources would be synced, how and when they would be synced, and possibly whether they need to be transformed, to keep some consistency.
B
Typically, in the case of PVC to PV, we have to maintain the links between both in a consistent way, both upstream and downstream, and for this we need to use transformations.
E
Another second question, if you don't mind. So there are a few resources, like config maps and secrets, that don't really have a status, and people also develop custom resources this way. Would there be a generic mechanism for those, for example?
B
In fact, you know, there are two things. For the upstream-to-downstream syncing, you could completely decide to add this annotation, you know, to add a transformation annotation, even on a config map, and have a single config map synced with different, you know, values, or updated values, on every sync target, according to the sync target. That works as well. As for the status: there is nothing that requires the status in all this work.
B
We just take this into account, and when we summarize a value from downstream to upstream, we have to know if this value is part of the status or not, because then you have to call update status instead of update. But that's the only thing. If you decide, you know, you have a config map or a secret, or any other object that doesn't have a status, and you want to bring back some field from downstream to upstream...
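The one decision described here, whether a summarized field forces an update-status call instead of a plain update, could look roughly like the following. This helper is hypothetical, illustrating the decision, not the actual kcp code:

```go
package main

import (
	"fmt"
	"strings"
)

// needsStatusUpdate reports whether any summarized field path lives under
// the status subresource. If so, the controller has to call UpdateStatus;
// otherwise a plain Update is enough (e.g. for a ConfigMap "data" field).
func needsStatusUpdate(rules []string) bool {
	for _, r := range rules {
		if r == "status" || strings.HasPrefix(r, "status.") {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(needsStatusUpdate([]string{"status.availableReplicas"})) // true
	fmt.Println(needsStatusUpdate([]string{"data"}))                     // false
}
```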
B
The only thing is that you would add the description of this field in the summarizing rules I mentioned previously. But it doesn't need to be the status; it can be something else, I mean, in the future, in full generality. Of course, in the implementation we started with the basic case which is envisioned in TMC, which is mainly: sync the spec, and bring back the status. But it's not limited to that.
B
I mean, by default, if you have a custom resource, it will work. You can define your transformations for downstream to upstream, and possibly you have nothing to bring back in a syncer view, you know, if there is no information that you want to bring back.
B
If you have a status on your custom resource, then by default it will be brought up, because these are the default summarizing rules that we started with. And, in the future, as soon as we introduce the APIs to allow customizing those, you know, summarizing rules, you would be able to even bring back a spec field from your CRD, from your custom resource, from the syncer to kcp. I mean, it's completely open, in fact.
C
Did I answer this time? Okay, yeah.
C
Welcome, Steve. I think it was you.
G
You mentioned that there's a carry in the kube fork that updates annotations irrespective of whether spec or status is changing, and I'm wondering if you can talk about why that's necessary, and whether or not that, like... at first blush, that would double the amount of writes happening when status updates occur, right?
B
So that's mainly... so, I mean, as a start, maybe I'd say that it's only for this annotation. I mean, it's really only for this annotation. So it's not a general mechanism where we allow, you know, any sort of annotation to be modified at the same time. But for this annotation: since, you know, everything is done in those transformations in the virtual workspace, those transformations are done completely on the fly.
B
So that means that when you do an update status from the syncer, it's completely transparent for the syncer; it just thinks it updates the status. What I want is that when you update the status on the object, it would be consistent: at the time the syncer receives the, you know, the return of the update status call, the upstream object should be consistent, for any other third-party component, and especially the coordination controller.
B
No, it's... it's quite an old, I mean, a change we had discussed some months ago, I think, when testing the transformations, you know, with the previous round, and it's mainly in the strategy of the registry.
G
I'm a little concerned about that patch; that seems very deep and low level.
G
Is it... I can't remember: will that change the generation?
F
I don't think so. It depends, again, on the registry, but the typical code is looking at changes in the spec, just the spec. Okay.
F
Can I also ask, just on this point: why was it done at the registry strategy level, instead of in the handler for the update operations?
B
I mean, that's quite old, an old change, but obviously, I mean, as far as I remember, because it was months ago, the goal was to do it precisely at the place where annotations are dropped. When you... I can show you, maybe.
B
Yeah, I mean, I'm open to any other options. At the time we discussed that, if I remember correctly, with Stefan, I was not aware of other options, but, well...
F
If you define the semantics not as a diff but as a, you know, "here's what it is", right, then you don't have to change it when the upstream object's status changes.
B
I'd like this to be directly reflected in the annotations, but without...
F
I said... oh, I think maybe I misunderstood what you said. So, yeah, when you said something about the syncer thinking it's only updating the status: what you're really saying is that the issue is that, in the kube API machinery, when you update the status you're doing a write to the status sub-object, and that can't update the annotations in the main object. Yes.
A
Yeah, it's at least worth proposing it and seeing.
B
Yeah. On the other hand, I mean, in the current case it's really related to, you know, the very specific semantics of storing a syncer-specific view. I mean, doing things in a way that the syncer thinks it's doing an update status, and from its point of view, from the syncer world, it's effectively an update status, right? But, finally...
G
I remember the historical conversations were about potentially sharding spec and status into different parts of the key space, and different storages, and stuff, which never happened. I think it's maybe also worth thinking through.
B
Yes, and you're right, Steve, that possibly we have to revisit that, now that, you know, things have been made a bit more precise about this: the real content, you know, of this annotation. As I said, initially it was a real, complete diff, you know, between the syncer object and the upstream object, but that was not really maintainable, you know, for more than just a demo.
B
Yeah, that's a bit more... I mean, not tricky, but a bit less simplistic than that, in the sense that there is also a mechanism of what we called promotion, that we had discussed with Stefan also some months ago. In the sense that, by default, if you have only one sync target, the plan is not that you would have this full status in an annotation, and then, finally, your coordination controller...
B
So when you do an update status from the syncer, and there is only one sync target, the virtual workspace knows that, and then the value is effectively promoted to the real status. Instead of having the complete status here, you have a flag, you know, something like that; I mean, I don't know the value exactly. But so, in some cases, when you have only one sync target, which is still, at least when you have only one location, one placement...
B
Sorry. It's still the typical case: according to the summarizing rules, when it makes sense, once again, the value of the status would effectively be promoted to the status, through an update status. But in other cases we would... so, I mean, according to how it works now, possibly, as you said, we could revisit the real requirement and maybe, in most cases, change the update status, which is called on the delegate client from this syncer virtual workspace, to do a simple update.
G
Yeah. But anyway, I just wanted to say I was surprised to hear that; it might be worth spending some time thinking about it.
A
David, that was really cool, and a very good conversation and discussion from everybody, so thanks. I am not seeing anything else on the agenda, so I'm happy to go through the new items in the project and just do some new issue triage here. We've got seventeen, I think, if I'm in the right...
A
Alrighty. So this first one is... oh, right, the potential crash in here. Kyle, it's been a couple weeks since I've thought about this. Is this still an issue, or did you track down if we were actually panicking and crashing, or what's the latest on this one?
J
Oh, it's been a couple days since I've thought about this. What was it... I think that we mitigated the crash, yeah. So I don't know if it's as big or as urgent an issue, but I think there's still an open item: we could potentially fix the issue in kube, because there's a spot where it does panic, but it would have to be fixed... I think it would need to be fixed upstream, I would...
A
Yeah, I'm sort of tempted to close this, but, Sergius, go ahead. Yeah.
I
As far as I recall from the Slack discussion thread, one thing we wanted to make sure of is that we have proper panic handling set up, in a way that, like, the panics don't bubble up in a way that makes the process crash. I think that's maybe at least something that we should ensure, and make sure that we have the right hooks in the code, before we close out this issue finally. Well...
A
I'm happy to close this one and replace it with a new one about making sure that we, yeah, don't crash. Yep, sounds good. Okay, panic!
A
We have a flake around this one; I'm going to put it in Next because, yeah, this one... oh, sorry, I don't think we really need to go through it. I think, just, yeah, I'm gonna put it in Next. Flakes I'm just gonna stick in Next, yeah.
C
Knowing that it is probably fixed, but, you know, let's wait a bit, and if we don't see it come back...
A
I'll add you as a reviewer, yes. And this is in progress or in review. Nolan, I think you were looking at this, right?
A
Yes, starting to. All right, folks, if you wouldn't mind: I know we haven't really been doing this a ton, but if you can just flip the statuses for anything that you're looking into, that'd be awesome.
A
We do still need an enhancement proposal template. I think probably basing it off the Kubernetes KEP template would make sense. I'm going to put this in the backlog, and if anybody is interested in helping out here, please either assign this to yourself or reach out, and we'll get you helping out.
A
David, can I assign this to you? To... actually, I don't...
A
Okay, yeah. So we definitely want to split the TMC code out.
G
Just for clarification: is Next currently 0.10 or 0.11?
A
Doesn't
necessarily
mean
zero
ten!
No,
this
this
particular
epic,
is
definitely
not
zero.
Ten,
that's.
A
We haven't really been doing any sort of formal process around the different columns for the statuses. I mean, typically, if we were doing sprints and kanban style, like, one or both: you'd, you know, when you finish a task, you would go look at what's in Next, and you move it from Next to In Progress.
A
Maybe we need to get rid of some of our status columns, or be a bit more diligent in how we're using this. But basically the goal here is to review everything that's in New; if it is legitimate, that eventually, at some point, we want to do it, it goes into Backlog. For things that I know we want to do sooner than just eventually, I'm putting them into Next. That's kind of what I'm doing right now.
B
Yeah, you'll have a PR already, so...
C
We can complete it later on; it's part of the... yes.
A
Do this regardless of the feature flag, because there's not really any way that, client-side, from the kubectl kcp sync plug-in, you're going to be able to tell if the feature flag is enabled or not easily. So, anyway, I put it in the backlog, and that's it for here. So we've got, like, a couple minutes left. Any last-minute topics? If not, happy to give you all a few minutes before your next meeting, if you've got one.