From YouTube: Community Meeting January 11, 2022
A
Hey everybody, today is January 11th, 2022. This is the kcp community meeting. We don't really have much on the agenda today, although I do see that Steve just added something. So let me screen share this. Bear with me just a second here.
A
Okay, get this out of the way, okay. So the first item on the agenda is Steve, with an update on single-integer resource version for workspace-scoped client interactions. Before Steve gets started, let me just paste the link to this particular page in the chat. So if you do have an agenda item, please feel free to add a comment and we will see if we have time to get to it. Hopefully we will. So Steve, over to you.
B
Cool, thanks. So the background for this topic is: when you're talking to kcp and you're talking across multiple workspaces, we end up needing to have a more complicated resource version to hold the state about which specific workspaces you're talking to. But something that we were trying to keep is a single integer resource version when you're just talking to one workspace by itself.
B
So if I have a config map, for instance, and I know that it's at resource version 10, I can't say anything about a secret that I have that's at resource version 100. So after rereading the code in kube yesterday that parses resource version, I don't think there are any in-kube controllers that are doing that particular type of comparison. But I'm hoping that this maybe jogs people's memory, or that they think about places where they're using and parsing resource version. So if there are examples of people doing that, I'd be keen to know.
C
If you're somehow keeping a clock and watching events, or you have watch events coming in, those continue to be in the right order. But if you're just looking at the resource versions on the objects in the watch events, then you can't say anything?
B
Yeah, so the equality comparison always continues to work, and in the scheme that we've figured out, we can also have comparisons of resource versions for a given object. So I know that this version is newer or older than this other version. It's across objects that that breaks down.
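The rule Steve describes can be sketched as a tiny comparison helper. This is illustrative only, not a kube or kcp API: Kubernetes treats resource versions as opaque strings, and any integer ordering is only meaningful for observations of the same object in the same workspace.

```go
package main

import (
	"fmt"
	"strconv"
)

// compareRV compares two resource versions for the SAME object.
// Parsing resource versions as integers is only safe inside the
// storage layer, and the ordering is meaningless across different
// objects; only equality is generally valid.
func compareRV(a, b string) (int, error) {
	ai, err := strconv.ParseUint(a, 10, 64)
	if err != nil {
		return 0, err
	}
	bi, err := strconv.ParseUint(b, 10, 64)
	if err != nil {
		return 0, err
	}
	switch {
	case ai < bi:
		return -1, nil
	case ai > bi:
		return 1, nil
	}
	return 0, nil
}

func main() {
	// Valid: the same ConfigMap observed at two versions.
	c, _ := compareRV("10", "12")
	fmt.Println(c) // -1: the second observation is newer

	// Equality checks remain valid for any single object.
	c, _ = compareRV("100", "100")
	fmt.Println(c) // 0
}
```

Comparing a ConfigMap at version 10 with a Secret at version 100, as in the example above, is exactly the case where this helper's result says nothing.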
A
Thanks, Steve. Do you want to post this anywhere for broader...? Yes.
B
Yeah, today I'm going to try to write up the actual scheme that we're thinking of using, and then something for sig-api-machinery.
D
Oh yeah, so quick little background on that. Stefan was nice enough to set us up a project within GitHub where we've kind of aggregated the items that we came up with in the work packages document into a milestone. I don't think it's documented anywhere other than a Slack thread at this point, but the hope was this would be achieved by the end of January.
D
So my question for the group was: do we want to take that same approach to define what the next refinement of that story is? As in, work it in the work packages document, change the story, pull in some of the midterm goals, and then move it into another milestone? Or is there another approach people would like to take, or are we off track?
B
...a spreadsheet somewhere with the different milestones.
D
Okay, cool. Well, if folks like that approach, then I'll just throw a section in here where we can do that story refinement, and I'll send a reminder out to the Slack channel on that, and we can discuss it in the next meeting, or see where we stand based on the conversation that happens in the document.
A
That
works
for
me
so
thumbs
up
from
steve.
So
if
there's
no
more
comments
on
milestone
planning,
we
can
move
on
to
what
staphon
volunteered
me
for,
which
is
an
update
on
trying
to
make
listers
and
controllers
and
clients
work,
both
upstream
and
downstream,
in
a
kubernetes
environment
or
a
kcp
environment
with
logical
clusters.
A
I still don't have a demo to show because I'm in the middle of some code changes, but I can provide an update. So if you were here last week, you may remember that I was adding methods to listers where, if it originally had a List method, there was now a ListWithContext method that took in a context, and the same thing with Get. And it was sort of magically able to determine which logical cluster to go look in, based on some setup that the developer would do to make all of that transparent.
A
So let me share VS Code and make the font bigger so you all can see it. All right. So this is some code for a controller in the API extensions API server in Kubernetes. This particular controller handles CRDs and whether or not their names collide or they're available, among other things. And so one thing that I needed to do here was similar to last time.
A
I need to be able to list all CRDs, but not truly all CRDs, because we don't want to go across every single logical cluster. In logical cluster A and logical cluster B, it's totally fine if each one declares the CRD named foo, and they can be different and independent from each other, with different schemas; they truly are independent. And so, in order to do that, I need to be able to do a list call against the lister, but only see the logical cluster that we are looking at.
A
So what I had before is this code that's commented out, around decoding a key and then creating a sync context. So I've done something a little bit different, but the idea is the same. There is a developer-supplied function called scope-from-key. The idea with a scope here is that it's meant to subdivide a space into scopes or portions. And so here, the key for a normal Kubernetes environment would just be the name of the custom resource definition. So, you know, this could be clusters.x-k8s.io.
A
And, sorry: clusters.cluster...
So
in
a
kcp
enabled
environment,
this
key
is
actually
going
to
have
whatever
the
logical
cluster
name
is
so
this
could
be
my
logical
cluster
and
then
the
convention
that
I
have
for
right
now
is
it's
just
divided
with
a
dollar
sign.
So
the
first
part
is
the
logical
cluster
name,
and
the
second
part
is
the
key
name.
So.
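The key convention just described splits cleanly. A minimal sketch of what a scope-from-key function might do, with illustrative names (the real prototype's signature may differ):

```go
package main

import (
	"fmt"
	"strings"
)

// scopeFromKey splits an informer key following the convention
// described above: in a kcp-enabled environment a key looks like
// "my-logical-cluster$<name>", while a plain Kubernetes key has no
// dollar-sign separator and belongs to no scope.
func scopeFromKey(key string) (scope, rest string) {
	if i := strings.Index(key, "$"); i >= 0 {
		return key[:i], key[i+1:]
	}
	// No separator: a normal single-cluster key, empty scope.
	return "", key
}

func main() {
	s, k := scopeFromKey("my-logical-cluster$widgets.example.io")
	fmt.Println(s, k) // my-logical-cluster widgets.example.io

	s, k = scopeFromKey("widgets.example.io")
	fmt.Println(s == "", k) // true widgets.example.io
}
```

The empty-scope fallback is what lets the same code path serve an unmodified Kubernetes deployment.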
A
Oh, my code completion is not working, or jumping is not working. The scope has a Name method; scope is an interface, and the name in this case will be my-logical-cluster. So this code below is a little bit different from what we had before: the mutation cache.
A
There now needs to be one per scope, and then I create a scoped instance of the controller, passing in the scope, passing in a scoped client, a scoped lister, and the specific mutation cache for the scope in this case, and then I call sync on this new scoped controller. And what you'll see when I get to the sync function, which is here, is that it can call Get on the key. Oh, I might not have done this right, but it would be able to call Get on the key and it would function just fine.
A
The
this
would
be
a
scoped
get
based
on
the
logical
cluster
and
then
any
any
actions
that
things
like
update
status.
This
would
be
scoped
as
well,
so
it
would
go
against
the
correct
logical
cluster
and
not
against
a
default
or
an
empty
logical
cluster,
and
this
will
work
for
namespace
things
as
well.
So
there
is
a
like
in
the
I'm
in
the
wrong
see.
A
Okay, so just to show some more changes: I was working on the clients, and I was playing around with deployments in particular, for the deployment splitter. And so there's now this new scoped Deployments method on the clientset, and what this will give you is a deployment scoper that can then give you deployments for a namespace.
A
So what I have been working on here is...
A
A second to find it... I guess it's not in here. Well, I was working on the namespace controller and the namespace resources deleter, which need scoped metadata clients and scoped listers. So all of this together, hopefully, will make things a lot easier, so that even if you're doing something like custom resource discovery and you're just trying to serve up HTTP, you can say: I need the scope from the context.
A
This
comes
from
the
request
and
you
get
a
lister,
that's
scoped,
and
then
you
can
just
list
everything
or
you
can
do
a
get
and
things
just
work
and
I
haven't
quite
worked
out
the
best
place
to
set
the
the
scope,
but
for
right
now
in
kcp's
handler,
which
looks
at
the
path
and
figures
out
which
logical
cluster
is
the
client
requesting
it
will
create
a
scope
indicate
if
it's
wild
card
or
not
it'll
set
the
the
scope
on
the
context
and
there's
also
a
storage
scope
that
I've
been
working
on.
A
This
might
not
be
the
best
place
to
set
it,
but
with
these
with
these
two
scopes
set,
the
api
server
is
able
to
do
things
appropriately
whenever
you're
using
a
client,
the
listers
will
work
if
you
need
them
to
be
scoped
and
there's
a
couple
places
in
ncd
storage,
where
we
need
to
be
able
to
adjust
prefixes
based
on
the
logical
cluster
name.
So.
A
Now
that
I
have
auto
complete,
I
can't,
or
so
the
these
methods
may
change
names.
They
may
get
split
up,
they
may
be
in
the
wrong
package,
but
the
scope's
name
is
basically
meant
to
be
the
logical
cluster.
A
There
are
some
places
where
I
need
to
take
a
key
that
doesn't
include
the
logical
cluster
name
in
it
and
include
and
attach
it
so
that
it
works
for
pulling
out
of
an
index
or
out
of
the
cache.
That's
what
this
cache
key
function
does,
and
I
can
show
where
it's
used,
the
name
and
controller,
for
example,
when
it
so
it
has
a
crd
and.
A
A
A
I'm trying eventually to get to a point where it's very easy to know if you do or don't have to call a method like this, but for right now you have to be aware of whether you're trying to work in a multi-scope environment or not. So this is kind of an exceptional thing; I wouldn't expect most single-cluster targets to need to do anything with this.
E
Cache key name... you know, one of the things that jumped out at me, Andy, was that there are a couple of analogs here to sharding, to breaking up a problem. We've already talked about how controllers pointing at a set of workspaces might actually want to have a breakpoint between them on some characteristic of the workspaces, for various reasons: regional, geographic, pure performance-based. So I could see there being elements of key separation.
E
Sometimes
it's
secondary
attributes,
so
maybe
the
interface.
This
pattern
doesn't
quite
work
well
for
those,
but
then
I
was
also
kind
of
as
you're
pointing
out
with
namespace
like
there
are
places
where
I
could
imagine
controllers
that
are
looking
at
a
subset
of
like
scheduler
and
nodes.
Do
this
pretty
aggressively
where
they
look
at
field
selectors
on
a
cache
and
they're
using
the
indexer
or
they're
passing
a
field
selector
through
there's
some
parallels
there
that
this
at
least
points
in
that
direction.
E
I
don't
have
any
like
concrete
suggestions
right
now,
but
it's
at
least
worse
worth
thinking
through
looking
at
some
of
those
other
use
cases,
as
we
go
a
little
bit
further
and
seeing
whether
even
some
of
those
might
map
like
constructing
a
cache
with
a
field
selector
on
node
for
controllers
that
break
things
down
by
node
is
any
of
the
the
scope
caching
relevant
there
and
the
parallels
to
indexer,
where
you
might
actually
want
an
indexer
and
then
periscope
with
an
aspect.
Yeah
and.
A
When
you
do
like
in
a
list
or
if
you
look
at
a
deployment
lister
for
example
and
you're,
trying
to
list
all
deployments,
maybe
your
ignore
selector
say
it's
everything
you're
trying
to
list
all
deployments
in.
A
If you point the deployment lister at a cache from a shared informer that's doing a cross-cluster list-watch, which usually you'd want to do, then this is going to return everything across all logical clusters. And so, if you have a scoped lister, then it basically says: I need to go... And actually, let me... I was in the wrong one.
A
If you have a namespace lister, then it's going to assume by default that there's no scoping and there's no manipulation of the key needed, and so that'll be the value that goes against the namespace index. But if it is scoped (again, this is, you know, the brittle connection here), it's going to call cache key, which is going to put that logical cluster name in. Stefan and I were talking about the idea of, well...
E
So it isn't a bad assumption. The parallels between key construction with nested scoping, then indexing, which is creating a key with nested scoping, and then secondary indexes, which are basically an attempt to pivot on something that's not the primary key: there are a lot of analogs here too. We just need to figure out what the most convincing argument is that this is something that actually fixes, like...
E
Like
safety
right,
like
some
of
the
things
you
mentioned
like
places
where
we
put
keys
into
caches,
is
somewhat
unsafe
today,
I'm
not
advocating
that
would
go
down
generics
this
fast
in
cube,
although
I
don't
know
what
we
haven't
talked
about
it,
jordan
and
I
were
like
at
least
having
some
basic
discussions
about
like.
Maybe
we
wait
a
couple
of
releases
before
we
push
too
hard
on
that,
but
like
there
are
places
where
how
we
put
the
safety
around
what
goes
into
caches
the
safety
around
what
comes
out
of
the
storage
layer.
E
We
are
missing
opportunities
to
be
safer
and
we
do
occasionally
have
bugs
in
controllers
due
to
bad
key
construction
or
error
like
on
an
error
case.
Someone
puts
stuff
back
in
the
queue
in
the
wrong
key,
so
there's
at
least
some
some
elements.
I
could
see
us
getting
down
that
point
and
maybe,
even
at
the
end
of
the
day,
it's
just
about.
A
And then, oh, I was going to show you the storage scope too. So there are some places where we may need to change the key root depending on whether it's a wildcard list-watch or not. And then these two go together: the from-prefix-and-key basically undoes, or takes into account, wildcarding.
A
Any time we're doing a list or a get or a watch and we've got objects coming back, there was code in decode-and-append-list-item that was setting the cluster name on the object metadata, and I've moved that into the scope. So if there is a scope that is available for the flow, then it'll just call post-decode, and what that does is the same logic that was over there.
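The wildcard decode step described a little later in this meeting can be sketched as follows: for a wildcard list/watch the configured etcd prefix does not include the logical cluster, so stripping the prefix leaves "<cluster>/<resource-name>". The prefix layout here is illustrative, not kcp's exact key scheme.

```go
package main

import (
	"fmt"
	"strings"
)

// splitWildcardKey recovers the logical cluster and resource name
// from an etcd key seen during a wildcard list or watch, where the
// configured prefix stops before the cluster segment.
func splitWildcardKey(etcdKey, prefix string) (cluster, name string, ok bool) {
	rest, found := strings.CutPrefix(etcdKey, prefix)
	if !found {
		return "", "", false
	}
	cluster, name, ok = strings.Cut(rest, "/")
	return cluster, name, ok
}

func main() {
	c, n, _ := splitWildcardKey(
		"/registry/widgets/ws-a/foo", "/registry/widgets/")
	fmt.Println(c, n) // ws-a foo
}
```

The recovered cluster is what a post-decode hook would stamp onto the object's metadata, so watch consumers see the right logical cluster.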
E
I was thinking about, you know, option two. A little bit of my thinking has been about the actual model that you'd use to store this in SQL versus pure key-value. Ultimately you need to make a table that holds objects, and I was kind of debating in my head between a column that has the key, versus three or four columns: resource, cluster, namespace, whatever. Because ultimately, whatever storage we would pick from a database side, we're still basically just mapping onto a sorted lexicographic key space.
E
You
know
the
way
that,
like
cockroach
or
even
postgres,
models,
tuples
and
all
that,
like
you're
still
at
the
end
of
the
day,
getting
to
some
sequence
of
bytes,
that
is
a
prefix
prefix
scannable
for
the
primary
key.
So
there
was
kind
of
some
questions
about
like
if
we
broke
the
columns
up
in
sql
and
a
sequel
type
approach
for
the
back-end
storage,
which
might
live
side-by-side
like
I'm,
starting
to
think
that
there
might
actually
be
benefits
to
both
having
an
ncd
storage
and
sql
storage.
Equally.
E
But
if
you
broke
those
apart,
you
had
those
attributes
that
simplifies
a
lot
of
the
data
modeling
and
potentially
gives
you
opportunities
to
go
and
say
like
instead
of
referencing
resources
by
the
resource
type
by
a
name,
you'd
reference
it
by
you
id,
which
gives
you
other
properties
like
better
foreign,
key
integrity,
and
so
there's
some
open
questions
about,
because
there's
no
name,
we
don't
care
about
prefix
scanning
on
the
resource
name
itself
like
we
don't
do
anything
in
the
keys
in
etcd
to
organize
them
by
group,
then
by
kind
than
by
you
know
like
where
any
kind
of
prefix
would
ever
be
relevant.
E
So it was kind of one of those questions about what the changes to the storage layer would be to do that. And some of that was places where we pass around strings for cache keys: would we pass around a tuple instead, which is very similar to the REST scope stuff? And what we put in cache keys is string; we all talked about this: string is the most efficient mechanism, because you put it into the string and then everything in the Go path is simple.
E
You do a switch, and 99.9 percent of users wouldn't even notice the difference, but it would give us some clarity and efficiency. It might be a good exercise just to see if anybody breaks as you look through the refactor. So I was kind of playing through my head, like, all keys being switched to some form of opaque tuple, you know, a key-prefix tuple.
E
Which
means
you're
paying
the
car
like
you're,
actually
like,
because
of
that
you've
got
an
interface
object
in
the
map,
so
you're
already
paying
it
in
directions,
you're
doing
one
more
point
or
access.
So
it's
kind
of
one
of
those
like.
If
we're
going
to
go
to
that
level,
then
the
string
doesn't
really
add
much
anymore.
E
If you've got to go to the interface... I was trying to think through ways of cheaply skipping that, and some of it might be... a slice might actually be relevant. But we need to think about what the typing looks like and whether we even care about the efficiency.
E
This is a pretty fundamental efficiency thing for caches. But we could certainly tolerate some efficiency losses in caches if it gave us better type safety, or better checks when people screw up. Like, the fact that we drop strings into workqueues is still kind of problematic. Workqueues are a place where, if we were going to go do generics, I would probably do generics over workqueues, or have someone speculatively look at that. And the moment you do that, you're starting to get into a spot of: would you want that same type safety elsewhere? The answer is probably the storage interface, the cache interface, etc.
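The "generics over workqueues" idea can be sketched minimally: a queue whose items are a typed key instead of a bare string, so a malformed or wrong-shaped key cannot be re-queued by accident on an error path. This is a toy, not client-go's workqueue package.

```go
package main

import "fmt"

// Key is a structured work item: no string parsing, no chance of
// re-queueing a key with the wrong shape.
type Key struct {
	Cluster, Namespace, Name string
}

// Queue is a minimal generic FIFO; client-go's real workqueue adds
// deduplication, rate limiting, and shutdown semantics.
type Queue[T comparable] struct {
	items []T
}

func (q *Queue[T]) Add(item T) { q.items = append(q.items, item) }

func (q *Queue[T]) Get() (T, bool) {
	var zero T
	if len(q.items) == 0 {
		return zero, false
	}
	item := q.items[0]
	q.items = q.items[1:]
	return item, true
}

func main() {
	q := &Queue[Key]{}
	q.Add(Key{Cluster: "ws-a", Namespace: "default", Name: "foo"})
	// q.Add("default/foo") would now be a compile error.
	k, _ := q.Get()
	fmt.Println(k.Cluster, k.Name) // ws-a foo
}
```

The compile-time rejection of a raw string is exactly the class of "bad key construction" bug mentioned above.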
So it would be good, maybe, Andy, if you could look at some... I don't know if this is easy, but maybe do some apples-to-apples with your change to the cache, and just get a ballpark: say you put 10,000 or a million keys in a cache and look at how some of the common list operations look before and after the change, whether there's anything noticeable.
E
Just to ask the question of whether the overall characteristics change. Might be something, just if you can get a quick experiment going. Yep.
A
All right. I saw, Steve, you had a couple of questions. One is: is this more invasive, and would this be an easier fork to keep, or whatnot? I mean, I still want to get all of this upstream if possible, so I wouldn't expect that this would be a fork we would maintain for a long time, unless we just fail at convincing the SIGs to take it upstream. So yes, it's more invasive than the context methods.
A
But
if
you
had
been
sitting
with
me
when
I
was
debugging
trying
to
figure
out
why
my
custom
crd
lister
wasn't
working-
and
it
was
because
of
some
restrictions
that
I
had
codified
myself
into
the
listers,
you
would
understand
why
I
went
with
this
approach
and
then,
let's
see,
we
talked
about
exposing
shard
dimensions
to
users
when
they're
writing
controllers,
which
looks
fairly
similar
to
scope
due.
A
The
key
root
bit
was
just
for
like
there's
one
special
case
in
fcd
listing
and
watching
where,
if
you're
doing
a
wild
card
list
or
watch
the
key
free
fix
that
con
that's
set
doesn't
include
the
logical
cluster
name,
and
so
when
we
get
an
item
out
or
when
we
get
a
key
from
fcd
and
we
strip
off
the
key
prefix
what's
left
is
the
cluster
name
and
then
a
slash
and
then
the
resource
name
and
so
to
properly
decode
it
and
set
the
cluster
name
appropriately.
B
Cool. This seems to have the clarity and direct intent that the current .Cluster stuff has, and it's a more general approach. So I like it.
A
Yeah. I'm gonna see: once I finish finding all the places where we're using the WithCluster / cluster name from the older code, once I rip all of those out and replace them with scopes, then I'm going to try and replace the .Cluster calls to the clients with a scope and see if this will work.
F
Like my use case: when I was trying to use the project auth cache, you know, all the RBAC stuff that comes from kube as libraries. They use listers as input, and of course they try to use the listers without passing the cluster prefix in the keys. And so this approach that you've taken also covers this: I can just pass a scoped lister to some kube library that is using listers, and then all the cluster prefixes in the keys would be added automatically, in fact.
A
...that you wrote, that scopes it down by hand: ideally and theoretically, you would be able to replace it with this.
A
Yeah, I've gone through so many iterations of trying to get this to work, and hacking stuff, that, yeah, what I was telling Stefan yesterday is: I'm gonna get it working, I'll show it, or, you know, we can look at it, but then I'm going to start with a clean branch and manually just copy stuff over as it makes sense.
F
Yeah, that's very nice: that we would finally be able, with what you did, to harmonize all the use cases in a single...
A
And
I
will
reiterate
that
all
of
this
work
that
I'm
doing,
which
includes
the
older
prototyping,
is
on
top
of
my
pr
to
bump
us
up
to
123..
So
if
anybody's
interested
in
looking
at
the
rebase
pr,
it
is
up
in
the
kcpdev
kubernetes
fork
as
a
full
request,
all
right.
So
that's
all
I
had
for
this.
A
So thanks for your feedback, everybody. If you are interested in learning more and want to help out, please let me know. And just looking at the agenda, I don't see anything new, so we've got some time if anybody's got questions, comments, or wants to talk about anything, or we can end early.
A
All right, sold. You all get 22 minutes back, so enjoy the rest of your day.