From YouTube: KCP-Edge Community Meeting, January 12, 2023

Description
No description was provided for this meeting. If this is YOUR meeting, an easy way to fix this is to add a description to your video, wherever mtngs.io found it (probably YouTube).
A
Hi everybody, hello, and welcome to the January 12, 2023 kcp-edge meeting, the first of the new year. We have a good-sized agenda today, so I'd like to get started. If you turn your attention, I'll share with you the screen that I'm looking at. Okay, so just to get started: any topics that you'd like to discuss, please enter them into the issue. The issue is tagged; it's number 65. It's also in the chat for this WebEx. And just to get started, we have a contributor code of conduct: as contributors and maintainers in the CNCF community, and in the interest of fostering an open and welcoming community, we pledge to respect all people who contribute through reporting issues, posting feature requests, updating documentation, submitting pull requests or patches, and other activities. In other words, please just be nice to each other in this forum.
A
I'd like to get started. Mike, if you want to go first? Sure? All right, yeah.
C
And I think the representation in the agenda is a little bit thin compared to the issue, but it's not a big deal. It's just that there are a lot of little niggling issues around the checking that CI does on our logging. It uses this tool called logcheck, and there have been a lot of issues around it. One of them is called out here, which is that the CI invokes a script that invokes logcheck twice, but without a way to differentiate the output.
C
Now, for most of the output it's implicitly differentiated, because it names a file name in it, so you can tell what file it's talking about. But for the error message that says "./... has no packages", you can't tell which invocation it's coming from, so I did a PR to deal with that. More interesting is the checking itself.
C
The main kcp repo is stuck on an old release of this tool, version 0.2.0, and I've gotten some bug fixes in. So, for example, the first one was: it was objecting to calling klog.InitFlags when doing the checking for contextual logging. But InitFlags is a perfectly legitimate thing to call; it's not a violation of the principles of contextual logging. So I got a fix in for that.
C
I also got a fix in for the error messages. So this tool is checking, when you check for structured logging, that you are calling the structured logging interface correctly.
C
The idea is that the call to emit the log has a message, then a bunch of alternating key and value pairs, and in the Kubernetes community they have a page where they've defined the expectations for those keys. Basically it says: use lower camelCase. The way logcheck works is that it has a regex that it matches the key against, and if it fails, it was giving an error message saying you shouldn't be using special characters.
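A minimal sketch of the kind of check being described, assuming a lowerCamelCase regex; both the pattern and the error wording here are illustrative stand-ins, not logcheck's actual ones.

```go
package main

import (
	"fmt"
	"regexp"
)

// keyPattern is an assumed stand-in for the kind of regex logcheck matches
// structured-log keys against; the Kubernetes convention is lowerCamelCase.
var keyPattern = regexp.MustCompile(`^[a-z][a-zA-Z0-9]*$`)

// checkKey returns a hopefully more scrutable message than "you shouldn't
// be using special characters" when a key fails the convention.
func checkKey(key string) string {
	if keyPattern.MatchString(key) {
		return "ok"
	}
	return fmt.Sprintf("key %q is not lowerCamelCase", key)
}

func main() {
	fmt.Println(checkKey("podName"))  // passes
	fmt.Println(checkKey("PodName"))  // initial capital: fails
	fmt.Println(checkKey("pod.name")) // dotted key: fails
}
```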
C
Well, that's not even a decent error message, because if you start with a capital letter, if you use initial-capital case or whatever instead of camelCase, it'll fail, and the error message is completely inscrutable.
C
So, you know, I've opened an issue on the main kcp repo. Last I heard, Andy said "yeah, we like to put a dot in it", and I said: why are we diverging from the Kubernetes community? I haven't seen a response since then, so I think that's where the real remaining issue is. The other thing is, you know, we've done a lot of copying from the main kcp repo, if we want to maintain consistency.
C
So, for example, for this thing about differentiating the output from the two calls to logcheck: do we want to try to propagate that back to the main kcp repo?
C
So those are the things that I have right now. Oh, let me just relate Andy's thinking: the thinking that I got from the main kcp people is that when you have a controller manager that has multiple controllers, each controller has keys that identify which controller it is. So the point of the dot is that you name the key with the controller name, dot, you know, the variable local to the controller.
C
I could argue that that's redundant, since generally you have multiple keys in one line, and if they all name the controller, that's just redundant. Really, the point is that the message should be identifying the controller.
A
I know we have Joaquim on the call. Oh, sorry, go ahead.
A
Okay, I noticed you were on the call, and with your experience with kcp: of course, any comments or information worth passing along?
A
All right, so something that I wanted to speak to all of you about. Initially, at the beginning of the week, I broke the build, and was able to get things back on track over time, and I'm learning much the same as the rest of you. So for me there was, you know, an understanding of what the vendor subdirectory was used for.
A
You know, for local copies of libraries and packages. Having that present was causing issues whereby the code gen wouldn't generate, or wouldn't find the package that it needed for code generation to use in its path. So after removing that vendor directory, that seemed to free things up. There were also some changes in the code gen and the CRD generation scripts, where, you know, they were referencing the kcp-dev/kcp.
A
I think that's already been brought up in one of the issues. Mike had commented and, you know, pushed the envelope: let's figure out, let's see if we can actually do something with some real API references. So I'm looking forward to that. So, Mike: aside from testing the code generation out with real APIs in our edge-mc repo, what other items are you looking to cover there that would help us satisfy and resolve this?
C
I have caught up with the latest changes overnight. But, you know, broadly speaking: I do have an API PR there, right? So we should be able to get that merged, with code generation, and be able to proceed on the controller that I started writing, which right now doesn't actually do anything with our API. So the next step is to actually start getting code that works with our API and make sure it runs. Great.
D
So you piqued my curiosity about this vendor directory. Was this something generated by the scripts? Because I didn't see this in the kcp core repo. I don't know if it's something that we created, or something that we needed somehow from somewhere. So where did this vendor directory come from?
A
What I had done is: I was toying around with kcp a bit and, you know, learning Go and how Go operates, how it compiles and how it does its generation. And what I found was, with the kcp source, I actually did a "go mod vendor", and that pulled in a vendor directory. Okay.
D
So you probably created this directory, maybe not necessarily, you know, voluntarily, just, you know, using the go commands.
C
Well, some existing examples of code generation I put in came from Kubernetes, for example, which, you know, has been slow to move onto modern Go practices, so I think they're still dragging around a vendor directory. So that...
A
That threw me off, and so I was like, all right. So when I was trying to solve the code gen issues that we were seeing, what was happening is that one of the code gens, there's a controller-gen, referenced a library in the go.mod; but because the vendor directory was there, it was pulled away and said: oh, I should use the local copy instead, and then couldn't find it. This was consistent in kcp too, because when I ran "go mod vendor" in the kcp source, it did the same.
D
But by the way, one question: I see that, at least in kcp core, in the types they also have Kubebuilder annotations.
C
Mike, let me refine the question. So we want to try building controllers without using controller-runtime; we want to use the lower-level facilities directly. So what I've asked Andy (you know, Andy volunteered to look into the problem that I've posed) is: we want code generation that doesn't build in unnecessary dependencies. Now, if we're using Kubebuilder to generate code: if that code does not build in a dependency on controller-runtime, we can probably use it. If it builds in a dependency on controller-runtime...
D
Yeah, I don't believe that in kcp core they have this dependency on controller-runtime, but they are actually using it to build their annotations in the APIs that they have in the types, if you take a look, right?
C
So my question, to put it another way, is: is this only helping the code generation, without building in a runtime dependency on controller-runtime? And if so, you know, if it's actually improving the code generation, expanding its scope, or making it more convenient, or anything like that, then that's fine.
D
Yeah, because I believe there are many things that Kubebuilder does in code generation, for example creating the API schema and the status subresource, and the other stuff that you can see from the annotations, that I'm not sure is actually covered if you're just using the plain client-go, exactly.
D
Client-go, you know, the standard generators, without the Kubebuilder, you know, generation.
A
Yeah, so what I'd like to see as the next step, right, is: let's get the controller that you've got there, the skeleton of what you've got there, Mike, and then I'll associate that with a CR and an API, and then let's figure out, you know, where the gaps are, where the gaps remain. If it helps, if it doesn't, we can adjust it accordingly.
A
Okay, any other questions about CI for now? I know that there's a generic question about CI not being, you know, transparent enough, so I'm going to look into that offline, since the folks that I'd like to ask that question of are not on today.
C
Right, okay. So, well, yeah, a lot of stuff obscuring stuff. Okay, let me make it a little bigger. So I'm working on a revision; let's see... oh yeah, this, right. So earlier I proposed something and wrote up a Google doc about it, and the idea was that there would be an API object, called Classifier, that describes how to summarize the status from a particular kind of object, or from a particular object. And the idea is that you would put an annotation on that object that points to the Classifier.
C
It says how to summarize, and that would cause the edge system to produce summaries. So I produced a definition for Classifier and a definition for a summary, a status summary, and the idea was basically that the Classifier would describe how to extract a feature vector from each object.
C
Each object at the edge: generally, presumably, from its status, but it could be any part of the object; extract a feature vector. And the summary would then basically just be a histogram of those feature vectors; let's say a map from feature vector to a count of edge objects that have that feature vector. And, as Constantine properly observed... I believe it was, if I recall... maybe, I'm not sure who it was, I think it was Constantine.
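The feature-vector histogram idea can be sketched in a few lines of Go; the object shape and the extractor below are invented for illustration, not part of the actual proposal.

```go
package main

import (
	"fmt"
	"strings"
)

// A minimal sketch of the summary idea: extract a feature vector from each
// edge object and build a histogram mapping feature vector -> object count.
type object struct {
	Phase string
	Ready bool
}

// featureVector joins the extracted feature values into one histogram key.
func featureVector(o object) string {
	return strings.Join([]string{o.Phase, fmt.Sprint(o.Ready)}, "|")
}

// summarize builds the histogram: a map from feature vector to count.
func summarize(objs []object) map[string]int {
	hist := map[string]int{}
	for _, o := range objs {
		hist[featureVector(o)]++
	}
	return hist
}

func main() {
	objs := []object{
		{"Running", true}, {"Running", true}, {"Failed", false},
	}
	fmt.Println(summarize(objs)) // map[Failed|false:1 Running|true:2]
}
```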
C
This is really a small subset of what you can do in SQL with aggregation. The key concepts there are known by the keywords GROUP BY: in a SELECT statement you can GROUP BY, and that's basically describing the feature-vector extraction; and then in the SELECT statement, where you select this, that, and the other thing, they have more. My first proposal was basically equivalent to COUNT(*), or just, you know, count, but they have other kinds of aggregation, right, yeah.
C
It was Constantine, because he was proposing other kinds of aggregation too. You can sum, you can average, you can find the max, you can find the min; those are the ones that are built into SQL. And I think that in edge we should also allow users to specify their own aggregation functions. I know that in transparent multi-cluster, or in support of it, there's been some work on allowing developers to plug in the desired aggregation function. I think that for edge we need to allow the aggregation function to be specified in an API object.
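A sketch of what pluggable aggregation functions might look like, mirroring SQL's built-ins; the names and the fold-based shape are assumptions, and max is simplified to non-negative values for brevity. (The API-object plumbing for naming an aggregator is out of scope here.)

```go
package main

import "fmt"

// An aggregator folds a stream of numeric feature values into one number,
// analogous to SQL's COUNT/SUM/MAX. Registering one by name is the sketch
// of "pluggable" aggregation.
type aggregator struct {
	init float64
	step func(acc, v float64) float64
}

var aggregators = map[string]aggregator{
	"count": {0, func(acc, _ float64) float64 { return acc + 1 }},
	"sum":   {0, func(acc, v float64) float64 { return acc + v }},
	// max assumes non-negative values, to keep the sketch short.
	"max": {0, func(acc, v float64) float64 {
		if v > acc {
			return v
		}
		return acc
	}},
}

func aggregate(name string, values []float64) float64 {
	a := aggregators[name]
	acc := a.init
	for _, v := range values {
		acc = a.step(acc, v)
	}
	return acc
}

func main() {
	vals := []float64{3, 1, 4}
	fmt.Println(aggregate("count", vals), aggregate("sum", vals), aggregate("max", vals))
}
```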
C
I haven't quite got that far, but anyway, let me show you what I have got in a revised proposal, in its current state. So the idea is to point to a Summarizer, which is an API object that describes how to produce these summaries, more inspired by SQL statements. So basically it has the usual object and type metadata, and then a list of groupers.
C
Each grouper here is basically like an SQL statement, with the GROUP BY, and then the aggregators that describe the things which you'd put in the SELECT expressions: what to aggregate, and how. So the group-by is a bunch of named expressions, because in SQL everything has names (columns have names), but the idea here...
C
Oh, that should have been "expression". Yeah, wow. Okay.
C
So this says, you know, what the expressions are: basically, how to extract a value from an object. And the approach I'm currently taking, where I'm basically copying what I did in the first proposal, is two ways of doing that. One is with a JSONPath expression, for those of you who are familiar with JSONPath.
C
If you're not, look it up! It's a fairly common way of extracting data from JSON, and all these CRDs are represented in JSON, so it's a natural fit there. I also put in this kind of "switch" feature, which I think I simply inherited from the first proposal. So the idea of a switch feature is that it's basically like a case statement: each case, each arm of the case statement, is like a selector.
C
It could be a label selector, an annotation selector, or a field selector, and then the right-hand side of it is just a name. So it's basically categorizing objects into categories. The idea here is that you can say: okay, this object is failed or successful, or it's in one of three phases, or however you want to categorize it. So that kind of feature can let you categorize things and then count the objects in each category.
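A sketch of the switch feature as described, reduced to label selectors only to stay short (the proposal also mentions annotation and field selectors); the arm contents are invented.

```go
package main

import "fmt"

// Each arm of the "case statement" pairs a selector with a category name;
// the first arm whose selector matches the object's labels wins.
type arm struct {
	selector map[string]string // all key=value pairs must match
	category string
}

func matches(labels, selector map[string]string) bool {
	for k, v := range selector {
		if labels[k] != v {
			return false
		}
	}
	return true
}

func categorize(labels map[string]string, arms []arm, fallback string) string {
	for _, a := range arms {
		if matches(labels, a.selector) {
			return a.category
		}
	}
	return fallback
}

func main() {
	arms := []arm{
		{map[string]string{"result": "error"}, "failed"},
		{map[string]string{"result": "ok"}, "successful"},
	}
	fmt.Println(categorize(map[string]string{"result": "error"}, arms, "unknown"))
}
```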
C
So that's the current state of this proposal. But by naming an expression I'm really pointing at the idea that maybe we want to go further and allow more complicated expressions. I think the incumbent here is the Common Expression Language, known by the acronym CEL, pronounced "cell". That already has some presence in Kubernetes; I asked about that in the API Machinery SIG meeting this week, and if you look in the notes there, there's a couple of places where CEL is currently used.
C
The proponents of using it are interested; I think there are more potential uses in Kubernetes, and I think so too. I think they could get in our way, potentially, as I called out in the meeting. So, for example, if we talk about wanting to replace parts of the API server, more of the API server, with uses of SQL databases, you know, maybe this gets us in the business of mapping CEL expressions into SQL, which may or may not be a problem; more precisely, into SQL WHERE clauses.
C
So it would be a non-trivial mapping, but anyway. And then the other question here: in the current, in the first proposal, every feature value is a string. Even in JSON, everything that isn't a string is easily convertible to a string; there's a standard conversion to a string. But in particular, anticipating maybe going to CEL...
C
Maybe we want to talk about more general values than strings. So I'm just starting to look at the stuff that's been defined for CEL, and see how they're representing general values. And then one other idea that Constantine had, one of the things, and you'll notice this even in the first proposal, is that a summary really has two things. Let me call it up here, right. So a summary has two things: it's got this histogram here, represented by the slice of status counts, and it's got a list of broken objects, references to broken objects. And so the idea here is...
C
You don't want to even try to copy it all to the center. But if you've got broken objects, how do you find them, right? If you only have a count, you're kind of stuck. So the thought is: well, if you've got some broken objects, there's either a few of them, or there's many of them because there's something going wrong over and over again; there's a pattern. So if you can get a few instances, that'll help you debug the pattern.
C
So the idea is that a summary holds a list of references to broken objects, but the list is of limited length: it'll only be up to, say, 20 objects long. But that's enough to get you started debugging. If you look at these 20 objects, even if you have to go use other tools, you know, to go remotely to look at them, you know where to look; you can find them, see what's going wrong, and now you can fix the pattern, and the number of broken objects will go way down.
C
So that idea I've copied into the second proposal as well. But the struggle that I'm having is: when you refer to a broken object, you know, now, in the kcp world, or in general, right, we don't believe everything's going to be in one kube cluster. The standard way of referring to an object is implicitly scoped by one kube cluster, or, in kcp, one workspace. Well, in kcp...
C
You know, they have the idea that things are in workspaces, but with sharding, their workspaces are spread across servers, and, you know, so the thing I'm struggling with is: how do you refer to an object that's not in the same server? So... I've been talking a lot; I should shut up. And that's about what I've had to say so far.
D
C
Oh, that's another good point; that's another thing I'm working on revising. So, yeah: there is, in fact, in edge, and I think, I suppose, I presume, also in transparent multi-cluster, another layer of status. In normal Kubernetes, right, an object has a status section produced by the controller, generally produced by the controller that animates that type of object. But we've got a layer of management above that, right, that's creating these objects, and it's got its own status for these objects.
C
So in my first proposal I had kind of a fairly fixed concept of what the status from the management layer looks like; I'm thinking of actually writing that out. So in the first proposal I had proposed a syntax, in a string, for the status from the management layer, and that's on which brokenness would be judged.
C
In my revised proposal, I'm thinking of, first off, kind of writing down a data structure for the status from the management layer, and then letting the same technology for extracting and classifying be able to look at that data structure, as well as the regular parts of the object, in order to decide, in a programmable way, which objects should be considered broken.
D
Okay, yeah. I was a little confused by the term "broken", because I initially thought that there was maybe a corrupted object or something like that. But what I think you mean is that there is some failure, some kind of failure, in the status for that object.
C
D
The other question I have, when you talk about, you know... it seems like you talk a lot about the SQL kinds of concepts, you know, aggregation functions and stuff like that, joins and so on. So is that somehow posing some requirement on the implementation that we want to have underneath?
C
No. I'm looking at this as, you know: I want to define an interface that poses as little requirement as possible, right, that gives as much freedom as is reasonable to the implementers. And that's actually one of my concerns here.
C
So, you know, I kind of touched on the fact that, you know, in kcp core they're working on sharding. I don't fully understand the work, so I'm not quite sure how to reflect it, and I'm also not sure how much we want to build in a dependency on that particular way of dealing with multiple API services, versus other ways of dealing with it.
C
Right, yeah. I'm looking at SQL for two reasons. One is simply that it's familiar, something that a lot of people know about, and it's been worked on a lot; so, you know, it's already benefited from a lot of design and experience, and it's got a lot of mind share. So that's one thing. And the other is: yeah, we might have some level of implementation that maps to it, so that should be easy as well.
D
Right, yeah. I was thinking about this kind of hybrid approach. Maybe this is more about implementation details, and we don't need to talk about that, but I saw that in some community projects, like Argo Workflows, for example, there's this notion of, somehow based on the size of a resource, giving the option, if you want, to offload to some kind of external data store instead of storing in etcd.
D
So I wonder if these are kinds of approaches we may want to think about when we have to deal with these very large status files. In case we don't yet have good storage support, like the work on CRDB, for example, and we still have to rely on etcd, we may have to consider this kind of hybrid solution for offloading storage somewhere else, right.
C
A
Okay, thank you. All right, next up: Constantine, would you like to introduce the RFC that you're working on with the team, and see if we can generate some interest?
E
Okay. So this is work that we have been doing within our team, and with Mike Spreitzer, with King die, and with Hamid Adebayo.
E
So the purpose of this document is not so much to show an already-designed document that we're ready to implement, but rather to expose some ideas that we're thinking about, some features that we'd like to have, and get some community feedback on whether these are, you know, useful features, or on maybe some tweaks to the functionality that we're proposing. And then, in the second part of this document...
E
We have a couple of very high-level design ideas as to how we would be able to implement this. So actually, as a matter of fact, I copied this RFC format from the Istio working group. I had interacted with them in the past, and basically, before implementing any feature, they require this type of RFC document, and then they review it during their meeting and comment on the features that are being proposed, and I tried to follow the same flow here.
E
And then, like in, you know, like in kcp: they are basically able to define a policy or a workflow, and disseminate that workflow, or policy, or whatever other resources, to several managed clusters.
E
And then we want also to be able to retrieve the reported state. So once we trigger an operation and start running something on a multitude of managed clusters, we want to be able to get back the state from a central location, and have an overall idea of what's going on, without, you know, having to go inside each managed cluster.
E
So then we want this solution to be scalable, because in the case of edge we can have a very large number of edge clusters.
E
So we also want it to basically be able to support an open set of API object types, and to be able to propagate these types in both directions: from the centralized control location to the managed clusters, and then also to be able to aggregate the actual state and send it from the managed clusters to the centralized location.
E
In the background we have a little bit of a kind of historical motivation as to how we started thinking about this.
E
There was actually a competing solution that was developed for doing the same thing, distributing and retrieving the state of workloads, and we would actually like, in this work, to ideally propose a solution that works in both cases, for both policies and workloads, and to have a uniform way of distributing the desired state and then retrieving and summarizing the actual state.
E
Basically, we try to summarize these objectives as a series of use cases, and the use cases have always been written from the perspective of an administrator of an application that's deployed across several edge destinations. As such: as an admin, I want to be able to define the desired state for all the API objects that belong to my application, from a centralized location.
E
Ideally, I'd also like to be able to prescribe some rule-based customization of the desired state for each location. And also, in my desired state, I'd like, if possible, to say whether a particular value of an attribute is something that is a must-have, that absolutely must be that way, like "I do want to have", let's say, I don't know, SSL configured; or...
E
Or it's a relaxed requirement: I'd like to have it, but it's not necessary. And then, basically, I want, of course, to be able to propagate this desired state to multiple destinations. So this is for the desired state. Then, for the other part of the communication pattern, I'd like to be able to summarize the reported state.
E
Basically, indicate for which API objects I want to have this reported state propagated and summarized to the center, and...
E
I basically want, overall, to then have the reported state of the API objects that belong to my application gathered and summarized at the central location. So, in summary: for the propagation of the desired and of the reported states, we want to have four features: customization, relaxed requirements, returning associated objects, and programmable aggregation.
C
Can
I
say
those
four
features
are
the
things
that
may
be
a
little
bit
surprising
and
differ,
for
example,
from
the
way
some
systems
handle
these
things
so,
for
example,
Oh?
No,
just
stop
there,
no
all
I,
guess
I'll,
say
right
so,
for
example
by
including
the
relaxed
requirements
and
being
able
to
return
information
about
Associated
objects.
You
know
that,
for
example,
is
what's
in
radical
policy,
but
not
rack
and
workload
so
by
including
that
we
can
do
both
the
policy
and
workload
part.
E
So, before the design ideas: I think we have two high-level ideas that we'd basically like to view as building blocks, if we want to implement a system with, you know, the functionality and capabilities that we have listed.
E
We need to have at least two things. One building block is to be able to define some kind of a communication hierarchy, because we think that a single API service might not be able to efficiently interact with a very large number of, you know, managed clusters or workspaces.
E
So if we have this communication hierarchy: in figure two we're kind of trying to show that if we define such a communication hierarchy, basically we are able to first deploy the resources that we want along this hierarchy, and also we can then retrieve the state and partially aggregate it; a given node in this hierarchy can aggregate the state that it is receiving from below.
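Partial aggregation at an intermediate node of the hierarchy can be sketched as merging child histograms: count-style summaries merge by simple addition, so the root never has to see per-object data. The data below is invented.

```go
package main

import "fmt"

// merge combines the histograms reported by a node's children into one
// partial summary to pass up the hierarchy. Count-style summaries merge
// by addition; other aggregators would need their own merge rule.
func merge(children ...map[string]int) map[string]int {
	out := map[string]int{}
	for _, child := range children {
		for fv, n := range child {
			out[fv] += n
		}
	}
	return out
}

func main() {
	edge1 := map[string]int{"Running": 40, "Failed": 2}
	edge2 := map[string]int{"Running": 55}
	fmt.Println(merge(edge1, edge2)) // map[Failed:2 Running:95]
}
```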
E
So that way we can distribute the load a little bit, and relieve the root API service from some of the processing burden. And also this hierarchy can be defined in a distributed way, which would give us, you know, some fault tolerance, and the capability to rebuild it following node failures, or nodes being taken out of service, or whatever. We could use a peer-to-peer system, or a configuration system, or some kind of a mixed approach, using both some configuration and peer-to-peer.
E
We would probably think about adding some functionality to the syncer, because currently the syncer is basically designed in such a way that it receives information, but it does not have any capability to process and aggregate it itself.
E
Regarding the summarization of the edge information: this relates actually to... there are different ways to do this. Mike has just presented before us a way to aggregate and summarize this edge information, well...
E
Mostly, yeah, this is one way; there is also a Google document that shows another summarization methodology.
D
I see that David is here. I know that he's done some work also with the syncer, which he views as some kind of framework, I think; he has these coordination controllers that allow you to sort of process, kind of summarize, do the aggregation and summarization kind of thing, right? David, you have done some work in that space; you already have some code doing that.
F
Yes, we have some code to do that, especially on what we call the syncer virtual workspace. Well, in the new terminology we would call that a virtual view, the syncer virtual view. But at least this is, you know, this intermediate component that presents to the syncer only what it needs, and possibly transforms the information and the resources when exposing them to the syncer; and, the other way around, when the syncer updates the statuses, this transformer, I mean this virtual workspace, transforms them back.
F
On the other hand, the overall use case that drove the current implementation is, you know, not the same, because it's explicitly with not many sync targets, but having, you know, everything stored locally, through labels or annotations mainly. So, yeah, on one hand it seems to be quite the same need, but on the other hand it's not clear to me how much could be shared in terms of components, at least at runtime.
F
Yeah, and by the way, I think what could maybe be a reference, or interesting, is the virtual workspace concept: the fact that we have in kcp this concept where you can, you know, have a sort of proxy before kcp itself, before the shard, that, you know, prepares, transforms, changes information, gathers it, to present it to an external component for a dedicated use case. So obviously you might be able to have such a virtual workspace.
F
It would not be the same implementation as the one for the syncer, because obviously, you know, you would be reading objects outside, and all these policies that you were mentioning previously. But the overall structure, of having on one side, between kcp and the syncer (which is an external agent), this sort of virtual workspace that prepares the information: I assume that would probably be a direction that could be useful for you as well.
F
Exactly, yes. Well, and there are even more virtual workspaces, because even for API exports, in fact, you know, the URLs that you have in API exports, they are driven by, they are provided by, a virtual workspace as well. What could probably be shared between EMC and TMC is some parts of the implementation of the syncer virtual workspace.
F
We already do that for upsyncing and syncing, for example: we have a common part, the part that, you know, is related to what the APIs are that we are going to expose; so, look into the sync target, get the various APIs that are supported and compatible, for example, and then expose them. This aspect could possibly be the same. And then, on the other hand, there's the aspect of how we transform the information and how we summarize the statuses.
F
C
Yes, I was thinking similarly. I will admit to one difficulty that I haven't found a reason or need to articulate very clearly, but it is bothering me. One of the things we learned from Verizon about their customization is that... let's see... oh actually, you know, this is good: yeah, I think they said they don't want to have all the inputs to the customization at the edge.
C
But that's okay for us, because with the inputs at the center we can do the customization at the center, and present the customized result to the edge. So, yeah, strike that worry; I think we're good.
D
So it's a whole new object; it's not just the status of an existing resource. That is somehow what happens today when you use the syncer, or even this syncer workspace-view summarization that David was talking about, right? It's really actually about upsyncing new resources and summarizing them, aggregating them, and so on. I don't know; I don't see that scenario explicitly covered. Yeah, I wonder about that. No?
C
No,
it
is,
we
did
write
it
down
here.
The
speaking
to
it
was
not
particularly
clear
but
and
in
fact
the
actually
the
penultimate
bullet
here
under
requirements.
Actually
types
is
not
exactly
on
point
because
it
might
be
more
fine-grained
than
whole
types,
but
the
point
is
exactly
what
you're
talking
about
is.
We
may
need
to
get
information
back
from
objects
that
are
associated
with
the
objects
that
came
with
from
Center
to
Edge
right.
C
Well, so my point here in general, again (I think it wasn't quite clearly said when Constantine was talking, but the thinking is): you may not always be able to afford to propagate the whole object back. I think the right way to think about it is that you're always going to want some kind of summary. You may want the whole objects; you may have to satisfy yourself with something much less, okay, right, because we've seen use cases where the volume of, you know, status...
C
Data
at
the
edge
is
just
way
too
big
to
consider
bringing
it
to
the
center,
and
that's
one
of
the
reasons
you
have
a
hierarchy
we
may
want
a
hierarchy
is
so
you
can
do
some
reducing
or
processing
along
the
way
in
so,
for
example,
your
exception
handling,
you
know
typically,
hopefully,
will
be
mostly
automated
and
that
automation
may
live
in
the
intermediate
vertices
in
the
hierarchy.
So
even
your
your
error
handling
your
response,
you
know
won't
necessarily
all
be
in
the
center.
A
Okay, so I've gone ahead and taken the liberty to share this with the kcp-dev Google group, so now that it's out in the open in the community, others can make comments. I invite you all, as members, to take a closer look at the document. This is largely resonating, as I think it is our main thrust behind edge-mc, or one of them, and so any and all information that you can impart on this will help us form and shape the challenge ahead.
A
All right, thank you everybody for attending. Mike, I've taken number five that was listed on the agenda and pushed it to January 26th, the next community meeting. Thank you all for attending today, and I look forward to seeing you all again out there.