From YouTube: Community Meeting, February 21, 2022
A: Okay, hey everybody. Today is February 21st, and this is the kcp community meeting. I've got the current agenda issue up in the screen share; let me paste a link if you need it. If you don't have an item on the agenda and you'd like to add one, please feel free to add a comment. We're also doing our best to use raised hands in Google Meet, so if you do have something you'd like to say, please hit the raise hand button and I will call on you.
C: Thanks. Oh boy, cool. I don't know to what extent we want to hash all of these out in the meeting or just bring attention to them and then maybe talk more asynchronously. And Andy, I'm glad you're here, since I wanted your input on this one. So, the first one: we generally have just one place where we're doing API requests across resource identities, and it's in these partial-metadata, you know, star-cluster requests that feed these generic informers, our generic controllers, and I'm wondering...
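(For context, the "partial metadata, star cluster" requests under discussion are wildcard watches that return only object metadata. Below is a minimal, hedged sketch of the upstream client-go machinery this builds on; it shows plain Kubernetes metadata informers only, with kcp's cluster-wildcard wiring omitted, and the host address is a placeholder.)

```go
package main

import (
	"time"

	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/metadata"
	"k8s.io/client-go/metadata/metadatainformer"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/cache"
)

func main() {
	// Placeholder config; a real controller would load a kubeconfig or in-cluster config.
	cfg := &rest.Config{Host: "https://localhost:6443"}

	// A metadata-only client: every LIST/WATCH returns PartialObjectMetadata,
	// so the consumer never depends on the resource's concrete schema.
	client := metadata.NewForConfigOrDie(cfg)
	factory := metadatainformer.NewSharedInformerFactory(client, 10*time.Minute)

	gvr := schema.GroupVersionResource{Group: "apps", Version: "v1", Resource: "deployments"}
	informer := factory.ForResource(gvr).Informer()
	informer.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: func(obj interface{}) {
			// obj is a *metav1.PartialObjectMetadata: name, labels, annotations,
			// finalizers, owner references; no spec or status.
		},
	})

	stop := make(chan struct{})
	factory.Start(stop)
	factory.WaitForCacheSync(stop)
	<-stop
}
```

Because consumers see only PartialObjectMetadata, schema differences between identities never reach them, which is the sidestep described later in the discussion.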
C: Ultimately, we want the view of the people that are bound to them to be scoped only to the ones that are actually bound to their particular export, which makes sense. So we assign each export an identity, and then in storage we actually hash out which... sorry, that's not literal hashing, but we, you know, separate out the different identities so that you can look at individual ones. And Andy had spent a bunch of time...
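(A hedged illustration of what "assign each export an identity" looks like on the wire, as I understand the kcp convention from around this time: wildcard requests suffix the resource segment with the APIExport's identity hash, which is how storage keeps otherwise identically named resources apart. The path shape and the sample hash below are reconstructions for illustration, not a spec.)

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/runtime/schema"
)

// wildcardPath sketches a kcp-style cross-workspace request path in which the
// resource segment carries an APIExport identity hash. Two exports of the same
// group/resource get distinct identities, so a wildcard watch can be scoped to
// exactly one export's objects.
func wildcardPath(gvr schema.GroupVersionResource, identityHash string) string {
	resource := gvr.Resource
	if identityHash != "" {
		resource = resource + ":" + identityHash
	}
	return fmt.Sprintf("/clusters/*/apis/%s/%s/%s", gvr.Group, gvr.Version, resource)
}

func main() {
	gvr := schema.GroupVersionResource{Group: "widgets.example.io", Version: "v1alpha1", Resource: "widgets"}
	// Hypothetical identity hash, shortened for readability.
	fmt.Println(wildcardPath(gvr, "7d5c6e"))
	// Output: /clusters/*/apis/widgets.example.io/v1alpha1/widgets:7d5c6e
}
```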
A: I was going to say, just a reminder: Mike, please use the raise hand feature if you wouldn't mind. (I'm sorry, yeah.) Thanks. Steve, feel free to answer, and then Stefan, we'll go over to you.
C: Yeah, so the only current users of this are, like, super highly privileged system controllers that are trying to do actions on quote-unquote generic objects, and they really don't care about anything except for the metadata in the first place.
C: So they're able to kind of sidestep the problem that the rest of the schema might be totally different. And obviously in kube, the partial metadata requests don't have the same properties, because you're guaranteed to have the same schema elsewhere. Stefan?
E: Yeah, so they are used by garbage collection, the namespace controller I guess, and especially the resource label controller of workloads, right, TMC, yeah. And maybe quota as well? I don't know. They're really generic controllers, and it's highly privileged; all of those at the moment work as a loopback client, right.
E: So I think I agree with you, Steve, that this is not a pattern we want to encourage people to use, so we could restrict it so that basically just the loopback client can use it. So somebody who forks kcp can make use of it, but externally it's not usable; even as a system actor it's not visible, something like that.
A: It only exists for the few use cases that Stefan mentioned. It originally came about because, even before we had API exports and API bindings, we had the resource scheduling, and if you had deployments, for example, that were imported as a CRD into different workspaces, and they came from different Kubernetes versions (we had different OpenShift Kubernetes clusters at different times), we originally had code...
A: ...that said every single CRD, every single deployment CRD, had to have a compatible OpenAPI schema, and if we couldn't identify that they were compatible, the system basically threw its hands up in the air and said you can't look at these CRDs. And so we implemented logic for that, mainly for resource scheduling, but it does have other uses.
A: I would be really curious to try and do some brainstorming to see if there are ways to maintain the efficiencies, because this is an efficiency versus having to do something else. Or, you know, would we have to say that the resource scheduling is identity specific and you need to feed it multiple identities? So I think it's worth doing some brainstorming as a separate exercise. Stefan, go ahead.
E: Yeah, just a quick addition: for the resource label controller of workloads, I think our philosophy has changed, in that exports are explicit, so it's not that every sync target will get its own APIs; so there aren't so many. So we could change that back to identity-based watching. I think that's right.
C: I think I saw two classes of usage. The first one was quota and garbage collection, and the second one was, like, the scheduling bits, where it seemed like (because I know, I think either David or Jim put in the PR that dynamically starts and stops syncer controllers against individual sync target virtual workspaces, right) I feel like we could do a very similar approach with the scheduling bits, and that would also de-privilege them, right. And so I feel like, for the garbage collection and quota bits...
C: ...you know, very concretely, we're talking about an efficiency gain for when there are many identities for the same GVR, right? I think, I mean, telemetry might help us in the future for that sort of thing, and we might reconsider it, but I wonder if we've jumped the gun on that efficiency and potentially opened ourselves up to this entire class of bug that you spent two weeks fixing, that otherwise doesn't exist in the system. And the only reason I really brought this up today...
C: ...was, you know, while we were fixing those bugs, I didn't hear a conversation about 'hold on, is this actually valuable enough that we want to spend this much time on it?', and I just wanted to give some time to that.
A: Yeah, I mean, also, when looking into those flakes and bugs, I think one of the side effects of fixing the bug was that the work queues for quota and garbage collection essentially and effectively got, well, not backlogged; they basically just got stuck. And so you could see it in the Prometheus metrics that we gathered at the end of the test run, when we had quota failures or garbage collection failures in the e2es.
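(For anyone reproducing this kind of investigation against the gathered metrics: the stuck-queue symptom shows up in the standard controller workqueue metrics. A small sketch, assuming the stock Kubernetes metric and label names and a placeholder Prometheus address; the queue-name regex is illustrative.)

```go
package main

import (
	"context"
	"fmt"
	"time"

	"github.com/prometheus/client_golang/api"
	promv1 "github.com/prometheus/client_golang/api/prometheus/v1"
)

func main() {
	// Point this at the Prometheus instance replaying the e2e metrics dump.
	client, err := api.NewClient(api.Config{Address: "http://localhost:9090"})
	if err != nil {
		panic(err)
	}
	v1api := promv1.NewAPI(client)

	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// A queue whose depth only ever grows, with no matching drain, is the
	// "stuck, not backlogged" signature described above.
	result, warnings, err := v1api.Query(ctx, `workqueue_depth{name=~"garbage_collector.*|resource_quota.*"}`, time.Now())
	if err != nil {
		panic(err)
	}
	if len(warnings) > 0 {
		fmt.Println("warnings:", warnings)
	}
	fmt.Println(result)
}
```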
A: Well, maybe we just go in and fix quota, or patch quota and GC upstream, so that we don't have to spin up an instance per workspace like we're doing right now. But that doesn't solve the problem necessarily, because if we do make that change, we need to be able to see (I think we need to see) things across. But I think it's worth exploring ways to de-privilege that and see if we can, you know, fix that.
F: Yes, just about the scheduling part, you know, the TMC part: I think we still need to keep in mind the quite important separation between scheduling and syncing. That's right: the syncing, you know, goes through the various APIs that are exposed by this virtual syncer workspace. But it's through APIs; it's not really through identities, because the filtering is already done at the virtual workspace level. But then we also have to think about...
F: ...you know, some future opportunities for defining customizable scheduling, and so we should still be careful about keeping things simple and really separating those two layers. The scheduling is something which is really API agnostic, which is applied on any type of object according to how it has been placed. So really we have to define and separate those, I think. Just wanted to mention that.
C: Yeah, I mean, I think that makes sense, David. What I was saying as well is: even though the scheduling is resource agnostic, you can only schedule things that are resources in the supported workload export in the first place, and so, if you look through the virtual workspace for that workload export, you can see everything that could potentially be scheduled. And so, like, a dynamic discovery might be helpful there, something in that vein, meaning it's still agnostic, but it's not like you...
A: Yeah, cool, cool, thanks. So you have the second item.
C: I mostly just wanted to ask; I feel like I've seen a lot of conversations from different folks about this. It seems very, like, hesitant or conservative, I guess is how I might put it, how we're changing our APIs. And, you know, one example of this is this placement spec locationResource field, which, you know, has some background, but it's confused multiple people now. And then another one I saw was unschedulable shards.
C: The decision was made to add a pseudo-API through an annotation, and I just wanted to kind of hear people's thoughts on what we're gaining as a project by being this conservative about our v1alpha1 APIs, and maybe are we contorting ourselves a little too much here? If there's a field that nobody uses but might be used in the future, like, let's delete it until it has a use, and then we don't risk confusing folks in the community that are looking at it for the first time.
E: They're not workload specific; as long as they live in scheduling.kcp.io, a locationResource field is a must, because otherwise we bind the API to something which it's not meant for, and I think there are use cases where you can or want to use something else. About the other topic, I kind of agree that we should be more flexible in the APIs, but in every one of those cases I would like to see an exploration of where we want to go. Don't add one-off fields because it's easy; and this was one of those fields we had.
E: We had a much more complex shard API at some point, and it had plenty of fields which were not implemented or half implemented, and we had to throw all of them away. That is not better than being conservative. About unschedulable: maybe that's easy enough, a Boolean doesn't hurt, but I think it's a good process that, if you want to add something, you at least describe in a sketch document the next two steps of where this should go as an API, and we haven't done that here.
C: Yeah, that makes perfect sense for the second part. For the location resource: if I remember correctly, this field tells you what type the placement (or sorry, what type the location) is closing over. So if I create a placement, this will say sync targets, and, like, that seems fundamentally at odds with the idea that a placement consumes the location and the location closes over whatever else is happening. Like, the location is an abstraction given by the compute provider; I consume that; I don't know what happens under the covers. Why do I need to know?
E: There's one problem, and I think we talked about that before, when this was made: this API, this location/workspace separation, we didn't have. Right now we have a placement, and it doesn't talk at all about sync targets, yeah. But somehow we have to identify the right locations, because there might be more than that; there might be locations for...
E: ...different things. So imagine a different workload API, be it edge, or be it, I don't know, some other one. Like, you schedule Kafka, for example, and it's not sync targets, not TMC, right?
E: I think you'd also consume it somehow, but it's a different kind of workload syncing, and you have your own syncer, and it installs Kafka CRDs into your clusters, for example, and then you want to... I mean, there must also be something like a sync target, but it's not the same one, right, as the TMC one.
C: Okay, I'm okay with giving it more thought, but yeah, on the face of it, a field that only exists because of maybe-potential-theoretical might-be use cases seems kind of weird, especially when it causes new users to have questions basically every time they look at it. Go ahead, Mike.
E: As background: obviously this is an API not only for TMC. Whether it's sensible to make this have that scope, we can talk about that, of course, but if we change that, we're already moving this API.
D: Okay, yeah, a question and a comment. Yeah, for edge-mc we're defining our own EdgePlacement type, so we're not exactly, you know... I mean, it's different from TMC, so we have our own placement. And also (you know, I'm trying to follow this stuff without having good clues here, just looking at what happens when I run through the quick start) the behavior that I see now is that it creates a TMC placement object that refers to sync targets rather than locations.
D: So I had drawn the conclusion that locations were being phased out. By the way, in edge-mc, you know, so far I've been taking the maybe-just-lazy or opportunistic approach. We do need two abstractions, one of which very naturally would be called location, and it is like a geographic location, right; think of just an edge world, right? There's a natural concept of location, and it might have multiple clusters in it.
D: So it's a very natural modeling to say a location corresponds to a physical edge location, and a sync target corresponds to a cluster in an edge location. So that's the approach we've been taking, and I'm not quite clear what's going on in TMC; so that's why I'm asking about whether the locations are being phased out. And if I understand correctly, what you're talking about is: in the TMC placement, there is this field that says what type of thing the predicate (the label predicate) is over. You know, and it was originally locations; now it seems to be sync targets, and I'm trying to understand here.
E: Yeah, so there's no change in sync target or location. What you target: you target locations. But, as I explained before, there is a selection process for which locations you mean. But I really very much like what you bring up here: you have a placement as well. I think when we created that API we didn't even have API bindings; you might remember, that was very early, probably, when we had this idea.
E: So nowadays we could basically define it in the edge placement, and we could define it in the TMC placement, right, depending on the use case, which is a good data point that maybe it makes sense to move it over to workload and then get rid of the locationResource as well. And I could imagine there are other cases where you need a slightly different placement as well.
C: Yeah, and I'm also happy to take this offline, but Mike, I'd love to hear, like, how did you guys choose to do your own placement? Were you fundamentally looking at a different API surface?
D: Yeah. So, first off, you know, when we first discussed it with this community, I think there was a suggestion (it made sense to me) that we should have our own placement, because we've got our own semantics.
D: The first and foremost difference is that in TMC there is a selector and the semantic is anycast: choose one. In edge we want choose-all, or multicast, okay? So it seems natural, since it's distinct semantics, to have a distinct type, to make the modeling clear. Also, I'm having trouble understanding what Stefan is saying about locations still being used: when I follow the quick start, it creates a placement that refers to sync targets, not locations. Yes.
C: Yeah, Mike, the field there is saying: here's my label selector; please anycast me to any location that matches my label selector, as long as the underlying thing underneath that location is a sync target. And I think, in the world that you're referring to, the reasons that you had for making your own placement are perfect; I think that's, you know, perfectly reasonable. In that case, we might want to reconsider whether this interaction makes sense at this level.
D: Oh, so maybe I misunderstood what's going on. So if I run through the quick start now, I get a placement that has a selector, and it has a locationResource, and it says the locationResource is sync target, but the selector is still a selector over location objects? Yes? Oh wow, yeah, that's complicated! By the way, is it written down anywhere?
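(For readers trying to follow the object being debated: below is roughly what such a placement looked like, sketched as an unstructured Go object. The field names locationResource and locationSelectors, and the API groups, are reconstructed from memory of the v1alpha1 scheduling API; check the real types in the kcp repo before relying on them.)

```go
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"sigs.k8s.io/yaml"
)

func main() {
	// The point of contention: spec.locationResource names the kind the
	// location closes over (sync targets, for TMC), while the selector
	// still selects over Location objects.
	placement := &unstructured.Unstructured{Object: map[string]interface{}{
		"apiVersion": "scheduling.kcp.io/v1alpha1", // group may differ by release
		"kind":       "Placement",
		"metadata":   map[string]interface{}{"name": "example"},
		"spec": map[string]interface{}{
			"locationResource": map[string]interface{}{
				"group":    "workload.kcp.io",
				"resource": "synctargets",
				"version":  "v1alpha1",
			},
			"locationSelectors": []interface{}{
				map[string]interface{}{
					"matchLabels": map[string]interface{}{"region": "us-east"},
				},
			},
		},
	}}
	out, err := yaml.Marshal(placement.Object)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(out))
}
```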
B: Yeah, to share some quick progress on this: the first thing is, I renamed this proposal from "workspace initialization" to something more like "API lifecycle hooks", which I think captures a bit more what the work is about. And then I didn't do much on the proposal itself here; I just added, like, two CRDs: an APILifecycleExport, which is where you can define the hooks, and then an equivalent APILifecycleBinding, where you capture the accepted claims. So I spent some time actually understanding permission claims; that's why I didn't make much progress on this. And yeah, thanks to everyone for helping.
B: So, the next thing: we don't have to discuss the proposal here, I guess; we can take this offline.
B: I'm working on the POC, and maybe the question I have for Stefan, because he said always not to have this in core: what do you mean by that, right?
E: Do we have everything to enable that? If not, maybe we have to add something, to claims, for example, or to our virtual workspaces, and only as a last step, when we think, okay, this is so complicated outside, then maybe we can consider doing it in the main kcp. But I think we are not there; the hope is that we can do it outside, yeah.
B: I think we can do it outside. I need to check that, yeah.
E: And we could still have this as a binary in kcp, if it makes sense maintenance-wise, but it wouldn't be in the main kcp binary; you'd have to start it next to it, or something like that. There are many ways to do that, but this is more like a maintenance question for the design. It's really this rule of thumb: let's try to be outside and see how it goes.
E: Yeah, the rule we've got to learn is: it should be minimal, which means you must have good arguments to increase the size and scope. You can write that down, of course, yeah. Okay, I think when we have kcp-core as a binary, I think we will be there, in that we can have a small doc.go and doc.md or something, .go files, and this website.
A: Sounds good. So thank you for the update, Lionel; it sounds like you're working on a proof of concept. Is that right?
A: Okay, well, if you need any more help, you know where to find us, and if anybody's interested in helping out, please get in touch. All right, Sergiusz, do you want me to run through this?
H: So, this is something I've been working on in the last days to improve our ability to actually debug the stuff we produce here. I guess we're still, more or less, flying a little bit blind when it comes to performance; like, we are supposed to do things fast and very efficiently when it comes to creating workspaces. And I just want to, yeah, make everybody that is working on pull requests aware that you now have the possibility to actually prove that you improved things, based on metrics.
H: Let's take this one, David. We are storing all the Prometheus metrics that we are scraping from kcp end-to-end test runs, which you can then download or inspect online. And specifically for GitHub PRs, you can just go into Details and then to the Summary, and then you have this tarball here called e2e; let's take the sharded one, the e2e-sharded metrics.
H: You can just simply download it and, yeah, unzip it locally. And, as you see, this is a little bit of a convoluted process, but it still works; at least that's the only way I know of to do it for GitHub. And then you have this little command that you can execute, if you have Prometheus installed locally, and then you can launch it and inspect your metrics for the e2e run. And this is, you know, sort of the collected metrics that happened during the e2e run on GitHub Actions.
H: That's a little bit convoluted; I will show you in a second a way easier way to inspect those metrics, instead of downloading them locally. But, just on the structure: be aware that we have many e2e tests that start a small kcp server locally, and for those tests, where we started a kcp locally...
H: ...so, just for the structure of the metrics. I hope this will help debugging a little bit, especially when we're working on things like caching and distributing requests; this will give us sort of a better overview of how kcp behaves during runtime.
H: Since what I just showed you is very convoluted (it still works for GitHub), we have a much better way of doing it for end-to-end test results that were tested by prow. So, here you see those prow jobs, and let's say you want to inspect... well, let's take the same job, the sharded job: you can go on Details here and inspect the artifacts. What you can do now is, we have this little online tool, which we also use for OpenShift, and it's, you know, available for everybody.
H: You just copy this prow URL and you paste it into this tool called PromeCIeus (it's a funky name), and then it, you know, spins up a Prometheus for you, so you don't have to do all the things that I just showed you on the command line. You just invoke this link and, boom, you have the same sort of metrics result.
H: The nice thing about this here is that it also sort of narrows down and filters the end-to-end starting times for you, so you don't have to look them up, and then you can directly...
H: ...you know, enter the metrics that you're interested in here. For instance, this is, like, a metric that gives us a rough overview of the request duration of API server requests, and we see we have a maximum of three and a half seconds on this resource, which I believe is a resource that was being spun up during the end-to-end tests, I guess. So, that's pretty much it.
D: So, do we now have kcp-cluster-aware metrics? Are they cluster oblivious, or what?
H: They are not cluster aware, and they shouldn't be, the same way namespace-aware metrics are not a thing in kube upstream either. And the problem here is, you know, that workspaces do not have an upper bound; they can grow, and we literally want them to be unbounded, and there is a problem which is called the cardinality problem.
H: So if we allowed tracking metrics per workspace, we would have a literally unbounded amount of metrics, and we could very easily overwhelm Prometheus; multiply that by namespaces, and then we have an even bigger problem. So, unfortunately, you know, metrics only have a certain level of granularity, and we have to be a little bit careful. Generally, when you inspect metrics in kcp: since it's an API server, it's literally the same metrics as kube-apiserver has...
H: ...unless we introduce new ones. But whenever you create new metrics, just always be aware that, for any label that you set, the rule of thumb is there should be a constant upper bound on the label values that you set for any given Prometheus label. Otherwise we could probably take a...
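(A concrete sketch of that rule of thumb with the standard Prometheus Go client; the metric names here are made up for illustration. Label sets with a small, fixed set of values keep the number of time series bounded, while a per-workspace label is exactly the cardinality trap being described.)

```go
package main

import "github.com/prometheus/client_golang/prometheus"

// Fine: "verb" and "resource" take values from a small, effectively constant
// set, so the number of time series stays bounded.
var requestsTotal = prometheus.NewCounterVec(
	prometheus.CounterOpts{
		Name: "example_requests_total",
		Help: "Requests, by verb and resource.",
	},
	[]string{"verb", "resource"},
)

// Deliberately commented out: a "workspace" label would create one time
// series per logical cluster, which is unbounded in kcp and can overwhelm
// Prometheus. This is the cardinality problem discussed above.
//
// var perWorkspaceTotal = prometheus.NewCounterVec(
// 	prometheus.CounterOpts{Name: "example_requests_per_workspace_total"},
// 	[]string{"workspace"},
// )

func main() {
	prometheus.MustRegister(requestsTotal)
	requestsTotal.WithLabelValues("get", "deployments").Inc()
}
```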
D: All right, yeah, I mean, I'm aware of the cardinality problem, right. It just runs into conflict with the basic idea that you're virtualizing the API server. So for other purposes it would have been nice, but yeah, I understand the conflict. I got my answer on how it's resolved. Thank you.
E: Yeah, just an addition: I think we talked about the idea that we could have a metrics endpoint per workspace, right. This is not expensive, but it's a problem, I think, in the metrics infrastructure in kube; getting the workspace through there is hard or something, so collecting separate metrics counters and other things, that's tricky. Well, it's more like a programming question in...
D: ...kube, right. Actually, that was part of the complexity in my question, right? Because, you know, in the URL structure, you've got a URL prefix that looks like a kube API server, and in a kube API server you can add /metrics to that prefix and, you know, get metrics, right. But you can't now, but...
B: I'm sorry. All right, so I'm looking at the current resource quota, and there are, like, 10 instances, I think, from other users, and so I tried it and I needed to delete my instance manually.
H: It's global, it's global! So it's a scarce resource. So, what I just showed you, the trick with GitHub Actions: you can do the same trick with the prow metrics. If you go into the artifacts (you go to e2e-sharded, for instance, in this case, and then into artifacts again), there is a new subdirectory called metrics, and then you can download the Prometheus tar.
A: Sounds like it. I added, but have not yet documented, a make target for downloading the logs from the prow runs to your laptop or desktop, so that you don't have to click through the browser to get them. Maybe we could do something similar to download the Prometheus tar file and spin up Prometheus locally. Just, you know, an idea that might be helpful.
G: Actually, I'm looking to do some performance tests on kcp. So, are there any suggestions on the areas which I can test? I mean, for now, what I'm doing is, I'm exposing the /metrics endpoint and scraping on top of it, and as well I'm scraping some of the syncer logs to see if I can get some latencies. So, are there any other solutions or areas which I can look into as of now? At this point, I...
A: I mean, the short answer is: all of them. I would say scaling; scaling workspaces horizontally, so adding more workspaces, and, you know, how do things change if there are 10 versus 100 versus 500; you know, adding more namespaces; like, just adding more of everything and scaling out horizontally to see where things start to get slower. Sergiusz?
G: When we do those actions, the only way to monitor the performance metrics is the /metrics endpoint, right?
A: Also, I mean, you could run sar on, you know, a Linux system and try to capture some of that information, but I think probably going through the Prometheus metrics would make the most sense. Sergiusz and then Steve.
H: Yeah, I just posted it on the chat, or maybe we can post it also on the issue itself: there is this project called kubernetes-mixin.
H: I'm not saying you can use everything from this project, but there is a lot to scavenge from it when it comes to alerts for the API server. So there are, like, dashboards for the API server declared in this project, and a lot of recording rules and a lot of alerts; sort of best practices around monitoring kube-apiserver. So I would imagine that many of those best practices, when it comes to SLOs and alerts, that apply to a stock kube-apiserver also apply to kcp.
H: So it would be a very nice exercise to sort of scavenge this repository, look at what alerts are available in there, and see which alerts and recording rules are essentially useful for kcp as well, and then use those to evaluate the general performance of kcp. And again, I totally agree with Mike that it doesn't give us per-workspace views, but even for a global view, we need better insights into how we behave during runtime. So, just as a hint: that might be a good source of inspiration.
C: Yeah, I was just gonna say, as far as measuring performance: I think I also heard mention of measuring, like, resource consumption and stuff. Obviously, depending on how you set up your test, if you're doing containerized stuff locally, you're likely able to use cgroups or eBPF or whatever to monitor that, or, if you structure it as a series of kube pods, you know, you can scrape the Prometheus metrics from the host cluster.
A: And I'll just echo what was written in chat: if you find yourself scraping logs to get information, feel free to try and convert what you're looking for into meaningful metrics, and we would definitely love to see some PRs there, if you've got the time. If you don't have the time and just have ideas, please feel free to file issues, and we can see about getting around to them.
A: Just remember that, as was discussed before, we can't really have per-workspace metrics, but we can have global ones. All right, so, anything else on metrics before we go on?
A: Okay, David.
F: Just a heads up to say that, finally, the pod logs, and the syncing of pods automatically for all the deployments that are being synced, landed in main. It's still under the kcp syncer tunnel feature flag, since we are still missing one layer of security when, you know, forwarding the pods subresources to the physical cluster. So, but if you enable this feature flag on both the kcp side and the syncer side...
F: ...when you do a kcp workload sync and create your sync target, then you should get the logs, be able to exec into pods, yes, and even, you know, get the logs of a deployment. So, feedback welcome.
A: Very cool, thanks David. So, that's the end of what's on the agenda. I did remember I wanted to show the docs update here. So, when you come into kcp, when you go to docs.kcp.io, you will be redirected to our latest stable release, which is still 0.10, but there is a version selector where you can switch to main, and as we release new versions, they will show up in here.
A: We are only going to do the major.minor, so you'll just see 0.10; you won't see multiple 0.10s or 0.11s. And I know that there was a question in Slack, I think from Mike, about why it says main here but it says 0.10 there. This is a banner or something that's injected: it just takes the latest release from the repository and shows that information. So that's why you see that discrepancy.
A: One thing that I'm working on, that I hope to have a PR for very shortly: in our older docs site, if you clicked on Developers, for example, it would open up a page that just had content and descriptions for each of the child pages. And so I've been working on making that a reality here, so that when you click on Developers, for example, you see just a summary page, and anything that gets added as a real page in the file system will automatically show up in there.
A: You don't have to manually edit this page, which was pretty cool to develop, and, thanks to some very powerful MkDocs plugins, this was a whole lot easier than it could have been. So that's what I have on docs. I think we only have a couple of minutes left before we are officially over, so I'm going to propose we skip the triage work for today, and we can pick it up again another time.
A: So, thanks everybody for joining today. I think this was a great meeting, lots of good discussion. Steve, I'm looking forward to those issues that you're going to be filing. Have a great rest of your week, everybody; see you next time.