From YouTube: Kubernetes SIG CLI 20210127
B: Which is basically less than two weeks from last Tuesday. So if you have any awaiting reviews or enhancement proposals, feel free to reach out to myself, Sean, Phil, or Eddie, and we will get them through. I have a couple of them related to…
B: Another important topic: starting from this year, every single special interest group will be creating an annual report. I will be creating that report for SIG CLI.
I: I guess I can go. I'm Francesc, my first time joining, and I actually worked with Phillip Wittrock and Katrina Verey at Apple, so I'm here for the purpose of the KEP. Awesome, welcome. Thanks!
B: Okay, hearing none, welcome everyone. A follow-up that I had: I was talking with Vodak with regards to the previous question from last time about the KEP migration. In the short run, yes, the current in-progress format will stay. In the long run, the goal is to migrate all the KEPs to the new format. So if you're currently working on a KEP, or you have a KEP that is, I don't know, halfway through its alpha or beta stage, or you're currently promoting to GA: yes, your KEP has to be migrated as soon as possible. For the very old ones it's not a requirement yet, but eventually they will want to see all of the KEPs migrated to the new format. So I think that's the only one I wanted to follow up on from last time, and we can jump to the main topics. Lee, I think you're going first. Lee, are you going to share your screen?
F: No, I'm not. Can you hear me okay?
F: Cool. So to start with, I just want to make sure that it's okay with everyone that I inserted myself ahead of a couple of items because of a scheduling conflict; it's a bit rude of me. I want to make sure that no one else also has one. No? Fantastic, thanks everyone. So I've been having a couple of conversations about what to do with things like arguments to kubectl debug. From the beginning, we've wanted to support configurable capabilities.
F: It's a commonly asked-for feature for ephemeral containers, but then we also got a feature request for this for the non-ephemeral-container use case, which is just creating a copy of a pod and adding a new container. The biggest request is for being able to trace a process, which requires SYS_PTRACE, and then there are a couple of others for NET_ADMIN and SYS_ADMIN. And it kind of points to a bigger question: as people use this built-in more and more, there will probably be more of these.
F: My first thought was that we should just allow it to be configured via flags, but you can see how that would just be a proliferation of flags: one for capabilities, one for tweaking other parts of the generated spec, and so on. So I wanted to check to see if anyone had any good ideas about how we can make that a bit of a friendlier UX. I was thinking maybe something like a flag that turns on admin features, and you just get a predefined set of NET_ADMIN and SYS_ADMIN, or something like that. Does anyone have any other ideas?
F: That's right, yeah, I apologize; I should take a step back. kubectl debug is a new command, a new built-in that has three journeys. One is to add an ephemeral container to an existing running pod.
F: The other is to create a copy of a pod and add a new container (or you can modify an existing container), and then the last one is to create a new pod, intended for debugging nodes. And the question is: how much configurability should we add to the security context? Right now we don't allow anything, but the common use case is to be able to trace a process, so it would be nice to give some configurability over security permissions.
G: And what's in your proposal? Like, one option is a list of privileges that they specify, as a repeated flag or something like that. Another is a meta flag that corresponds to some pre-configured set of permissions that are common (that's exactly right), and then I guess a third could be things that you could otherwise specify in the pod template, right, or in the container.
G: Yeah, I guess a third option would be allowing someone to specify a pod template or container spec, because right now it's permissions, and I'm wondering if tomorrow it's: oh, I need more memory for this thing because the command I want to run runs out of memory, right? Or I need to mount this volume, or, I don't know.
G: I think one thing you may want to consider is just: have a flag which says, here's the pod template, or here's the container definition, to spin up as part of an ephemeral container or that sort of thing. I know you can do that without kubectl debug, but I'm guessing doing it through kubectl debug will still save you a number of steps and be better documented. And that doesn't mean we can't offer the flags as well.
G: A meta flag, similar to how kubectl get supports --all, for instance, but also supports listing the resources individually.
H: kubectl run has an --overrides flag that allows me to patch the JSON, and that's completely generic; I use it for lots of things. To me it looks like a better approach.
B: Yeah, I was leaning towards that approach because, like Lee mentioned at the beginning and Phil also stressed, I'm worried about the proliferation of flags: capabilities today, something else tomorrow, etc., etc.
B: The list will be ever-growing, and we will end up with debug having the same problem that run currently has. Whereas having the patch type of thing (and I'm not saying it has to be exactly patch) would probably be similar to what Phil described with regards to being able to specify part of the pod template, maybe not necessarily the entire one. You would be able to specify only the bits that you care about.
B: So if there are capabilities, all you need to know is the path to capabilities, which would override whatever debug created. This way you have full control, and you're only modifying the bits that you care about while still maintaining the default behavior of debug for the rest. And if I'm not mistaken, maybe we can even make the patch be either a string or even a file, so that it is applied automatically.
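The overrides-style patch being discussed can be sketched roughly like this. Hedged: kubectl run's --overrides flag is real, but applying the same mechanism to kubectl debug is exactly the proposal, so that invocation is hypothetical, and all names here are illustrative.

```shell
# The kind of JSON patch under discussion: override only the bits you
# care about (here, adding capabilities to the debug container).
OVERRIDES='{
  "apiVersion": "v1",
  "spec": {
    "containers": [{
      "name": "debugger",
      "securityContext": {
        "capabilities": {"add": ["SYS_PTRACE", "NET_ADMIN"]}
      }
    }]
  }
}'

# Sanity-check the patch before handing it to kubectl.
echo "$OVERRIDES" | python3 -m json.tool > /dev/null && echo "patch is valid JSON"

# Real, existing usage with kubectl run:
#   kubectl run debugger --image=busybox --restart=Never --overrides="$OVERRIDES"
# Hypothetical equivalent for kubectl debug (the flag under discussion):
#   kubectl debug mypod --image=busybox --overrides="$OVERRIDES"
```

The appeal, as noted in the discussion, is that one generic flag keeps the default generated spec intact while letting users override any field, instead of growing one dedicated flag per field.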
G: The command would be something like `kubectl debug --profile "$(kubectl debug profile --admin)"`, or something like that. And then you have a nice, scriptable way of getting what you want, but it doesn't really have any built-in assumptions about what debug does.
F: Yeah, that makes a lot of sense. I think one of the things I'd be concerned about is that not all the operations supported by kubectl debug allow that; some of them are necessarily imperative. Creating an ephemeral container, for example, uses a separate subresource, so it's difficult to then use that with another kubectl command. But then, of course, we could create another debug subcommand to then apply that.
G: From a high level: rather than flags, having a more expressive file or string, the full set of configuration and not just limited to these, is step one. Then step two is: that's a hassle for users to generate, and they're not sure what it should look like, and that sort of thing, so provide commands to generate it for them. And then step three is to allow them to be wired into one another more effectively.
B: I mean, alternatively (and I think that's still possible), kubectl debug could spit out the JSON or YAML of the newly created pod, and then you just manually modify that and use it as an input for debug. Or dry-run, right? Dry-run here, yeah. You basically do a dry run, it spits it out, you modify it, and then you use that as the input for debug.
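That dry-run round trip can be sketched as follows. The kubectl lines need a live cluster, so they are shown as comments; the local edit step is illustrated on a hand-written example spec, and every name here is illustrative rather than from the meeting.

```shell
# Step 1 (needs a cluster): have the tool emit the spec it would create.
#   kubectl run debugger --image=busybox --dry-run=client -o yaml > pod.yaml
# Stand-in for that generated spec:
cat > pod.yaml <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: debugger
spec:
  containers:
  - name: debugger
    image: busybox
EOF

# Step 2: edit the generated spec, e.g. add a capability the default
# output does not include.
python3 - <<'EOF'
spec = open("pod.yaml").read()
spec = spec.replace(
    "image: busybox",
    "image: busybox\n    securityContext:\n"
    "      capabilities:\n        add: [\"SYS_PTRACE\"]")
open("pod.yaml", "w").write(spec)
EOF

# Step 3 (needs a cluster): feed the edited spec back in.
#   kubectl apply -f pod.yaml
grep SYS_PTRACE pod.yaml
```

As noted in the discussion, this works for the pod-copy and node-debug journeys but not cleanly for ephemeral containers, which go through a separate subresource.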
G: Yeah, the main challenge we just want to make sure of is: an end user probably just wants to say, give me the privileges I need to do the thing, because I'm panicking, because I'm in kubectl debug mode and something's broken. So looking up documentation about which things I need to enable is not going to be the thing I want to do right now. So also pairing that with a "by the way, here's probably the edit you want; here's the edited file." I don't know if kubectl currently invokes any CRDs directly in that way, but it's already a resource. It's not…
J: I can confirm that you can access it. I had built something around it, and there was a proposed deprecation maybe a year ago, and I was one of several people who objected to that. So it's still there.
B: Yeah, I personally haven't seen anyone creating those, but…
G: Anyway, okay, do you have enough? I think you got a lot of feedback here. Do you have enough information?
F: Yeah, yeah, thanks. Just to tie it up: I think what I'm gonna do is start out by taking the patch approach that run uses, because that's an easy fix and still useful even if we support this sort of profiles, and then, as a next step, revisit the suggestion that Phil made of having subcommands that can offer useful profiles, to be a more user-friendly experience.
B: I assume you're next, with KEP-2299, the Kustomize plugin composition API. Do you wanna share your screen?
J: I don't need to. I was thinking that, since there's a lot on the agenda today, maybe I could just give a really quick high-level description rather than going over the whole KEP, just to give people an idea of whether or not they're interested in this, and answer some high-level questions if there are any; but we can save discussions for the actual PR, because there's quite a lot of content in there. Does that sound good?
J: Okay, so this KEP is something that Phil, Francesc, and I have been working on, collaborating entirely internally at Apple. Basically, it proposes a new kind for Kustomize that builds on the existing configuration-function specification, and concretely on the kyaml libraries that are already available, and it builds an API around configuring and orchestrating those.
J: So by doing this, we're unlocking new capabilities and a better UX for plugin-based workflows, but at the same time this is a separate alpha API that we're proposing. It would be able to be built with kustomize build, but it's a completely new type: it doesn't change Kustomization, and we're not proposing integration into kubectl, at least at this time. In other words, kubectl -k would not accept this type. We want to keep it completely alpha for now.
J: Another important thing to note is that, although this is separate, we do support all the Kustomize transformations, sort of in a plugin-like format, and this new API does also expose the capabilities of Kustomize; it's very much in the original spirit.
J: So I'm looking for reviewers on that: people who are super interested in Kustomize, or Kustomize plugins specifically. If you could please take a look at the KEP and bring me your comments, either right now if you already have some at a high level, or I'm available on the Kubernetes Slack, as well as, obviously, on the…
I: Katrina, I have some questions, but I'm going to throw them into the KEP. I guess it's pretty complicated, and there are a lot of assumptions that I'd like to go over. Yeah, thank you for the KEP; lots of good stuff in there.
J: Thanks. I really look forward to getting your feedback; thanks for taking a look.
J: Phil, are you also able to review by February 5th?
G: I'll look it over a second time, and then I'll look at any other comments that pop up as well, and either respond or think about them.
B: Currently, the entire upgrade is blocked by the Kustomize changes, and I'll let Jeff walk through them towards the end of the call. Yes, we are aware; it is an ongoing topic for us, and it just takes a little bit of time.
E: Hey, thank you, Maciej. So hey folks, like I said before, I'm Marvin. I'm part of SIG Windows, and I'm actually representing SIG Windows here as part of this KEP. What this KEP is trying to address is that there is an existing /var/log viewer in the kubelet.
E: The questions we have are: there is this existing /var/log endpoint in the kubelet; is it being used by any clients, any CLI clients, today? We didn't see any use of it in kubectl itself. And then the other question is, at least on the OpenShift side…
E: So those are the things we would like to discuss and figure out the best way forward on, because I'm not clear, from a security perspective, whether viewing of node logs should be given to anybody who's on the cluster. Ideally, this should only be viewable by an admin or a cluster admin.
B: I'll probably start by answering the second question, because when we spoke last time it wasn't obvious to me, but now I realize that we were talking about logs, whereas beforehand we were talking about debug.
B: So when it finds that you're trying to get the logs from a deployment, it figures out that this is a deployment, that we need to find the current running pod from that deployment, and it silently redirects to the appropriate pod. Similarly, for a node, we could reach out to the appropriate API and get the node logs from there.
B: So that's one thing. And whether you can access the logs or not will purely depend on whether this endpoint is exposed to you. If you don't have the necessary access rights, you'll just get the appropriate error coming from the API server, because this should be easily configured by RBAC on the server side, and clients will only consume the logs if they have the necessary access rights.
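For context, the existing kubelet endpoint under discussion can already be reached through the API server's node proxy, and access is gated by RBAC on the nodes/proxy subresource, which is how the admin-only restriction mentioned above would be enforced. The node name below is illustrative.

```shell
# Build the proxy path for a node's /var/log listing (node name is an
# example; substitute a real node from `kubectl get nodes`).
NODE="node-1"
LOGS_PATH="/api/v1/nodes/${NODE}/proxy/logs/"
echo "listing endpoint: $LOGS_PATH"

# With a live cluster and sufficient RBAC on nodes/proxy:
#   kubectl get --raw "$LOGS_PATH"              # list files under /var/log
#   kubectl get --raw "${LOGS_PATH}kubelet.log" # fetch one log file
```

A dedicated command (or an extension of kubectl logs, as suggested above) would mainly wrap this path construction and streaming for the user.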
B: I think it's a reasonable extension of the current logs command, since we're treating every resource similarly; a node is a resource within a cluster.
B: Similarly to how a pod or a deployment is. So why not reuse that and expose the functionality through the plain logs command?
E: Okay, that actually makes a lot of sense. We're not tied to having a separate command for any reason; that seems like a valid approach. I'm gonna now also let the other folks from SIG Windows who said they were gonna join here speak up; I don't want to take all of the time. Christian, Mark, if you're around: do you have any more questions to add around this?
K: Hey everybody, can you hear me okay? All right, yeah, I don't think I have anything huge to add to that, really. The open question we've had was whether there is already a client for that /var/log streaming feature. Apparently there isn't, so yeah, the next step would be to find a place to add that command, and ideally have that command shielded so it's only usable by admin users. What Maciej said might be a valid alternative approach here.
E: Yeah, thank you very much. I think, you know, we just started down this path. I'm sure we'll have more questions, and we'll be pinging you if we run into more issues or more questions.
B: Out of curiosity, do you have any particular release you're targeting for this?
B: I think the server-side bits will be more time-consuming than the client part. The client part should be pretty straightforward, if I remember correctly, because you're basically streaming from a particular endpoint.
E: I see, okay, yeah. I'm not an expert; I've not worked with kubectl so far, so it's all going to be new, and I'm going to be asking questions. Sorry about that.
A: Should this effort start as a plugin before, you know, before we see PRs against the logs command?
B: It is a valid approach, definitely; I'm not leaning one way or the other, Arvind. There is an interesting question from someone in the chat.
K: So maybe I can take this one. With the current feature that's already implemented, you can essentially stream any file that is in the /var/log directory and any subdirectory. So if it exists in there, you can kind of specify it and stream it; obviously there's no client for it, so it's a bit harder to do. And then, for the journal logs, we don't really have a listing of streams.
K: It's really just what the journalctl command would return to us that we then stream over. So in case those logs live in the journal, they can be streamed, and selectively streamed as well, but there is no such thing as listing the different streams. You can probably get a file listing from the /var/log directory, but there's no logic to selectively stream something, really.
B: Are there any other questions with regards to node logs?
B: I see there is a discussion, Arvind, that might be worth having a look at; it might have some inputs for your proposal. Okay. In the meantime, we will move on to the next topic. Okay.
L: Yeah, can you just open the link?
B: Sure, sure, I was about to do it; I got distracted just now.
L: So, oh, actually, I'm stuck on a design choice. Antoine has actually suggested something to me, and I kind of misread it and implemented something else. So, the idea…
L: What I implemented is to have a --for condition; the --for flag itself would take in a conditional statement, and using the --for flag multiple times would give you AND functionality. So that's what I was going for. But Antoine had suggested that having the --for flag multiple times would give you OR functionality, and if you want AND functionality, you run the command again; you basically use the shell's && operator.
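The two semantics under discussion can be sketched like this. The single --for form of kubectl wait is real; both repeated-flag behaviors were only proposals at the time, so those invocations are hypothetical, and the resource names are illustrative.

```shell
# Real, existing single-condition form:
#   kubectl wait --for=condition=Available deployment/myapp --timeout=60s
#
# Proposal as implemented: repeating --for means AND.
#   kubectl wait --for=condition=Available --for=condition=Progressing deployment/myapp
#
# Antoine's suggestion: repeating --for means OR, and AND is composed in
# the shell, one invocation per condition:
#   kubectl wait --for=condition=Available deployment/myapp && \
#   kubectl wait --for=condition=Complete job/myjob
#
# The shell-level AND works because kubectl wait exits non-zero on
# timeout, so && short-circuits. A local stand-in to show the mechanics:
wait_stub() { [ "$1" = "met" ]; }   # exit 0 iff the condition is "met"
wait_stub met && wait_stub met && echo "all conditions met"
wait_stub met && wait_stub unmet || echo "a condition failed"
```

The trade-off discussed below follows directly: the repeated-flag AND keeps everything in one invocation, while the shell composition keeps each invocation's semantics simpler.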
L: So I'm a bit confused, because doing it my way gives the benefit of having both of them in a single kubectl invocation, whereas Antoine's approach is much cleaner. To be honest, I'm happy to restart; I just wanted to get some views.
A: So it's good that you got feedback from Antoine.
A: There are at least two other SIGs that are working on conditions, and so I think a lot of your effort may be coordinating here, because SIG Apps and API Machinery, I believe, are also in on the conditions work, trying to get everyone to form some consensus on these conditions that are going to be used throughout the Kubernetes ecosystem.
A: …but I will dig in on that. I know somebody else who is significantly more plugged in, a guy named Morten at my company, who's been doing conditions work with SIG Apps and coordinating with API Machinery on that.
B: Okay, that would be very helpful; I'm personally interested in that topic as well. I wasn't even aware that there's this effort going on; I must have missed that bit of discussion in API Machinery. Anyway, yeah, so I would probably hold off on any particular decisions.
L: Yeah, yeah, that sounds good. I'm just wondering: I'm attracted to what Antoine suggested, but I kind of see the value of what I currently implemented, because you can look for multiple conditions at once; you have one resource and you look for multiple conditions, so that's kind of nice. So if this is not attractive, I can just go with that idea instead.
B: I mean, this is not about saying which idea is good or bad. It's more about consistency between the conditions that are exposed through the API and how those conditions can then be consumed and reused by the kubectl wait command. So I wouldn't rule out one way or the other; I would just wait for a definitive answer on how this topic will be approached.
A: It's not clear; some of those conditions will apply to certain resources and not to others, and, you know, unless the rest of the system agrees that those conditions even exist, it's not going to be…
L: Yeah, that's true. Someone had recently commented that they wanted this for a Job and a Deployment; they just wanted to run one command and check multiple conditions against a Deployment and a Job. So I think in that case it kind of makes sense; for particular resources it makes sense, but yeah, there's an inconsistency there. Anyway, happy to wait for more.
A: I'll get back to you by modifying the SIG CLI doc, so that everybody can also see it. Does that sound reasonable, Harsh? Yeah.
A: You're up next. So I just wanted to coordinate with the rest of the team: I was hoping to make some progress on the kubectl commands-in-headers work. And, by the way, because of the new KEP format, we've recently moved those into a new location.
A: We kind of lost the history of this, and I don't know what the answer is, but that's kind of an unfortunate loss, since it'd be really nice to see who was doing what on this KEP. And I know, Maciej, that you are very interested in this, so I wanted to at least especially ping you, to make sure that we get your feedback before actually starting to dig into this and getting some of it accomplished.
B: The initial idea, if I remember correctly, came from Phil, and there's the issue that was linked with the original; I just recently removed lifecycle/frozen or something like that from that issue, so I can point you to it. I remember that this came from Phil, and I was also interested in it, because we would be able to consume that information through metrics, to be able to say what users are invoking.
B: That was one of the reasons, but others were also mentioned in the KEP, so make sure to loop Phil in as well. I know that he had a conflict and had to leave halfway through the call.
A: I'll reach out to him directly, but yeah, I just brought it up in this forum to see if anybody else either wanted to join or had already started some work on this, just so that we can all coordinate efficiently here. Cool. Anybody who's interested, please ping me, and if you have any ideas: the KEP is actually, I think, pretty well done and pretty specific.
B: Cool, thanks a lot, Sean. Marek, you're up next; I think you have two topics. I remember that the protobuf one was related to kubectl top, so go ahead, yeah.
M: Hey, I am Marek, or serathius on GitHub.
M: I come here from SIG Instrumentation to talk about two things. Firstly, I wanted to pick up one topic that is important for me as a maintainer of metrics-server, which is the main, or default, source of metrics for kubectl top. One of the things we are currently working on is trying to shift the resources we spend from non-functional improvements to making some improvements in the agent itself.
M: Making kubectl top much more useful for debugging. The main constraint here is that we would want to keep the same level of resource usage, and one of the biggest improvements would be changing how this API is called in all the clients, or at least the official clients, with a preference for using protobuf. Currently (I think it changed in 1.20) the controller manager already uses protobuf.
M: The second common use of this API, I think, is kubectl top. So I wanted to, overall, maybe start a discussion about kubectl changing the content type.
M: Are there any overall thoughts? I don't know; I haven't been in this SIG, and I don't know the history of why we didn't use protobuf, or preferred JSON by default. I think it's just because it's the default. But I know that SIG Scalability went around and changed it everywhere it was impactful for performance, so we have cases that prefer protobuf. I'm just…
B: We're really talking cluster-internally, so controllers talking to the API server, because it's better for both parties, especially given how much communication between them is happening.
B: Maybe we should do it before we change the defaults. Maybe we could do it through a flag to the top command, exposing a flag where you could pick protobuf, or eventually tie it to the refresh period, which is currently, as you mentioned, one minute: lowering it would require switching to protobuf, let's say.
B: We would leave the defaults as is, but if you were lowering the resolution to, let's say, 15 seconds, that would require you to switch to protobuf for performance reasons.
M: The problem here is that it's enough for one person, or one cluster user, to have forgotten to set the flag to basically overload the metrics-server, or increase its resource usage by 30 percent.
B: Can you repeat that, and raise your volume a little bit? I could barely hear what you were saying.
M: I don't know the history here, but I'm assuming that kube-aggregator and the extension API servers supported proto from the beginning; I think that was created maybe around the time of the support for 5,000-node clusters.
B: It shouldn't be a big breaking change with regards to communication between the client and the server, because within the boundaries of our supported versions it will work just fine. So I'm not worried about that one, although we are changing the default behavior of a command that people are using, so…
B: This would allow people the necessary time to prepare for the change. Then we would flip the switch to make it the default in the next release, and then, eventually, in two releases, we would remove the flag along the way, without the ability to go back.
B: So instead of doing a big-bang change, I would prefer to roll this type of change out gradually. What do you think about that kind of approach, Marek?
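The gradual rollout just described might look like the sketch below. Hedged: the --use-protobuf flag name is an assumption for illustration (no such kubectl top flag existed at the time); the content type shown is the real protobuf media type used by Kubernetes API machinery.

```shell
# Hypothetical phased rollout of protobuf for kubectl top:
#   release N:   kubectl top pods --use-protobuf   # opt-in flag
#   release N+1: protobuf becomes the default; flag kept as escape hatch
#   release N+2: the flag is removed; no way back
#
# Under the hood, opting in means the client asks the API server for
# this media type instead of application/json:
PROTO_CONTENT_TYPE="application/vnd.kubernetes.protobuf"
echo "clients would request: $PROTO_CONTENT_TYPE"
```

This mirrors the usual kubectl deprecation cadence: announce, flip the default, then remove, so users of the old behavior get at least one release of warning.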
B: What about the other topic? Or is…
M: The overall idea I'm experimenting with is that, by default, kubectl top only gives us simple access to CPU and memory, and the main reason is that the default implementation covers the most core part of Kubernetes monitoring, which is called core metrics: those two metrics that are used mainly by autoscalers, if you want easy default behavior for scaling.
M: You have CPU and memory, and off you go. But aside from that, Instrumentation also has different APIs for metrics, and one of them is custom metrics. Custom metrics differ from core metrics in only two things: one, any possible metric that you can think of can be defined; and second, it can refer to any Kubernetes resource.
M: If you have a load balancer connected to your Ingress, you can expose metrics for it. Currently this requires much more setup on the monitoring side, and there was no work before to have this data available in clients. What would it look like? We would need one first command: give me a list of metrics that are available for such-and-such resource (here, just as a temporary name, something like "kubectl top metrics pods"); and second, you'd have a command… You could technically add a simple flag, which could be adapted from how custom columns work, to add any custom column that you would want to use it for. I think there is a question.
M: Oh okay, yeah. So with that, technically, if you deployed it, like if you used prometheus-operator or kube-prometheus to deploy it, maybe with some tweaks, you would get those metrics almost by default: the results from those two commands, and access to those new metrics by default.
M: So here I mostly wanted to bring this topic up for discussion: whether there is any interest in it, and how such changes should be proposed. Should they be proposed as changes to kubectl, should it be a plugin that wraps kubectl top, and should we write a full change request first?
B: I know it's not the most favorable, but it's good because, on one hand, we document the functionality, which in most cases just happens to be added and then forgotten; and a KEP at least gives us some kind of history of what the intent was and what was desired from the functionality; and it allows expressing some ideas, or questions, back and forth in a consistent form, for others to digest if they have any questions.
B: That would be good. In the interest of time, I'll probably leave off the discussion for next time. I still want to go through the standups, because I promised that we would; for Nick's, I'll probably speak up.
B: There is one PR requesting the kui repo in the OpenShift, sorry, in the Kubernetes org, and there's a PR open describing this change. Basically, we are moving kui under the Kubernetes org, as was mentioned in the past. I'm not sure, Nick, if you want to add something to that one. No, that's it; yeah, just wanted to give an FYI on that. Cool, thanks. And most importantly, I want to hear from Jeff with regards to Kustomize. Hey, hey Maciej.
I: Thank you. Thanks, everyone. So I've updated issue number 1500 in the Kustomize repository; that's what's tracking this reintegration. We've been deleting code and getting the libraries ready to go, and the best way people can help is to use the current, latest version of Kustomize, 3.9.2, and report bugs, since that version of Kustomize is using the same libraries that we're going to be integrating into kubectl.
I: So I'm feeling pretty good about it. I don't see any major bugs; I'm just waiting for, you know, a little bit more feedback from folks, and meanwhile preparing CLs, PRs, and whatnot. So that's it. If you want to help, try to file some bugs on 3.9.2. We'll probably release, certainly release, 3.9.3 next Wednesday, and that'll be the version of code that we integrate into kubectl.
B: Thanks. Okay, with that, I think I'm gonna close the call today. Sorry for the five-minute overrun, and thank you very much for your time. See you soon, bye all.