From YouTube: Kubernetes SIG CLI 20200129
A: Okay, good morning, good evening, good afternoon, depending on where you are. Today is January 29th, and welcome, all of you, to another hour of our bi-weekly SIG CLI meetings. My name is Maciej and I'll be your host today. So, a quick announcement that I would like to start with.
The enhancements freeze was due yesterday, which basically means that if you're planning on working on any major features and the enhancement had not merged by yesterday then, I'm sorry, it won't be included in 1.18; but it may be worth working it out already so that we can land it in the 1.19 timeframe.
Another important date, five weeks ahead of us, is the code freeze. If my math is okay, we have about four weeks and a day, to be specific, before the code freeze. I do hope that this is a sufficient amount of time for all of us to deliver all the 1.18 features, from what I was checking of the tracked features for SIG CLI.
A: One last announcement: we held our first bug scrub last Wednesday. All the credit, the fame, the applause, and everything else goes to Eddie; I'm not seeing him on the call today, but the gist is that we went through a pretty big number of bugs in the kubectl repo. We will definitely repeat the same exercise, and from what we were discussing this will be held approximately once a month for starters; eventually, as the list shrinks significantly, we will reevaluate.
B: Thanks, Maciej. Yeah, so this is a topic which came up in krew quite often over the recent months: basically, how is krew installed? For anybody who doesn't know, krew is a package manager for kubectl plugins, and it doesn't come with kubectl itself. So we have a bash script on our landing page for krew, which is, well, maybe a one-liner; it's pretty small to install krew.
B: However, what we find is that people still want to have an easier way to install krew on their systems. In particular, people started adding krew to package managers, which in principle would be fine, because it gives a consistent UX for installing binaries on the system. However, in the case of krew it doesn't really make sense, because krew is designed to handle itself (krew itself is a kubectl plugin), and we don't want to give up control over when plugins are installed, and in what versions in particular.
B: Also, we found that some of those installations were not fully correct: dependencies were missing and the krew installation was simply broken. This is suboptimal from the user's perspective, because they would think that krew itself might be broken, since the install completed with success. And we don't want to maintain adding krew to all those different package managers, because it would mean a huge overhead.
B
One
option
that
we
were
thinking
about
was
if
we
could
get
a
shim
in
queue
cuddle
which
basically
tells
users
that
crew
is
not
currently
installed,
but
could
install
the
accrue
itself
on
the
system,
so
either
I
mean
there
are
different
variants
of
it,
for
the
most
basic
variant
would
be
that
just
the
the
installation
instructions
are
printed
and
then
two
different
to
various
degrees
going
to
a
full
installation
of
crew.
So
what
this
would
also
mean
is
that
the
sub
come
on.
B
Reza
Lucian
would
have
to
be
changed
for
crew,
at
least
because
if
this
Shane
were
really
in
queue,
cuddle
then-
and
it
should
only
resolve
to
the
its
shame
as
long
as
crew
is
not
installed,
which
is
different
to
other
plugin
resolution
logic
yeah.
So
basically,
it's
a
it's
halfway.
It's
getting
crew
halfway
into
coop
cuddle,
so
not
having
crew
completely
in
cute
cuddle,
but
at
least
make
it
easier
for
users
to
install
Chrome.
B: So printing the installation instructions is certainly the smallest step in the direction of getting this into kubectl, but it's a good start. We need to see whether it really reduces the need for people to push krew into package managers, which I suspect will not work completely, because even finding the plugins command is not something every user will discover, I think.
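The shim idea described here can be sketched roughly as follows. This is purely a hypothetical illustration, not actual kubectl code: the function names and the message text are made up, and the real proposal would live inside kubectl's plugin handler. The only facts assumed are kubectl's `kubectl-<name>` plugin naming convention and the krew landing page URL.

```go
package main

import (
	"fmt"
	"os/exec"
)

// resolvePlugin mirrors kubectl's external-plugin convention: an unknown
// subcommand "foo" resolves to a binary named "kubectl-foo" on PATH.
func resolvePlugin(name string) (string, bool) {
	path, err := exec.LookPath("kubectl-" + name)
	return path, err == nil
}

// krewShim sketches the proposed fallback. Unlike normal plugin
// resolution, the built-in shim only fires while krew is NOT installed;
// once a real kubectl-krew binary exists, it must win.
func krewShim() string {
	if path, installed := resolvePlugin("krew"); installed {
		return "delegating to installed plugin at " + path
	}
	// The most basic variant discussed: just print install instructions.
	return "krew is not installed; see https://krew.sigs.k8s.io/ for installation instructions"
}

func main() {
	fmt.Println(krewShim())
}
```

The inverted resolution order is the key point B raises: the shim must step aside the moment the real plugin appears.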
C: So I threw together just a couple of slides that I'll use to talk through this. One of the changes that's required for it is a change to kubectl, so I figured I would come in, introduce it to you, and answer any questions you have. So I'll start out. I was going to do a demo, but it doesn't look like we'll be able to do it; maybe we'll be able to do it in a few minutes. There are two basic changes that are required for this.
C: One of the things we want to do is generate a context and store it in an object annotation, because this is how we're passing the context between controllers right now. That way the controller manager, the scheduler, and the kubelet can all reference that same trace context, and then, when the traces get to the trace backend at the very end, it can put them all back together into a nice trace. (Is there a question? Let's see... no.)
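A rough sketch of this first change, passing a trace context between components via an object annotation. The annotation key below is invented for illustration and the real KEP may use a different key and encoding; the W3C `traceparent` string used as the value, however, is the standard wire format for trace context (version, 16-byte trace ID, 8-byte span ID, flags).

```go
package main

import (
	"crypto/rand"
	"encoding/hex"
	"fmt"
)

// traceAnnotation is a hypothetical annotation key, not the one from the KEP.
const traceAnnotation = "trace.kubernetes.io/context"

// newTraceparent builds a W3C trace-context value: version "00",
// a random 16-byte trace ID, a random 8-byte span ID, and sampled flags.
func newTraceparent() string {
	traceID := make([]byte, 16)
	spanID := make([]byte, 8)
	rand.Read(traceID)
	rand.Read(spanID)
	return fmt.Sprintf("00-%s-%s-01", hex.EncodeToString(traceID), hex.EncodeToString(spanID))
}

// attachTraceContext stores the context on an object's annotations so that
// the controller manager, scheduler, and kubelet can all pick it up later
// and parent their spans to the same trace.
func attachTraceContext(annotations map[string]string) map[string]string {
	if annotations == nil {
		annotations = map[string]string{}
	}
	if _, ok := annotations[traceAnnotation]; !ok {
		annotations[traceAnnotation] = newTraceparent()
	}
	return annotations
}

func main() {
	anns := attachTraceContext(nil)
	fmt.Println(anns[traceAnnotation])
}
```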
C: So that's the first change. The second change is that we have to send the context with the actual HTTP request that we make to the API server. This is sort of the standard way tracing is normally done: there's an RPC, you attach the context to the RPC, it goes down the chain and then back up the chain, and you get a cool little trace like this. For my demo, at least, I split out the commit that contains the kubectl changes separately.
C
And
if,
if
everyone
else
likes
the
way
that
we're
doing
tracing,
basically
the
thing
that
we're
missing
without
this
would
be
that
you
can
use
client
go
to
send
to
trace
operations.
But
then
we
really
need
a
route
of
what
the
user
input
was.
And
so
that's
why
we
need
to
make
a
change
to
keep
control
so
that
users
can
specify
when
something
should
be
traced
and
when
it
shouldn't
be.
That
makes
sense.
C: A little bit, and I can give a little bit of background. With things like cAdvisor and Heapster, we took the approach of trying to have a community-owned component that could send to a variety of different metrics backends, and that has ended rather terribly, since none of the integrations are supported and it's a pain for all of the maintainers of those components. So I'm trying to take a slightly different approach here.
C: There's a great new project called OpenTelemetry, which essentially allows us to instrument our components once and send to what's called the OpenTelemetry agent, also referred to as the collector, and that can then be separately versioned. We don't necessarily need to have experts in the community for "how do I send to Zipkin, how do I send to Jaeger, how do I send to whatever backend": we just need to send to this agent, support that one integration, and then you can actually get these traces into any backend you're interested in.
C: So you can see... can everyone see this? Yeah? Okay. So you can see that there's an initial request to create the deployment, which we can see traced in the API server, and the etcd transaction that corresponds to it. But then, once the controller manager picks that up and creates a replica set from the deployment, there are other actions that are traced for that as well. The replica set creates pods, and then, for each of the pods...
C
We
can
see
that
they're
scheduled
and
then
that
we
can
see
all
the
cubelet
work
down
to
the
container
runtime
operations
and
we
can
even
see
the
the
final
status
update.
The
cubelets
ends
to
update
the
state
of
the
pod
to
running
so
all
that's
in
here
and
you
can
get
this.
You
can
even
see
in
my
tabs
up
here
that
I've
got
a
tab
with
stack
driver
I've
got
a
tab
with
Jaeger
and
I've
got
a
tab
with
sidqin
because
we
are
making
use
of
of
the
flexibility
that
opening
Kalama
tree
gives
us.
A: So I have a question; once again, you already closed it, but going through this stack of interactions I've noted there was kube-apiserver and etcd, which is going through the API down to the controller manager and its controllers. What I was missing out there was the kube-scheduler... oh, okay, sorry, there's the scheduler; but I've noticed that the scheduler is less granular.
C: This is a really flexible framework. The crux of what I'm proposing is that we can propagate context through objects, and then any controller that reads an object and does something in order to drive that object towards a desired state can trace its action. What I've started out with, which I sort of consider the baseline, is that I've traced almost exclusively RPC and HTTP boundaries.
C
So
you
can
see
this
deployment
trace,
is
created
and
ended
when
the
request
to
the
API
server
first
hits
the
API
server
and
then,
when
it
returns
back
to
the
user
and
the
Etsy
transaction
is
when
it's
calling
out
to
@cd
and
then,
when
that
CD
responds
so
tracing
along
component
boundaries,
allows
you
to
figure
out
where
a
problem
resides.
So
that's
why
I've
started
off
with
that,
but
you
could.
C
Certainly,
if
you
are
really
interested
in
the
depths
of
the
scheduler
algorithm,
you
could
have
multiple
nested
traces
or
sorry,
multiple
nested
spans,
instead
of
a
single,
hey
schedule.
This
thing,
if
that
makes
sense,
but
any
controller
that
can
read
an
object
can
write
traces
about
the
work
that
it's
doing
on
it.
So
this
isn't
necessarily
specific
to
two
pods
and
it's
a
replica
sense.
C: What I've started out with, and for the demo, is obviously a user-driven change, but if you wanted, you could very easily have long-running processes like operators, or, another example that's been thrown out, things like autoscalers, that wait for a while and then take some action when it's required.
C
Those
can
also
all
be
associated
with
either
the
initial
request
that
created
those
objects,
but
that
may
not
make
sense
or
more
likely
each
time
that
that
controller
decides
to
make
a
change.
It'll
actually
decide
they're
not
to
trace
it.
Within
the
controllers,
say
with
some
probability,
and
then
we
can
see
some
subset
of
the
actions
that
that
controller
is
taken.
Heisey.
C: So one of the fun things about tracing is that it's an extremely rich format when you compare it to metrics or logs. It does form a tree, because it all has parent references, but it also has what OpenTelemetry refers to as tags; for people familiar with Prometheus, you would think of them as labels.
C: Not particularly in the scope of this. I mean, there is an aspect of it, since any of the components that export traces have a buffer in memory where they buffer the spans that have been collected and are to be sent to the agent. So if you're worried about memory consumption in your component, then tracing makes that ever so slightly worse.
C: And update, so any write; it would also be nice to have it for get, since the idea is that if a particular request is slow, I want to see the path that it takes through the API server. The only thing I think we don't really want to trace is watch, since that is long-running and I haven't really reasoned through what a trace for it would look like or how it would be useful.
A: I mean, the only thing is figuring out the name, because it kind of has "trace" in it, yeah. It might be slightly misleading, because some people might think that this is actually tracing the command itself, which is not actually true; but that can be worked out, and I don't see any particular objection to having this kind of functionality.
C: Right. So while this is in alpha, you'll need to have the feature gate enabled; otherwise you won't get anything. The other thing is that you do need to install the OpenTelemetry collector, or agent, and that can be run in a number of different configurations: you could have it as a one-per-cluster service, or you can run it as a daemon set. In my experience running demos, especially on GKE, there are some tricky bits with how to get traces from the master.
C
So
if
you
have
a
master
node,
that's
stuck
away
somewhere,
but
that's
educating
problem
and
we'll
have
to
figure
out
how
to
solve
that
ourselves.
So
it
the
answer
is:
yes,
you
need
to
run
a
collector
and
every
controller
that
wants
to
be
able
to
send
traces
needs
to
be
able
to
needs
to
have
one
of
those
collectors
that
it
can
send
to,
but
once
you
have
that
it's
fairly
simple
to
configure
it
to
send
to
Zipkin
or
Jager
or
whatever
your
back
into
choices,.
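For reference, a hedged sketch of what such a collector configuration might look like: the receiver section, exporter names, and endpoints are all assumptions that depend on your collector version and environment, so treat this as illustrative rather than copy-paste ready.

```yaml
receivers:
  otlp:
    protocols:
      grpc:
exporters:
  zipkin:
    endpoint: "http://zipkin.example:9411/api/v2/spans"
  jaeger:
    endpoint: "jaeger.example:14250"
service:
  pipelines:
    traces:
      receivers: [otlp]
      exporters: [zipkin, jaeger]
```

The point of the pipeline model is exactly what's described above: components send to one agent, and fan-out to Zipkin, Jaeger, or any other backend is a configuration concern of the collector, not of Kubernetes.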
C: And if I specify trace, then I can hop over to Zipkin and I can see the same things I showed you before, right: the request sent to the kube-apiserver, and I can get all sorts of lovely metadata about the request. I can see the gRPC transaction made to etcd and how long that took, and I can even see all the things from the kubelet, and the scheduler (there it is), and back here I can see the status updates that come through. So it's all there. Let's see...
F: David, that was an awesome demo, actually; that's really cool. My headphones aren't working and I'm in a shared space, so I'm sorry if it's loud; let me know. I just wanted to follow up: did we make any decisions? I think we're all in agreement that we need to get the stale bots added to the kubectl project. Okay, did anyone volunteer to take that on?
A
Haven't
heard
and
I
haven't
got
a
chance
to
work
through
it.
So
if
you
have
the
time
and
you're
interested
yeah
go
ahead,
just
ping
me
into
in
the
PR
or
by
slack.
That
would
be
the
simplest
because
getting
through
the
make
it
happy
knows
it's
tough,
so
slack
is
the
preferred
open,
a
PR
and
I
think
we
can
get
it
merged
fast
enough.
A: We figured out that the bot that is responsible for closing inactive issues is not working in the kubectl repo, or at least we haven't seen it working, and all of us present at the call agreed that we would like to have it configured for the kubectl repo as well. So D is going to add that kind of functionality... or no, I still have to figure it out, because I'm pretty sure I heard that this is only about adding the kubectl repo somewhere in the config.
A: Let's start with kubectl; let's clean this one up. I've noticed that Jeff and Jingfang are doing a pretty good job with kustomize, and on the krew front Cornelius and Ahmet are doing pretty well. I think kubectl itself is in the worst situation as of now. So once we settle on a good process, as we figure out all the old issues (there's quite a few with regard to kubectl), we can sync with Jingfang and Cornelius and figure out where we want to go with krew and kustomize.
A: Okay, I'll add, from my end: I'm trying to merge the removal of all the generators from under kubectl run. If you pay attention and try to run kubectl run for creating deployments or stateful sets or basically anything except for a pod, you'll probably notice that there was a deprecation warning; the deprecation warning was there for over a year.