From YouTube: Kubernetes SIG CLI 20191023
A
One will be a 90-minute session during the main part of the conference, where we will be replying live to questions and concerns and literally picking the topics that people want to talk about. The other one is in the Contributor Summit: at 9:30 a.m. we will have a 90-minute session where we will introduce what kubectl is, how to get involved in kubectl development, how to write plug-ins, and generally topics around contributing to SIG CLI.
A
Okay, maybe he'll get a chance to join in. In the meantime, let's skip on to the next issue. So, I think it was proposed somewhere in Slack, or in a discussion with a couple of folks: there's a proposal about what we do by default. If you're interacting with kubectl against a running cluster, we will be doing a pre-validation of the resources you're submitting to the cluster using the OpenAPI schema.
A
Up until this point that schema was not fully structural. Thanks to awesome work from SIG API Machinery we're getting significantly better at this, so the ratio of noise to actual failures is getting closer to the validation that is happening on the server. But there are still some discrepancies between the two, and there was a proposal about what should happen for non-interactive kubectl invocations.
A
When you're trying to install cert-manager, you have to manually pass the validation flag, but if you're using a built-in mechanism (I'm not sure what kind of mechanism cert-manager is using) you can't pass the flag through the machinery that cert-manager is using. That's why in their case it is failing badly and you're not able to use kubectl. That's what they mentioned in that issue about cert-manager, and that's why they brought it up.
B
I see here there's something to preserve unknown fields: true.
B
I think by default it should fail, even in non-interactive mode, because if you have a GitOps system and something fails validation, it's not clear that you want to apply it anyway. In fact, you don't want to; you just want it to fail and tell you "you put a typo in your thing" instead of making someone go find the logs. Unless there's some edge case where it's like...
A
Given that there's a reasonable (maybe not an easy, but still a reasonable) approach to bypass the initial problem, I'm kind of hesitant about actually implementing it, especially given the amount of work that SIG API Machinery put into the structural schema. The direction that is going forward will allow us to have really strong validation on the server for CRDs. And then again, this is only for CRDs, and I'm worried that we would not have a check within kubectl, which would only...
B
Can't they just run kubectl apply with validation, and then, if it fails, run it without validation? It should be idempotent, right? So then you get your error messages and you get to apply your resources anyway, and it's enough extra steps that it should be clear that you must know what you're doing and really intend this behavior. Sorry, I interrupted someone else, I'm not quite sure who that was; I think someone was trying to say something. Yep.
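A minimal sketch of that workaround as shell; manifest.yaml is a hypothetical file, and --validate is the existing kubectl flag for client-side schema validation:

    # First try with client-side validation on, so schema errors surface up front.
    kubectl apply --validate=true -f manifest.yaml \
      || kubectl apply --validate=false -f manifest.yaml
    # The fallback skips only the client-side check; the API server
    # still performs its own validation on the submitted resources.

Note the bare || retries on any failure, not just validation errors; a real script would inspect the error first.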
B
Validation... well, it says it's related to the Kubernetes preserve-unknown-fields, so that may have been introduced in, or could have been changed in, 1.16, I don't know, because I think the CRD validation is done server-side, if I recall correctly, although maybe I'm wrong on that. The preserve-unknown-fields, it's a...
A
It protects that particular field from pruning. So, for example, if you have some kind of object embedding, like a configuration within your nested field, you would want to set it on that configuration field, which most frequently is a runtime.RawExtension; yeah, RawExtension is the one. You want the pruning mechanism to stay away from that particular field only.
A
Other than that, I'm guessing (I haven't looked through that particular CRD that they have in question), but, oh yeah, config: that particular one is an additional configuration for the webhook API server, and that makes sense. In that case you just need to ensure that this particular field and everything within it won't be proactively pruned by the CRD pruning mechanism.
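For reference, a minimal sketch of how that is expressed in a structural schema; the group, kind, and config field below are hypothetical, while x-kubernetes-preserve-unknown-fields is the actual apiextensions.k8s.io/v1 extension under discussion:

    apiVersion: apiextensions.k8s.io/v1
    kind: CustomResourceDefinition
    metadata:
      name: examples.sketch.example.com
    spec:
      group: sketch.example.com
      names:
        kind: Example
        plural: examples
      scope: Namespaced
      versions:
      - name: v1
        served: true
        storage: true
        schema:
          openAPIV3Schema:
            type: object
            properties:
              spec:
                type: object
                properties:
                  config:
                    # Embedded free-form configuration (a runtime.RawExtension
                    # on the Go side); pruning leaves this subtree alone.
                    type: object
                    x-kubernetes-preserve-unknown-fields: true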
B
Maybe there's a stronger argument, but "try the workarounds that we've suggested" would be my response, I guess. This seems like it's going away anyway, right? Like: supply the flag if you're running 1.15 or below. Say it takes us a release to get this out, and then six months after that maybe everyone's using the new version, so it's not an issue anymore, I don't know. But if you're using kubectl one...
A
Then you can add containers to your existing pod spec. If you have ever tried modifying a pod spec, you probably know that you cannot modify it by default, because most of the time it is already running. So ephemeral containers expand this idea by allowing you to add additional containers to an already-running pod spec.
A
So that's a pretty neat idea. Ephemeral containers also allow you to attach to an existing container within a pod. They don't have any guarantees, such as being restarted: the moment the container dies, no matter whether it was killed or just finished its execution, it will be done; no restarting, nothing more.
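As a concrete sketch under the alpha API of that era (the EphemeralContainers feature gate must be enabled; the pod name and debug image here are hypothetical), one way to exercise this was to write to the pod's ephemeralcontainers subresource and then attach:

    # ec.json: an EphemeralContainers object naming the target pod
    {
      "apiVersion": "v1",
      "kind": "EphemeralContainers",
      "metadata": { "name": "example-pod" },
      "ephemeralContainers": [{
        "name": "debugger",
        "image": "busybox",
        "command": ["sh"],
        "stdin": true,
        "tty": true,
        "terminationMessagePolicy": "File"
      }]
    }

    # Inject the container into the running pod, then attach to it:
    kubectl replace --raw /api/v1/namespaces/default/pods/example-pod/ephemeralcontainers -f ec.json
    kubectl attach -it example-pod -c debugger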
A
So it sounds like a perfect fit for debugging, because you could literally spin up a container right next to the running container that you're struggling with or having problems with, and it all looks very neat. My only problem is that this requires the pod spec to proactively turn on the ephemeral containers, and I'm not sure that is always the case for your deployment or whatever workload you're running.
A
And the area that Lee is covering in this particular proposal is with regard to distroless containers, meaning that you cannot by default exec into them. For those particular containers, ephemeral containers are the only reasonable way in, but not all workloads running in a cluster will be like that. So I was actually thinking about expanding and building on top of Lee's idea and proposing something bigger than just the ephemeral containers.
A
I would like to see a broader proposal, and if I find the time I'll try to sync with Lee as well, and if I get the time I'll try to write it down as an additional KEP built on top of what Lee proposes here, which would allow running a pod alongside your running app and being able to inject yourself, or inject any tooling, and then debug your running workload. Before I start working on it, I would like to ask other folks how they feel about it.
B
Getting information about that specific inner pod, I'm guessing, would be for accessing services or that sort of thing, but not like getting ps output, yeah. So I think what I heard you saying is effectively looking holistically at the debug workflow and the inspection workflow, and including a number of different use cases, yeah.
B
Okay, that makes sense to me. The thing is...
E
Apologies if you're hearing background noise. So apparently there are actually two debug plugins today: one is the one that Lee has written, which we have onboarded into krew, and apparently there's another one that is written by someone else. If you search for "kubectl debug" directly on Google, I think that's the first result, and it has some 800 stars on GitHub today. I think that one does not use the ephemeral containers.
A
In the OpenShift world we also have the oc debug command which, from looking at the kubectl debug plugin you just mentioned, is very similar, because it runs a pre-installed troubleshooting type of image. So I'm guessing combining all three into a single one... I know it will create a monster at some point in time, but if we try to figure out a reasonable approach to this, we could provide users with valuable tooling for debugging.
A
A container where you have full access to your node's file system; so, you know, one single command and then you can run everything, although we don't support ephemeral containers yet. That's why I would like to see all of those contained under a single command, and it seems reasonable to give users this flexibility.
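For reference, a brief sketch of the oc debug flow just mentioned; the node name is hypothetical. The command starts a troubleshooting pod on the node and mounts the host's root filesystem at /host:

    # Start a debug pod on the chosen node (hypothetical name):
    oc debug node/worker-0
    # Inside the debug shell, pivot into the node's own filesystem:
    chroot /host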
A
6:30 p.m., yep, okay. In that case, I'll definitely sync at my usual times with him over Slack, and we'll try to come up with a broader debug proposal, because I'm pretty sure that it is valuable for the broader community. If, like Ahmet said, kubectl-debug has 800 stars (not sure how Lee's debug plugin is doing, but yeah), I see that the majority of us agree on the point, yeah.
B
The only thing I'd add is: if any piece of these were like, "oh, this is valuable, we think this is valuable, but we're trying to figure out what the name of the command should be, or how this command relates to this other command", let's just put them in alpha when they seem correct, and then, you know, put in the release notes that we're going to change the names of these things and that sort of stuff as we figure it out. Oh yeah.
G
Yeah, that sounds great. Actually, I was talking with Nam about how to name the plugin for krew, and that sort of came up, right? I wanted it to be "debug", but it really only focuses on the pod. If we can make it general-purpose, I think that would be great, so I will definitely go back and review the part of the meeting that I missed when it gets posted.
A
Yeah, I would even go all the way to: let's get the proposal up, outline the main functionality, and start cranking on an alpha debug command. We can even start cranking on it in 1.17: even though it's not an actual feature, we can hide the command towards the end of the release so that it is not available to users, but we will continue working on the functionality.
F
My name is Michael Gugino. I work for Red Hat on OpenShift, and I'm also a member of SIG Cluster Lifecycle, focusing on the Cluster API project. I just want to introduce myself; you've probably seen a PR from me in the last few days fixing some bugs in drain, and I just want to give some context on what we're doing with that.
F
So we're trying to build automation to add and remove machines in the cluster, both upstream and downstream. Downstream we've been supporting our own version of drain, and we're hoping to migrate away from that. Not too long ago upstream kubectl drain was refactored to make it more usable as a library, so we're currently consuming that upstream in Cluster API. So the workflow is, when you delete a machine, as we call it:
F
First you drain the node, and then you get rid of the VM. I have a couple more ideas and features I'd like to add to better support that use case; ideally they'd be broadly applicable to other people consuming this as a library. So if you see more stuff from me, I just wanted to give you context on that.
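A minimal sketch of consuming that refactored library (k8s.io/kubectl/pkg/drain) the way a Cluster API style controller might, assuming a clientset and node are already in hand; the package name is hypothetical and the Helper field names follow the API of that era:

    package machinedrain

    import (
        "os"
        "time"

        corev1 "k8s.io/api/core/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/kubectl/pkg/drain"
    )

    // drainNode cordons the node and then evicts or deletes its pods,
    // mirroring "kubectl drain" before the machine's VM is deleted.
    func drainNode(client kubernetes.Interface, node *corev1.Node) error {
        helper := &drain.Helper{
            Client:              client,
            Force:               true, // continue even for unmanaged pods
            IgnoreAllDaemonSets: true, // daemonset pods come back anyway
            DeleteLocalData:     true, // field name as of the 1.16/1.17 era
            GracePeriodSeconds:  -1,   // use each pod's own grace period
            Timeout:             2 * time.Minute,
            Out:                 os.Stdout,
            ErrOut:              os.Stderr,
        }
        // Mark the node unschedulable first, then drain it.
        if err := drain.RunCordonOrUncordon(helper, node, true); err != nil {
            return err
        }
        return drain.RunNodeDrain(helper, node.Name)
    }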
A
It's hard for me to say at this point in time how far towards a KEP you actually have to go, because I don't see the scope of the work. From what you're saying so far, the input is small improvements and small fixes, and I'm not sure if it's worth a KEP if these will be insignificant improvements. If you will be rewriting a significant part of the code, then my answer would be yes, I would like to see a KEP for this. Okay.
F
Right now it's more along the lines of, you know, adding small incremental things, like adding a context, as we discussed on my other PR. So they are real small stuff, but I can file an RFE and outline some of those things, and if we need to go full KEP, I guess we can do that. Yeah, yes, yeah.
H
Michael, this is Sean Sullivan. I just had a couple of quick notes on the drain work. Number one, it might be useful to include Justin Santa Barbara in some of the PRs, because he actually did the drain refactoring that you're describing. And then the other thing is, to the extent possible, if we can add to the tests: since this is going to be used widely and vendored widely, I think it'll be really useful to enhance our test coverage, like your recent PR that had the goroutines.
F
Yeah, I work with Justin Santa Barbara on the Cluster API project, so I work with him pretty frequently, and I've looped in others from that project as well. I mean, obviously tests are always good; I try to contribute tests when I can and when they don't take a ton of work. Like we said, these leaked-goroutine things look like they can get a little messy to test synthetically, but yeah, I definitely try to test where I can. Cool, thanks.
A
Especially if you are expanding on the work to provide a library that will be consumed by other users, I would say the tests are crucial, because that's most probably the only way to ensure that we don't break users in any way.
F
Okay, cool. I don't think I personally have an immediate need for that, but I'm sure it'd be useful to very many other people, because it is sometimes a mystery what's going on.
I
Also, in regard to drain: at krew we had a plugin submission where a user made a plugin that allows evicting a pod without violating the pod disruption budgets, which is something that is not possible with kubectl at present, unless I'm missing something. Maybe that would also be something which could be included.
F
Yeah, the eviction has to succeed, and then the kubectl library behind the scenes calls delete, I believe. Or no, excuse me: the actual PDB controller approves the eviction and then delete is called. So I don't think there's anything on the API server portion blocking the call to delete, so yeah, you'd have to do that client-side.
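For context, a sketch of what that eviction call looks like at the API level; the pod and namespace names are hypothetical, and policy/v1beta1 was the eviction group/version of that era:

    # eviction.json: an Eviction object naming the pod to evict
    {
      "apiVersion": "policy/v1beta1",
      "kind": "Eviction",
      "metadata": { "name": "example-pod", "namespace": "default" }
    }

    # POST it to the pod's eviction subresource; the request is rejected
    # (HTTP 429) if it would violate a PodDisruptionBudget.
    kubectl create --raw /api/v1/namespaces/default/pods/example-pod/eviction -f eviction.json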
F
Well, that seems like it's not really in the context of drain specifically, because drain is going to use the eviction API if it's available, and so we wouldn't really hit that case. I guess that would just be more about kubectl generally deleting things or something. I don't think there's a workflow where you can currently specify "just go straight to delete", so it's always going to use the eviction API if it's available.
I
Yeah, I'll start, and feel free to add anything. So we've had a meeting with Kui. As you might remember, there was a demo of casks and a Kui extension, which was pitched at the last SIG CLI meeting, and together we looked at how casks and krew could work together. As we understand it right now, casks are sort of duplicating plug-in management work, because they would basically manage UI plugins.
I
So from our perspective this does not make so much sense, because it would duplicate work that krew has already done and which is working. So we thought about how we could do this in a more efficient way, meaning not duplicating work and getting it done quickly, and there are basically two options. The first option would be to have fat UI plugin bundles, which include an Electron instance and Kui for each UI plugin.
I
So in that picture the cask would totally be gone, and this might work well because right now there will probably not be so many UI plugins, so the actual duplicated space on disk and the download bandwidth are probably limited. Another idea, still using the installation machinery of krew, would be to have a base plugin, which installs the Electron and Kui dependencies, and then have thin UI plugin bundles, which come with a helper binary which first checks...
I
I think for the second case a lot of the cask work could actually be reused. So yeah, let's see where this goes. I see that the GitHub issue is already linked, so you can also find a bit more about the discussion and some meeting notes there. And about the upcoming KubeCon, I think that's all I can say, because I don't know anything about it.
H
The binary... I've actually got a binary working; I need to work on flags, but once we have that, then we can remove the convert dependency. I don't believe the final one, the auth reconcile dependency, the dependency on RBAC, is going to be addressed anytime soon. Jordan said he started down a particular path and it didn't work out very well, but that's where we are now. Any questions?
A
No, but we could copy it and ensure that... that is not his main worry, at least from what I was talking with him about last time. His main worry is that if we publish this code under kubectl, it'll be easier for people to consume it and then become reliant on the code that we published with kubectl, and that it will be wrong. But I guess we could, we could...