From YouTube: Kubernetes SIG CLI 20200701
A: Okay, good morning, good evening, good afternoon, depending on where you are. Today is July 1st, my voice is drying out, and this is another of our bi-weekly SIG CLI meetings. My name is Maciej and I'll be your host. Today our agenda is packed with topics, and I see more are coming in, so I do hope we will be able to finish by the end of the hour.
A: There are two announcements. One I haven't put there: there's a code freeze next Thursday, July 9th if I remember correctly. So if you are in need of a review, an approval, or anything like that, reach out on Slack. If you need me, ping me several times until you get your stuff approved or reviewed, and then hopefully it will land for 1.19. The other one, I'm not sure who put it here; I'm guessing Phil or Sean might have.
A: The KubeCon North America CFP was pushed to July 12, or maybe Eddie added that one. Anyhow, that's definitely something worth noting. I'm not sure what the format of KubeCon North America will be; I think we will figure that out soon. KubeCon North America, exactly. And I remember that the European version of KubeCon will be happening mid-August.
C: Yeah, that's correct. Go ahead. So, a quick intro: I'm Vinay, I work at Futurewei out of Seattle, and I've been working on this KEP and feature for resizing your pods without restarting them, which is in-place vertical scaling. The feature is now code complete and we're trying to target it for 1.19, that is, by July 9th, and I was hoping... there are a couple of files that need to be reviewed by the SIG.
C: The robot gave it the sig-cli label, but approvals are also required for the PR to be approved, and this is a change to the describe function for pods and nodes. It comes from the design of this new feature. What it does is: in the pod spec you have containers, and the containers have resources fields for limits and requests (CPU, memory, and ephemeral storage). This feature makes those fields mutable after the pod has been created, and adds tracking of what resources are actually allocated.
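The fields being discussed can be sketched as plain data. This is a hypothetical Python model, not the actual Kubernetes API types (those live in the core/v1 Go package, and the real field names may differ); it only illustrates which fields the feature makes mutable and the extra bookkeeping for allocated resources:

```python
# Hypothetical sketch of the pod-spec fields the feature touches.

def make_container(name, requests, limits):
    """A container spec with resources.requests/resources.limits
    (CPU, memory, ephemeral-storage), plus a field tracking what
    is actually allocated (illustrative name)."""
    return {
        "name": name,
        "resources": {"requests": dict(requests), "limits": dict(limits)},
        # Tracks what was actually allocated, so a resize can be
        # observed even before the new values take effect.
        "resourcesAllocated": dict(requests),
    }

def resize_in_place(container, new_requests):
    """With in-place vertical scaling, requests/limits become mutable
    after pod creation: update the spec instead of recreating the pod."""
    container["resources"]["requests"].update(new_requests)
    return container

pod = make_container("app", {"cpu": "500m", "memory": "256Mi"},
                     {"cpu": "1", "memory": "512Mi"})
resize_in_place(pod, {"cpu": "750m"})
```

Note how the spec's request changes while the allocated snapshot keeps the old value until the kubelet acts on it; that gap is exactly what the describe changes discussed here would surface.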
A: Yeah, I can definitely have a look at it. A week is definitely a reasonable timeframe for me to have a look. I quickly scanned those changes, and the majority of them seem pretty straightforward from my point of view. Okay, since you're adding those changes, at least for that part, what would be the easiest way to verify this thing? I mean, I don't have... if you could, if there's a chance, could you split it? Because I notice that this PR is pretty big.
C: Yes, the PR is big. I think in the links that I sent out there are two files in kubectl. Can we redo it to split it so that the kubectl changes go into a single changelist? I can do that. I have about ten commits right now; I was looking to see if I can compress them smaller as the reviews went by. Let me see if I can do that, but it's just... give me one second.
A: So, personally, and I've been giving the same advice to several people already, and I know that other maintainers feel similarly: it is so much easier to get smaller bits in rather than big ones, because if you have a big one it's harder to review and, like I mentioned, some interchanges between one part and another might be blocking other pieces. So if you could split this thing into several smaller parts and then target specific people from specific groups, specific SIGs that are responsible just for that area, then, trust me, it will go much faster.
C
Is
part
of
this
PR
I
think
the
I
was
trying
to
figure
out
how
this
can
be
done
in
two
separate
piers
with
because
API
needs
to
be
in
before
this
PR
can
be
even
created.
So
the
way
I
was
looking
at
it
was
their
separate
commits
in
the
beer
tent
in
different
commits,
and
in
this
case,
I
had
one
commit
for
the
cube
serial
changes,
but
then
I
realized
after
the
tests
ran
that
I
am
NOT
able
to
use
the
the
feature
gate
flag
in
cube
CTL.
C: So that was one of the questions I had for here as well. Is there a way to disable this? This feature is going to be alpha in 1.19, if it makes it. Is there a way to disable these changes in kubectl for alpha, and then, when the feature gate flag is switched on in beta, it turns on the code changes, so they are there for anyone who wants to see it or try it out? Is there a way for us to do that?
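The gating being asked about can be sketched generically: a switch that hides the new output until the feature is enabled. This is a hypothetical Python sketch with made-up names (gate name and function are illustrative; kubectl's real feature gates are server-side and this is precisely what the speakers question below):

```python
# Alpha feature: off by default, flipped on for beta (hypothetical gate name).
FEATURE_GATES = {"InPlacePodVerticalScaling": False}

def describe_resources(container):
    """Only include the new allocated-resources line when the gate is on,
    so an alpha build shows nothing rather than misleading zeros."""
    lines = [f"Requests: {container['requests']}"]
    if FEATURE_GATES["InPlacePodVerticalScaling"]:
        lines.append(f"Allocated: {container.get('allocated', {})}")
    return lines

c = {"requests": {"cpu": "500m"}, "allocated": {}}
off = describe_resources(c)   # gate off: the new line is absent
FEATURE_GATES["InPlacePodVerticalScaling"] = True
on = describe_resources(c)    # gate on: the new line appears
```

The design question raised in the rest of this discussion is whether a client-side gate like this is appropriate at all, given version skew between kubectl and the API server.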
A: ...will give you the full information about restoring. With the API not in, I might be cautious about whether we will be able to expose that in kubectl, but I would definitely start with getting the API in, because the API changes need to go through the API review committee, and only when you get that bit in will you be able to land the rest of the bits. So yeah, all right.
C: It's already been reviewed. Tim Hockin had looked at my changes in the API and given some comments, and they've been addressed. Right now the first commit in the PR is the API changes, and it's waiting for either Tim Hockin or Jordan Liggitt to look at it and say "yeah, it's good" or "no, it's not". My concern is, even after it goes in, let's say we have the kubectl changes the way they are right now.
C: The fields are going to be there in the API, but they're not going to be enabled. So when you do describe, it's going to show zeros, which is fine, because that's what it is when the feature is disabled. Ideally, I'd like to not show those fields at all, since they're zeros when the feature is not enabled, but I couldn't figure out a way to do that. So...
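One way to get the behavior C wants (omit the fields entirely instead of printing zeros) is to key the output off whether a field is set, not off its value. A minimal sketch of that idea, not kubectl's actual describe code:

```python
def describe(pod_spec):
    """Emit a line per resource field, skipping fields that are unset (None),
    so a disabled feature leaves no zero-valued noise in the output."""
    out = []
    for field in ("requests", "limits", "allocated"):
        value = pod_spec.get(field)
        if value is None:
            continue  # feature disabled / field never set: show nothing
        out.append(f"{field}: {value}")
    return out

enabled = describe({"requests": {"cpu": "1"}, "limits": {"cpu": "2"},
                    "allocated": {"cpu": "1"}})
disabled = describe({"requests": {"cpu": "1"}, "limits": {"cpu": "2"},
                     "allocated": None})
```

Whether the Go describe code can distinguish "unset" from "zero" depends on the API type using a pointer or omitempty field, which is part of the API design question the group defers here.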
E: Let me just chime in here for a minute. Considering the UX, the CLI experience probably deserves its own discussion, independent of the API. There are things to consider like client-server version skew, right? Even if the feature is enabled, it may be enabled on the server or it may not be, and the client may know nothing about it. So you could merge this, have it available in the server, have it enabled, and have a kubectl...
E
That
does
not
have
this
code
in
it.
You
could
have
it
not
merged
in
the
server
and
not
available
as
a
resource
at
all
and
a
to
control.
That
does
have
this
right,
so
there's
and
then
in
minutes.
Additionally,
the
many
of
the
cou
control
commands
already
like
support
generic
sort
of
resources
like
apply.
You
don't
need
to
do
anything
for
describe,
should
just
work
for
CR
DS,
so
I
think
there's
definitely
a
discussion
here
of
like
why.
E
Why
do
we
need
to
make
changes
to
cou
control,
because
we
wouldn't
be
able
to
do
this
for
a
C
or
D
like
when
we're
introducing
a
new
API
and
so
like?
Is
it
something
that's
necessary
or
not?
So
all
that
is
to
say
this
probably
is
worth
having
its
own
discussion,
and
it's
not
a
discussion.
You're
gonna
want
to
have
on
this
PR.
A: How does it work for CRDs? The problem is that I can't remember what we currently do with CRDs. I do remember that I was talking with Clayton about doing a brain dump, because some time ago he mentioned that he has an idea in his head; I asked him about the brain dump for server-side describe, I thought.
E
Described
in
heuristics
like
to
essentially
dump
out
the
best
it
could
based
on
what
it's
all
looking
at
the
API,
and
so
it's
entirely
possible
that,
like
the
what
is
dumped
by
describe
it's
sufficient,
or
at
least
like
not
terrible
and
it's
it's
might
be.
Actually
the
one
thing
I
would
suggest
doing
before
you
get
the
API
merged
is
run
just
to
control,
describe
without
the
changes
and
see
what
it
spits
out
and
that'll
give
you
kind
of
a
like
a
gut
check
for
like.
C: When you write back, when you do an update or a patch, and you don't know about the field, then we need to be able to handle that case. That was one of the issues that we discovered during the first round of API reviews, and I fixed it. So what I did is: I took the current head of the master branch, built kubectl, and used that to run get and set against my feature, and it works; it doesn't have any issues.
C
Yeah
yeah
I
mean
mutations,
I
can
I
can't
try
the
describe
I
didn't
try
to
describe
in
particular,
but
I
tried
get
in
my
eye.
I
did
it
to
an
EP
I
wrote
a
little
gogo
code,
pretty
much
uses
the
same
API
is
that
the
cube
cube,
see
deal,
give
good
losers,
but
I
believe
I'll
try
this
out
just
to
complete
the
loop
and
make
sure
that
there
are
square
it's
fully
squared
away.
However,
I
get
the
feeling
that
this
feature,
this
change
to
coop
control
is
better
it's
better
to
bring
it
in
once.
C: So when the feature is disabled, there is nothing there: the resources field is null, and describe will look at the allocated resources and give zeros right now, and that's fine, because it is not lying; it's telling you there is nothing there. It's showing zeroes, but if it's just zeros there's nothing to see. So the way I look at it is: if there's nothing to see, then it shouldn't even be there when the feature is disabled, rather than showing it. So why do you see it at all? I think that's the gist of it.
E
Is
I
think
get
the
yeah?
My
advice
would
be
get
the
API
merged
yeah,
knowing
that
at
a
minimum
like
you're,
not
blocked
on
ku
control
like
you
should
be
able
to
write,
be
successful,
whether
you
make
the
Google
updates
o
consume,
that
it
probably
makes
sense
to
do
the
KU
control
updates
when
you're
thinking
about
going
to
beta
you.
E
However,
you
know
that's
not
necessarily
set
in
stone,
because,
because
of
the
version
skew
semantics
that
are
supported
like
you
could
be
running
a
version
of
KU
control
that
you
know
doesn't
like
you
could
be
rolled
it
running
an
older
version
of
KU
control
against
a
newer
version
of
the
API
server.
Soon
theory
you
could
have
like
API,
fully
enabled
and
available
yeah,
but
running
could
control
that
it
wasn't
at
that
version.
E
So
that's
where
there
could
be
an
argument
made
for
put
it
in
before
it's
you
know
to
beta,
but
but
I
don't
think
it's
like
it's
super
critical
one
way
or
the
other
okay.
E: Probably not a KEP, but at least explain it, yeah, sure. Because we're going under the assumption that no changes are necessary for new types, as we're trying to support CRDs, and this would be an exception to that sort of philosophy. So it would be good, at a minimum, to know why we are making that exception, just because it's a change to a core system type.
C: Yeah. For the argument, I think the main point is: as people are trying it out, for debuggability it's easier if they can see it using just one quick kubectl command. But it's not a blocker; you can do "kubectl get -o yaml" and get the information that's in the pod spec. So to minimize the risk for now, please do take a look at it if you get the chance; it's the second link, that pulls up the files. We need to identify a reviewer for this feature's kubectl changes.
A: What we did with building kubectl plugins... one of the requirements was having some kind of a shared library where core functionality from the Kubernetes API can be shared, and it turns out that we can also use that library. I'm not sure what the final name will be; there are discussions about the naming strategy. But basically, what's important for us is that the functionality that we share with the API server for RBAC, which is behind the "auth reconcile" command, can be placed in that shared library.
A: We will be able to extract the shared libraries between the API server and kubectl that are needed for the "kubectl auth reconcile" command, and that is the last piece for us to be able to move kubectl to its own repo. So, fingers crossed, with lots of good luck, I'm hoping that 1.20 will be the release when we will have kubectl in its own repository.
A
If
you
want
to
have
a
look
at
the
PR
I
linked
it
in
the
agenda,
if
you
have
any
suggestions
or
if
you
know
that
other
six,
because
currently
the
motivation
is
our
six
CLI,
our
back
requirements,
six
scheduling
helpers
requirement,
but
if
you're
part
of
other
sick
that
know
that
might
benefit
from
this
shared
library
project
definitely
comment
on
that
one.
We
will
be
also
reaching
out
to
other
states,
probably
through
Kate's
death,
mailing
list
asking
for
their
feedback.
What
else
she
can
potentially
be
part
of
that
shared
library.
A
So
I
do
hope
that
we
will
be
able
to
finally
close
the
queue
cuddle
move
to
its
separate
repo
slowly,
but
slowly,
but
there
is
a
super
bright
light
in
the
tunnel.
Does
anyone
have
any
questions
for
this?
One.
B: It's basically a YAML filter list against GitHub API calls, so it's a nice dashboard where you can write all these custom rules to show you different issues from different repos. I deployed a quick version of it here, pulling in the kubectl repo, and I'm working on getting the tagged Kubernetes issues pulled in here as well; it's a little more complicated to do cross-repo filters, but the idea is that, hopefully, we can...
B
So
when
we
started
doing
bug
scrubs
we
had
I
think
it
was
close
to
like
400,
open
issues
and
now
we're
down
to
like
40
or
50
right.
So
we've
done
a
lot
of
work
and
definitely
want
to
like
keep
up
good
momentum.
So
the
idea
behind
this
is
we
can
separate
out
all
these
sections
again.
There
just
yeah
most
filters
and
if
you
go
all
the
way
back,
set-top
moshae
yep.
B
This
is
this
is
like
the
main
one
here,
there's
also
tabs
at
the
top
I
think
it's
probably
collapse,
because
your
browser's
are
smaller,
but
there's
different
sections
for
different
types
of
triage.
There's
like
daily
weekly
monthly,
quarterly
all
this
stuff.
So
you
can
filter
on
all
that,
but
the
basically
these
are
unpaired
eyes
issues
older
than
seven
days
right,
so
just
a
very
simple,
straightforward,
EML
filter
that
will
pull
all
these
in
and
I
guess.
B
The
idea
behind
this
is
that,
as
long
as
we
keep
addressing
these
issues
and
keep
getting
my
cue
down-
and
hopefully
we
can
spend
less
time,
bug
triaging
and
can
branch
out
to
like
customizing
some
other
cool
bug
triage
stuff,
so
we're
almost
we're
almost
there
with
keep
control.
So
if
everyone
just
takes
a
look
at
this,
we
can
move
this
to
a
different
domain.
At
some
point,
I
just
threw
up
on
vanity
domain
I
have
and
then
what
I
really
wanted
to
discuss
was
so
for
labels.
B
So
we
realized
that
we
use
key
0
1
2
3
labels
inside
the
cube
control,
repo
I,
don't
think
any
of
the
other
SIG's
or
repos
use
those
labels.
They
use
the
the
word
ones
like
that
log
to
be
done
soon
or
something,
and
so
I
wanted
to
propose
or
get
thoughts
on
people's
solution.
Let
me
step
back
so
Sean
said
that
he
just
reached
for
those
because
they
were
there
and
his
default
for
a
issue
is
like
a
p2.
But
there's
really
no
like
description
of
what
the
different
priority
levels
mean.
B
What's
nice
is,
if
you
look
at
the
label
list
inside
did
have
issues
they,
although
the
worded
ones
have
like
a
clear
description
of
what
it
is.
So
it's
like
this
issue
should
be
done.
You
know
this
should
be
prioritizing
someone's
main
task
that
they're
working
on
right.
So
it's
very
clear
descriptions,
so
I
want
to
get
thoughts
or
basically
proposed
switching
or
using
the
P
ones
to
the
P
worded
ones.
Me
alone
we're
clear.
A: So I personally don't have any objections. We could probably try to automate translating the old ones, or eventually... yeah, we'll have to figure out what we want to do with the old ones. Do we want to translate them, or will we leave the old ones as-is and continue only using the new ones, and if we stumble upon the old ones just add the new label next to the numbered one? I personally don't have any strong opinions on that, yeah.
B: This applies to PRs as well; there should be a section below this somewhere for PRs. I bring up this stat a lot, and I know this is hard for us to see, but if we respond to issues or pull requests within 48 hours, people are 90% likely to contribute again, or even more likely. So that's just a GitHub stat overall.
E
Yeah,
that's
all
yeah
I'm
I'm,
all
hard
on
that
one
cool
is
that
yeah
and
just
to
clarify,
like
that's
four
issues
relating
to
contributions
or
is
that
for
like,
like
is,
does
someone
filing
a
bug
on
coop
control?
Does
that
mean
like
they're
90
percent,
more
likely
to
file
another
bug
or
like
contribute
in
the
same
capacity
or
do
you
have
any
more
insight
into
what
that
means?
I,
don't.
B
Have
the
granular
there,
but
it's
it's
you
we
don't
necessarily
have
to
like
fix
or
do
anything
it's
just
like
an
act
like
hey.
We
saw
this
PR,
it's
labeled
as
prioritized
like
it's
someone.
You
know
basically
make
someone
feel
seen
and
heard
pretty
quickly
for
contributing
just
increases
the
likelihood
that
they'll
contribute
again
so
take
that
for
what
it
is.
B: I don't want to spend too long on this one either, but I was discussing with Brian yesterday the PR that I have open around adding kubectl commands and kind of standardizing what a kubectl command could, or should, look like. We kind of ran into this thing where we're lacking the historical context on where this Complete/Validate/Run pattern came from, and I was just hoping to get some insight there.
A: Historically we had one big function where we put everything; it was super hard to test, and the majority of the tests were written such that you were initiating test factories and then injecting those test factories. Back in the early days, the factory that was behind kubectl was pretty big, with lots of data in it.
A
Throughout
the
couple
year
releases
we
cut
the
factory
to
minimum
hopefully,
and
we
figure
that
for
ease
of
tests
and
for
comfort
II,
we
will
go
with
a
pattern
where
we
will
have
a
structure
describing
the
command
and
methods
which
will
be
describing
the
three
steps,
because
we
figure
we.
We
have
noticed
from
going
from
command
to
command
that
these
were
the
three
usual
action
people
will
doing
we're.
First
of
all,
you
would
be
filling
in
creating
clients
defaulting
some
values.
A
Do
additional
parsing
of
the
commands.
Basically,
what
is
currently
happening
within
the
complete?
Then
there
was
usually
the
validation
phase
and
then
eventually
was
the
the
actual
execution
of
a
command
and
to
be
able
to
focus
your
unit
tests
on
particular
functionality,
whether
it
was
whether
validation
is
working
properly
or
whether
your
main
execution
code
of
the
command,
the
ability
to
inject
all
the
prerequisites
and
then
just
invoke,
run
and
check
the
output
was
the
simplest
possible
way.
Similarly,
we
come
up
with
because
that
was
pretty
much.
A: I can't remember whether that was written down somewhere; it was discussed several times. I would probably have to go through the old agendas; maybe there are discussions where we were covering these topics and where we came up with the current flow, since that's basically the simplest thing we could come up with. I don't know, does that make sense? Yeah.
B
No,
it's
gonna
have
some
context,
I
figured
it
was
around
testing
improvements
and
dependency
injection,
or
all
that.
So
this
really
spun
out
of
this.
This
page,
you
have
open
here,
basically
so
I'm
doing
a
complete
on
this
command
and
we
had
discussion.
That
said,
you
know
we
should
move
this
to
the
validate
section
and
then
the
issue
I
have
trying
to
standardize
all
this
is
so
basically
saying
we
should
validate
inside
a
validate
which
makes
sense,
but
if
you
scroll
down
a
little
bit
more
to
the
next
couple
lines,
you
see
this.
B
Basically
all
I'm
checking
if
there's
an
index
out
of
bound
and
I'll,
go
back
up.
Sorry
like
nine
yeah
II,
and
so
it's
either
right.
So
if
we
don't
do
a
check
to
see
if
there's
any
orange,
there,
that's
gonna
panic-
and
maybe
this
is
like
a
bad
one
case
example,
but
I
feel
like.
Is
it
going
to
come
up
more
we're
either
way
like
you're?
If
we
move
this
into
validate,
then
it's
still
doing
some
like
completion
to
set
up
the
command,
so
I
feel
like
there's
like
it's
still
something
missing
there.
A: If you recall the discussion that we had some time ago, where there was a PR (I'm not sure if that merged), still, currently, the structures that we have and the code behind every command are tightly coupled with the Cobra library, because we usually map one to one. And that's probably one of the reasons: if you look at the code, you're literally injecting...
A: ...what happens is, it does the initial step where we are translating flags to options from args, and only then do we just Run. The flags-to-options step is actually translating the actual flags into specific wait options, and there is a separate wait flags structure. The wait command, for example, is being used in delete: if you check delete, the wait logic is taken from this wait command.
A
So
if
you're,
a
user-
and
if
you
want
to
expose
weight
flags,
you
have
to
wire
your
weight
flag.
If
you
want
to
just
use
the
functionality,
you
don't
care
about
about
the
flags,
you
can
just
fill
in
the
the
weight
options
that
you
care
about
and
run
it
so
either
you
can.
You
know
you
can
reuse
your
command
in
both
ways
as
an
extension
or
improvement
to
your
command
even
up
to
the
flax.
Or,
if
you
don't
care
about
the
flax
you
just
take.
F: I think, to me (and correct me if I'm wrong), the way I think about it is that all the public methods exposed by the commands should not take a parameter of type cobra.Command. The fact that it's a Cobra command should be internal to the command, and so Complete, Validate, and Run should all operate on things like the wait options, or other options that are neutral and don't specify that it's Cobra. Yeah, that's...
F: Then I think the only other thing we were talking about was... I mean, certainly in some commands (nobody's saying everything's perfect) we have some validation in the Complete methods, and then sometimes validation in the Validate method. Usually it's in Validate, but I think the role and purpose of Complete, in my mind, is just to fill in defaults and create things; it's more of a convenience, so you don't have to repeat initialization of defaults everywhere. Is that correct?
A
The
defaults
are
usually
done
through
the
new,
whatever,
whatever
options
there
is
most
frequently,
because
we
then
wire
the
structural
elements
into
flags
so
that
when
you
do
I,
don't
know,
cube
Caudill
describe
help.
You
see
the
defaults
wired
to
the
flax,
probably
with
the
weight
example,
which
is
the
the
end
goal.
The
ultimate
end
goal
you'll
have
a
new
weight
Flags,
which
will
make
sure
the
defaults
are
in
and
then
the
two
two
options
method
that
is
responsible
for
translating
whatever
user
has
specified
and
the
defaults
into
the
actual
weight
options.
E: Two quick notes. One is: I think Cobra offers its own notion of validation; I forget what the hook is called, it's like a check or something, and so that's an alternative.
E
Cobra,
when
you
test
Cobra,
you
can
override
like
standard
in
and
standard
out,
these
sort
of
things
on
the
Cobra
command
itself
and
using
not
having
Cobra
in
the
actual
implementation,
and
the
libraries
makes
total
sense.
What
I
don't
think
we
have
necessarily
a
clear
idea
on
is:
should
we
be
plumbing
standard
in
and
standard
out
down
there
right?
You
could
imagine
that
you
have
some
library,
for
instance,
that
emits
some
message
as
part
of
what
it's
doing
and
then
now
you
want
to
run
a
test,
and
you
want
that.
E
You
want
to
capture
the
output
of
that
command.
Right,
like
you,
may
want
to
then
pass
in
bytes
buffer
or
something
as
standard
an
or
standard
or
standard
out
or
standard
error.
These
sorts
of
things
so
that
I
think
that's
just
one
like
I,
totally
agree
on
not
passing
in
Cobra
I'm,
not
sure
if
we
should
pass
in
certain
things
like
that
up
with
streams.
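The streams point can be illustrated directly: if commands write to injected streams instead of the process's stdout, a test can pass an in-memory buffer and assert on the captured output. A minimal sketch under that assumption (kubectl does this in Go with an IOStreams struct; this is a Python analogue with an invented function name):

```python
import io
import sys

def run_command(message, out=None):
    """Write to the injected `out` stream, defaulting to the real stdout.
    Tests pass an in-memory buffer instead of capturing the process stream."""
    out = out if out is not None else sys.stdout
    out.write(message + "\n")

buf = io.StringIO()          # stands in for stdout in a test
run_command("pod/app resized", out=buf)
captured = buf.getvalue()
```

The open question E raises is how deep this plumbing should go: only command-level code, or every library a command calls that might emit a message.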
A
I
know
that
Antoine
wants
to
cover
this
topic,
and
probably
the
construction
of
the
commands
is
something
that
we
will
be.
We
will
continue
on
discussing
further
as
we
go
and
that's
that's
probably
deserves
another
look
at
it
and
you
know
figuring
out
where
we
want
to
go
with
this
in
the
long
term
because
yeah
it's
it's,
definitely
worthy
of
more
discussions.
H: Yes, lifecycle directives. I don't know if it's appropriate for kubectl, but it could be useful as lifecycle directives, where it's more of a declarative use case: resources might have immutable fields, but if you want to apply this in a CD system or something, you might want to delete and then recreate them. What we're looking at is having directives, so we only do it if the user actually says "I want to do this for this resource".
G: No, I can see why, if we have client-side apply with force, it would make sense to say: okay, if we can solve it for client-side, then we should delete and recreate for the same reasons, right? What applies to client-side apply in these cases also applies to server-side apply, I think. But the question is, at the time...
A
Yes,
the
probably
with
a
certain.
Maybe
we
can
start
with
a
simple
warning
that
the
force
does
not
work
with
server
side
applaud.
With
the
server
side
apply,
we
can
break
the
backwards
compatibility
for
sure,
but
the
server
side
apply
I
think
it's
still
beta.
It
means
we
can
probably
change
the
force
behavior
on
this
one
and
say
that
this
doesn't
work
with
server
side.
If
you
want
to
do
a
forceful,
you
should
be
falling
back
to
manual,
delete
and
then
apply
an.
G: Of course, no, I understand that. Thank you, Maciej, that's super useful.
D
I
have
an
example
that
I
could
add:
I
maintain
a
CIC
tool
and
one
case
that
I
had
to
use
in
the
past.
The
fourth
option
within
that
tool
was
that
there
was
a
bug
on
PDP
that
never
allowed
it
to
be
mutated
at
all.
So
we
had
a
special
case
for
that
resourceful
eternal
force,
just
to
add
kind
of
another
chase
where
somebody
might
have
used
it.
But
of
course
like
we
could
have
used
art
changed
our
automation
to
do
a
delete
and
create
more
manually
in
that
case,
as
well.
A
I'm
not
saying
that
there
are
no
cases
for
the
usage,
because
just
the
fact
that
this
exists
is
a
clear
signal
that
had
happened
in
the
past
and
it
was
at
it.
But
since
there
is
a
viable
alternative
where
you
manually
invoke,
delete
and
then
apply,
it's
perfectly
fine
to
have
a
reasonable
approach
with
this
one
and
yeah.
A
Okay,
I
think
we're
one
minute
after
top
of
the
hour.
Thank
you
very
much
all
for
stand-ups
I
notice
that
Chris
wrote
that
there's
nothing
more,
that
he
can
ask
other
and
what
whatever
he
wrote
already
mg
in
the
dock.
So
I'm
not
gonna,
hold
you
more
than
than
this.
Thank
you
very
much
see
you
in
two
weeks
stay
safe
and
have
a
good
one.
Thank
you.
All
bye.