From YouTube: Kubernetes SIG CLI 20220601
Description
Kubernetes SIG CLI Bi-Weekly Meeting on June 1st, 2022.
Agenda and Notes: https://docs.google.com/document/d/1r0YElcXt6G5mOWxwZiXgGu_X6he3F--wKwg-9UBc29I/edit#bookmark=kix.o6wr7nvfoow7
A
Going to record — you've probably all got pop-ups saying it's recording. Hello, my name is Sean Sullivan. I'm going to be your moderator today for our bi-weekly SIG CLI meeting, and I'm going to get directly into it — we've got quite a few announcements. It's been two weeks since we've had a meeting; we didn't have one last scheduled Wednesday, two weeks ago, because of KubeCon EU. I hope anybody who was able to attend had a good time, and that's actually the first item on our agenda.

A
The first announcement is that there was a SIG CLI talk at KubeCon EU. I still don't have the link; I will put it in there once I'm able to get my hands on it.

A
Okay, great — thanks a lot, Maciej. Would anybody who went there like to say a couple of words? If not, no big deal — we'll just move right on to the next item.

A
Got it, okay. So our next announcement: let's go over the 1.25 release dates. The enhancement freeze is coming up fairly quickly — a little over two weeks from now, on Thursday, June 16th. The code freeze is out in early August, on August 2nd, and the release is scheduled for Tuesday, August 23rd.

A
This release's lead, if you want to know who to reach out to, is Cici Huang, who's at Google. The KEP template has been officially updated, since we are focusing more on reliability, testability, and testing, and the KEP template has been updated accordingly.

A
So please have a look at the new KEP template for any of the enhancements you're trying to get in within the next two weeks.

A
In addition, for this particular release there are no manual cherry picks — there's an automatic release-branch fast-forward job now running for this release. That sounds pretty cool.
A
After about — I think it was 90 days — an issue would go stale and eventually be closed. If you want to know more about that, go ahead and click through that link, but the important part is that triage-accepted bugs will no longer be automatically closed.

A
It looks like we are moving to three releases per year. I don't have any timeline on that, but please click through on that issue if you'd like to know more.

C
We already moved, right? We moved last year to three releases per year, so this is confirming that we're going to keep that.

A
Sorry about that, thank you — is that Katrina? Yes, thanks Katrina. Okay, so even though KubeCon EU just finished two weeks ago, we actually have the deadline the day after tomorrow for the North American CFPs. If you'd like to present at KubeCon North America in Detroit, you have until Friday at about midnight Pacific time. I've included the link to submit your proposals.
A
So
also
there's
a
new
python
client
23.6,
please
click
through
those
links.
If
you
are
a
python
clientista
and
finally,
we
we
were
connecting
here
through
zoom.
There
is
a
zoom
client
vulnerability,
so
please
check
your
client
and
the
version
and
make
sure
it
is
above
5.10.0.
A
Okay,
so
why
don't
we
get
into
introductions?
I
know
there's
at
least
it
sounds
like
there's
at
least
one
person
who
was
met,
the
six
eli
folks
or
got
more
interested
in
kubecon
eu
and
is
here
for
the
first
time
it
sounds
like.
Would
anyone
like
to
introduce
themselves
to
the
rest
of
your
sig,
cli
colleagues,
and
if,
if
you're
new,
please
introduce
yourself
if
you're
up
for
it,
no
problem,
if
you're,
if
you're
not.
D
I was at the SIG booth and got excited about SIG CLI. I met a couple of folks from here who motivated me to join the SIG meetings, and I look forward to contributing.

D
Hi team, this is Surya, from Bangalore, India. I'm pretty excited about what Kubernetes is doing; currently I'm learning Kubernetes on the side. I'm pretty interested in contributing back to the community, and that's why I'm here today — I want to get connected with all of you.

E
Hi, I'll be quick — Joel, in Jerusalem; I work for Red Hat. I recently raised a PR for SIG CLI. I own projects that use cli-runtime, and that got Maciej thinking about some refactoring, so I've come to listen to the thoughts he's got on that.
A
Okay,
great,
so
why
don't
we
get
into
our
thanks
for
all
those
introductions?
We
appreciate
you
joining
us.
We
know
you
have
things
to
do
and-
and
we
appreciate
you
taking
time
to
be
with
us,
so
the
first
topic
is
for
mache
about
duplicate
duplicate
flags.
Would
you
like
to
take
it
over.
B
Yeah, so during our last bug scrub we stumbled upon an issue where one of our users specified the namespace flag twice — which for me had been an obvious thing for quite a long time — and I closed it as won't-fix, because that's how it works.

B
I then also tweeted about it, and it turned out there was a lot of discussion back and forth. A lot of people were suggesting that this should actually be an error — or at least, ideally, that there should be a warning that you're specifying a particular flag twice.

B
I can't remember whether that was on Twitter or somewhere else, but I got to an issue that is open against Cobra raising the exact same problem. Cobra currently does not let you differentiate between these options — you can't say "I want to be warned," "I want an error," or do anything with it. It will silently ignore the fact and will always pick the last one.

B
That's what you get with any kubectl command today. In my particular case — it's in the agenda — I ran `kubectl get pod` and specified `--namespace x` and `--namespace y`. If you invoke that, you will get all the pods from namespace y; x will be silently ignored. There will be no information about this fact.
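The last-one-wins behavior described here isn't specific to Cobra. Go's standard `flag` package (used here as a rough stand-in — kubectl actually parses through spf13/pflag via Cobra) does the same thing, and a custom `flag.Value` shows one way a repeat could be detected to emit the kind of warning being discussed. A minimal sketch, with hypothetical names:

```go
package main

import (
	"flag"
	"fmt"
)

// countingString records how many times a flag was set, so a caller
// could warn when the same flag appears more than once on the line.
type countingString struct {
	value string
	count int
}

func (s *countingString) String() string { return s.value }
func (s *countingString) Set(v string) error {
	s.value = v // overwrite: the last occurrence wins
	s.count++
	return nil
}

// parseNamespace parses args the way a plain string flag would:
// repeated flags silently keep the last value, but the counting
// Value lets us notice the repetition.
func parseNamespace(args []string) (string, int) {
	fs := flag.NewFlagSet("kubectl-sketch", flag.ContinueOnError)
	var ns countingString
	fs.Var(&ns, "namespace", "namespace to query")
	_ = fs.Parse(args)
	return ns.value, ns.count
}

func main() {
	v, n := parseNamespace([]string{"--namespace", "x", "--namespace", "y"})
	fmt.Printf("value=%s, set %d times\n", v, n) // value=y, set 2 times
	if n > 1 {
		fmt.Println("warning: --namespace specified multiple times; using", v)
	}
}
```

The overwrite in `Set` is exactly the silent behavior under discussion; the counter is the hook a warning would need.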
B
So I started thinking about how we would feel about it. I'm positive that we cannot break people who might accidentally have the namespace — or any other flag — specified multiple times, but giving warnings is something I'm willing to consider for this particular case.

B
Of course, it will require us to work with Cobra, which is a topic of itself. I've already spoken with John McBride, because he is currently one of the primary maintainers of Cobra, and we will want to engage with the Cobra community and help them maintain Cobra long term — especially since we rely heavily on Cobra in kubectl, and pretty much every other CLI within the Kubernetes landscape and around it relies on Cobra, which is written in Go.

B
So I guess we will have to work on this, and a warning seems like probably the most gentle approach. I was curious what other people think about this particular problem — whether they've stumbled upon it, whether they knew about it beforehand. It just so happens that I've been with Kubernetes almost since day one, and for me it was pretty obvious, but it turns out that's not the case for everyone.

B
Yes — with `git checkout` you can create a branch. So if you do `git checkout -b x -b y`, it will only create the latter branch. I would expect that it could theoretically create both branches but only check out the last one — but no, you will only get the last one created and checked out; it won't create both.
F
I ran into this when I was doing the diff environment variable to pass full flags and such. A lot of this is done by a kernel system call to parse the flags — there's a kernel library that parses the flags; I don't know whether pflag uses it. But if that behavior is present in the kernel, then I would assume that's the accepted behavior across the whole Linux ecosystem.
B
That's a viable option as well, but I'd probably have to double-check how we could approach it. Joel put a comment in the chat that if you specify an array, it will accept all the values — and yes, that is perfectly acceptable and expected. If you declare that you accept an array, it will nicely consume all the values, and that's what a lot of people will actually expect.

B
The problem is when you don't expect an array but a single value, and you specify the same flag multiple times: it won't error out, it won't warn you, it won't do anything. It will silently ignore everything except the last one. On the other hand, from what I remember, I've never seen any tool I've used complain about the same flag specified multiple times — in my personal experience they usually just silently pick the last one.

B
So if it's the case, like Eddie says, that this is the default behavior across the Linux environment, then I'll probably leave it as is and won't even bother addressing it.
C
On the topic of whether it's due to some sort of kernel bug — I don't think so. If you look at the Cobra issue that you linked, it looks like it's already possible to detect the situation where multiple values have been passed, which means it's fundamentally possible to do so.

C
Maybe we want something more seamless and integrated than what exists now — I don't know; we'd have to try it out and see how ugly it is to use — but it looks like it's already possible for us to do this warning if we want to. Which, by the way, I'm in favor of; I think that sounds user-friendly. I don't think there's a good use case for wanting your first argument ignored, so I think it would be helpful of us to emit that warning. So I support it.
B
The second and third issues that Eddie linked are interconnected, because printing a warning when multiple CRDs conflict is something we've been discussing a while back — and not only when multiple CRDs conflict, but also when you have multiple resources hidden under the exact same short name.

B
The problem is that it is possible for multiple resources to have either the exact same short names, or to be hidden behind the same aliases, depending on the environment.

B
At the time we discussed this originally, I was very hesitant about introducing any in-depth information, but recently Joel approached me saying they are struggling with a similar problem, and I started talking with David about whether we could potentially print a warning around the topic, pointing out how many there are. We'll have to figure out what the warning will look like, but that's probably just a minor thing. As Joel was working on the PR, it turned out my default answer was:

B
"Oh, can we just pass the IOStreams over?" And Joel came back with the answer that we can't, because we would be introducing cyclic dependencies: the builder, where we're trying to put the printing, relies on the genericclioptions package, within which the IOStreams live.
B
So the problem is that when we initially created cli-runtime, a lot of the stuff landed in genericclioptions by default, and currently that is causing us problems. My plan — and I was hoping to be able to create a PR; we talked about it, if I remember correctly, Thursday or Friday last week — was to sketch out in a PR what the new division of cli-runtime could look like.

B
For starters, I would just move everything related to IOStreams into its own package — probably it would just be called io, or something like that.

B
I need to go through all the stuff that currently exists in the package and see what we can do about it, so I'll try to put a PR up as soon as possible. I know this will be problematic for all the cli-runtime consumers, because we will be shuffling things back and forth, but I do hope the end outcome will be beneficial to all of us, in that we'll be able to control much more granularly how things are connected within cli-runtime.

B
As soon as I have the PR up, I'll put it on hold and send an email to the mailing list pointing to it, and I'll leave it open for a sufficient amount of time for everyone to voice their opinions about the approach, or about different changes.

B
So if you have opinions about how you would want to see cli-runtime restructured, please be on the lookout for that PR, and make sure to comment on and review it as soon as it's up.
A
So
what
will
would
the
reorganization
change
say
plugins,
which
are
using
the
cli
runtime
or
any
other
clients
that
are
all
you
know?
So
so,
if
we're
assuming
that
it's
only
coupe
control,
that's
using
this,
then
that
seems
like
a
much
easier
fix,
whereas
if
there
you
know,
there's
going
to
be
a
plethora
of
clients
that
are
depending
on
this,
yes,
we
may
end
up
like
breaking
a
lot
of
people.
B
Within
several
libraries
within
the
within
the
core
cube,
so
that's
why
I'm
saying
that
if
we
want
to
do
it,
we
should
do
a
big
bang:
rework
in
a
single
shot.
Rather
than
doing
this,
I
don't
know
a
couple
of
times
over
the
next
couple
of
releases,
because
if
people
will
be
upgrading,
they
will
be
faced
with
a
single
update,
just
change
the
path
to
to
look
like
this
instead
of
the
current
way-
and
you
should
be
good,
that's
my
suggested
approach.
C
Yeah, so one of the reasons we were hesitant to do this in previous discussions was the incompatibility with intentional behavior: if you have the same kind in multiple API versions — which is something that actually occurs, especially in older versions, as a resource graduates or moves from one API group to another — we don't want to print a warning every single time the user runs `kubectl get deployments` just because extensions/v1beta1 is still enabled on the API server they're using.

C
That's really bad UX for a much more common use case than the one being addressed by the warning. So how have we overcome that concern?
B
No, we can't. When you're reading discovery, you see all resources as equal. Yes, you could theoretically look at the priorities of the resources, and we could probably do some additional logic around it.
C
Well — even assuming we could do that — why wouldn't we want CRDs to be able to benefit from the same API group behavior as built-ins? You have a CRD, you evolve it over time; you have your alpha1 version and your beta1 version enabled, and your beta1 is the default. You want people to type `kubectl get mything` and get the beta. You don't want them to get a warning every time just because the alpha group version exists.

C
I think the same argument applies to CRDs as to anything else. And I've actually seen a case where there's an intentional naming overlap between a built-in and a CRD, which is a further complication. Again, the API version precedence exists for these reasons, and I'm still not convinced that we're not going to make the user experience of basically every invocation — in those situations that are much more common — worse for everybody.
C
Right, but the user error is the installation of those two naming conflicts — which is sometimes an error and sometimes not, and once we're at discovery we can't tell which. We've got to assume it isn't. What I think we need to consider the user error is the installation of two overlapping group-kinds that were not intended to overlap, which we need to be flagging at CRD installation time — not at the end user, who had nothing to do with installing those duplicates.

C
So can we revisit that? Because I think that's the appropriate place to make this change, and we actually don't have any power to improve it at the stage we're talking about. When we're dealing with discovery information, the thing has already been installed, the mistake has already been made, the end user can't do anything about it, and we have no ability to detect whether it's actually a problem or intentional.
A
So,
just
a
quick,
a
quick
note:
we
are
working
to
upgrade
discovery
at
the
api
server
to
make
it
more
efficient
and
we
are
also
there's
a
new
version
of
the
open
api.
There's
a
v3
and
we
are
also
it
sounds
like
there's
discussions
to
add
extra
data
to
the
open
api,
not
sure
exactly
how
that
would
that.
Would
that
won't
change
anything.
B
The problem will still be present, because you can easily create a CRD — as was pointed out — with the short name po, which also happens to be a short name for pod.

B
So honestly, I'm still hesitant. I haven't fully said yes or no to the PR that Joel put together, but it uncovered a different issue that I would like to address in parallel. I'm still trying to weigh whether to go one way or the other, or how we could do it in a workable way.

B
That's an approach I'm also considering. So I'm still not 100% sure, yes or no, about the warnings, but I want to have the mechanism in place for sure.
C
My concern is not so much that; it's that for the warning, we don't know whether it's actually addressing the problem. When we see the situation where we have the duplicate name, we don't know whether it's a situation where we should warn or not, because there are lots of legitimate situations where the duplication occurs, and it's actually going to confuse end users to show a warning in those situations.

C
Just take Deployment: it existed in three different group versions for a very long period, across many releases. Think about that situation occurring — whether because you're evolving your CRD, or because we have another evolution of a built-in group-version-kind — and the effect on the end user when they type `kubectl get deployment` and we send a warning.

C
That's super confusing; the average end user doesn't need to know about that. Unless we find a way to screen that situation out of the fix we're trying to make here — the accidental overlapping installation — I'm going to be a no on the feature. I'm not a no on the principle of warning against a duplicate installation, but I don't think we have any new approach here so far, and I don't think the approach that has been proposed is viable, for the reason described.
A
Once we get past a particular version, we won't have to worry about Deployment, for instance, coming out of the extensions API — extensions/v1beta1 — and I'm not sure we're there yet. I don't think we should be writing code to support that; we recognize that's an error and we're trying to get past it. It's not just groups, though.
B
But
that
still
doesn't
solve,
even
if
we,
if
even
if
we
solve-
and
we
will
ensure
that
there
will
be
no
resources
with
identical
names
or
identical
short
names
that
still
doesn't
address
the
problem-
that
there
is
a
open
issue
that
someone
can
create
a
crd
with
a
short
name
that
will
conflict
with
the
built-ins,
in
which
case
you
will
be
struggling
with
that
there
is
a
potential
that
someone
can
create
an
identical
name
like
a
deployment.
B
That
is
not
something
I
don't
know
super
unique
to
just
kubernetes,
because
someone
can
create
their
own
deployment
controller,
their
own
crd
deployment.
It
will
be
called
deployment
with
a
short-term
deploy.
The
only
difference
will
be
it'll,
be
it
will
exist
in
an
entirely
different
api
group
and
we
can't
protect
users
from
those
kind
of
mistakes,
but
at
the
same
time
it
doesn't
have
to
be
a
mistake.
It
might
be
a
an
explicit
decision
on
the
user
part
to
use
that
just
to
differentiate
by
the
api
group.
B
We
just
need
to
make
sure
that
we
learn
our
users
to
use
fully
qualified
names
on
one
hand
and
then
we'll
have
to
figure
out
how
the
warning
could
be
potentially
helping
rather
than
disturbing,
because
I
just
I
I
agree
with
what
katrina
says.
I
would
be
very
mad
if
I
would
be
seeing
a
warning
every
single
time.
I
do
get
paw
get
po
because
I'm
heavily
using
that
and
I'm
fully
aware
that
po
stands
for
pods
and
if
something
else
is
using
it,
I
don't
care
because
for
me,
po
is
shorthand
for
pots.
B
I'm
super
lazy
and
I
like
typing
less
and
more
so
we'll
have
to
figure
out
how
to
properly
juggle
that,
but
I'm
seeing
both
sides
of
the
equation.
We
just
have
to
figure
out
how
to
best
address
it,
but
the
mechanism
itself.
Yes,
I
want
to
have
it
in
place
and
we'll
have
to
figure
out
how
to
best
expose
that
information.
E
Yeah — a lot of this is in response to some of your points, Katrina. Firstly, I hadn't really thought about the built-in case, like Deployment moving groups. That's a really good example; I'll go away and think about that. The other thing, on versions: the current fix takes into account that things will evolve across versions, so it only triggers if you have overlapping group-kinds — the version is discarded, so it won't matter if people evolve it.

E
The other thing I was going to say is about the argument on CRD priorities: they are confusing. If, for example, you have two groups at the same version — two CRDs at v1beta1 — then it seems to prioritize based on alphabetical order. If I then evolve one of those APIs — say the one that's second in the alphabet — to v1, suddenly that one takes priority.

E
And I think that's where the major confusion is for end users. I've got two CRDs — one is, you know, foo.bar at v1beta1 and one is foo.baz at v1beta1 — and I run `oc get foo`, and I know that every time I do that it's the first group. Then I evolve the second group, and suddenly that changes, and kubectl's behavior has changed between releases because I've evolved an API group.
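The flip Joel describes follows from the Kubernetes-style version priority ordering: GA versions like v1 sort ahead of any beta, betas ahead of alphas, and higher numbers first, with unrecognized strings falling back to alphabetical order. The sketch below is a simplified illustration of that ordering, not kubectl's actual RESTMapper code — evolving one group to v1 is enough to jump it ahead of every v1beta1:

```go
package main

import (
	"fmt"
	"regexp"
	"sort"
	"strconv"
)

var kubeVersion = regexp.MustCompile(`^v(\d+)(alpha|beta)?(\d+)?$`)

// rank turns a version string into a sortable tuple:
// stage (2=GA, 1=beta, 0=alpha), major, minor.
// ok=false means the string doesn't follow the pattern and sorts last.
func rank(v string) (stage, major, minor int, ok bool) {
	m := kubeVersion.FindStringSubmatch(v)
	if m == nil {
		return 0, 0, 0, false
	}
	major, _ = strconv.Atoi(m[1])
	switch m[2] {
	case "":
		stage = 2
	case "beta":
		stage = 1
	case "alpha":
		stage = 0
	}
	if m[3] != "" {
		minor, _ = strconv.Atoi(m[3])
	}
	return stage, major, minor, true
}

// sortKubeVersions orders versions highest priority first:
// GA before beta before alpha, higher numbers first within a stage.
func sortKubeVersions(vs []string) {
	sort.SliceStable(vs, func(i, j int) bool {
		si, mi, ni, oki := rank(vs[i])
		sj, mj, nj, okj := rank(vs[j])
		if oki != okj {
			return oki // parseable versions beat unparseable ones
		}
		if !oki {
			return vs[i] < vs[j] // fallback: alphabetical
		}
		if si != sj {
			return si > sj
		}
		if mi != mj {
			return mi > mj
		}
		return ni > nj
	})
}

func main() {
	vs := []string{"v1beta1", "v1", "v2beta1", "v1alpha1"}
	sortKubeVersions(vs)
	fmt.Println(vs) // [v1 v2beta1 v1beta1 v1alpha1]
}
```

Note how promoting a group from v1beta1 to v1 moves it from the beta bucket to the top, which is exactly the between-releases behavior change described above.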
E
That's the big problem I want to try and fix here: it needs to be stable. At least, if we have these overlaps, either tell users and tell them to qualify the name — because it even does a prefix match, right? If I type the first bit of my fully qualified name, foo.ba or whatever, that also fixes the issue.
D
I always thought this emphasis on short names was strange, for all these reasons — the instability and unpredictability of results. I like what Maciej said, that this seems to be a very personal decision: you like po, someone else might like something different, somebody else might have a different sense of what po should stand for. The fact that kubectl sort of hardwires a particular interpretation always seemed kind of strange to me. If anything, there should be a choices file on the side.

D
To me, wherever there's an ambiguity, you need to present the user with that ambiguity and let them resolve it, and then we can remember the resolution they made. But then, for example, if I run that same script in a CI/CD context, it'll fail, because that context doesn't have that same history of my choices — which is a good thing, right? We want the CI/CD pipeline to use fully qualified names.
B
Exactly. The API provides the information about the priorities and the versions available for all the groups, and then the RESTMapper — which is part of client-go — knows, based on that information, which version should be picked. Just like Joel said, there's the version priority and the alphabetical order that get taken into account when picking one over the other.
C
Right, and ultimately there are some cluster operators involved here as well, because they're the ones who define those. To the point of where the aliases come from: they come from somebody defining them as part of their CRD. kubectl isn't inventing them itself; they're in the information we get back from the server.
E
On
another
tangent,
I
did
also
wonder
maybe
park
this
for
now.
Is
this
a
security
vulnerability
like
if
someone
were
to
manage
to
install
a
separate
crd
that
then,
you
know,
could
take
over
a
name
by
producing
a
higher
version
priority,
or
something
like
that?
Does
that
then
lead
to
potential
security
issues
for
users?
If
someone
created
a
crd,
that
was
a
secret
that
looked
the
same,
for
example,.
B
You
know
built-ins
are
by
default
of
higher
priority.
You
can't
create
acrd
with
a
higher
priority
than
the
buildings.
The
built-ins
will
always
take
place
precedence
over
the
crds
but
joel's
point.
B
I
could
see
that,
as
I
don't
know,
you
have
a
third
party
secret
type
of
a
resource
and
then
you
create
another
one,
which
is
by
the
fact
that
is,
I
don't
know.
Let's
say
v2
over
v1
will
take
pretty
over
the
v1,
but
then
I
would
assume
that
it
will
depend
on
how
the
access
rights
is
being
dealt
with
by
the
by
the
cluster
administrator.
So
I
wouldn't.
B
I
wouldn't
consider
that
in
the
realms
of
security
issues
or
the
other
confusion.
C
The
flipping
of
the
order
is
a
really
interesting
point,
though
the
fact
that
we'd
use
up
like
between
two
things
in
an
equivalent
group,
it
would
just
rely
on
alphabetizing
and
then,
whichever
has
the
highest
priority
across
our
highest
version
across
both
groups.
That
instability
does
strike
me
as
something
that
maybe
we
could
reconsider.
C
The
same
thing
as
what
the
pr
is
doing
right
now,
I
suppose,
but
I'm
interested
in
that.
E
Sorry — it's not dependent on just a particular resource, either; it's the whole group. I noticed this because I was testing something: I had a conflict because I had two objects called Machine, both at v1beta1. If I now install something else in one of those groups — there isn't a Machine in it, it's called, you know, whatever — but it's a v1, that group suddenly takes priority. So it's not even that particular resource being at that version that takes priority: it's the group first, and then the Machine type, whatever the kind is.
A
But
if
I
understand
this
correctly,
please
correct
me:
it's
it's
mostly
when
we
have
the
same
kind
in
different
api
groups
right,
it's
that
that
this
only
have
it
almost
only
happens.
Then
right,
we've
and
we've
seen
this
actually
before,
as
katrina
said,
with
what
some
of
these
kinds
used
to
be
in
extensions
v1,
beta
1
and
then
they
move
to
apps
like
deployment
and
staple
sets,
etc
and
yeah.
A
If
talking
to
some
of
the
api
machinery
group
folks,
it's
like
that's,
we
know
that's
a
bad
thing
and
they're
not
going
to
support
that
anymore
and
do
not
suggest
that
those
types
of
things
happen
within
a
crd.
We
have
a
way
of
choosing
which
version
we
want,
which
one
is
the
you
know,
the
current
version,
the
the
storage
version
and
yeah.
So
I
are
we
supporting
something
that
you
know
really
shouldn't
be
supported.
If
we're
not,
I
understand
that
you
know
there's
yeah,
let
me
let's,
let's
think
about
this
and
revisit
it.
B
We will definitely be revisiting the topic. The PR won't go anywhere anyway — there's still the reorg that has to happen first, and then we'll have to revisit it to be able to land that PR. But yeah, I haven't had my final word on it yet.
C
Joel, part of what you described was this instability in what gets returned. I thought that was a very interesting point, which could potentially be addressed aside from the controversial warning.

G
Less confusing if I share it myself, right? Can I share my screen?

G
I do have a link in the agenda for this. You can click on either one of them — one of them is just a comment on the pull request that I started all this with.
G
Issue
so
I
think
in
like
december
I
started
working
on
trying
to
get
basically
the
entire
cube
config
able
to
be
set
using
just
the
config
set
command
in
the
process
of
that
I
just
kind
of
extended
the
dot
delimited
padding
that
we
had
to
begin
with,
and
last
week
at
qcon
eu
has
decided
that
we
probably
didn't
want
to
extend
the
domain
specific
language
of
this,
and
it
would
suggest
that
I
use
a
json
path
instead,
which
definitely
makes
sense,
and
so
I
started
working
on
that.
G
But
what
I
found
was
that
the
way
that
the
config
object
is
defined
in
the
client
command
api
here,
this
config
struct
does
not
actually
like
reflect
the
gamma
that
gets
written
to
file
one
to
one,
and
so
what
that
ends
up
doing
is.
If
we
take
this
example
like
minimal
config,
here
we
have
users,
and
then
we
have
the
experimenter
user.
If
we
wanted
to
get
down
to
the
username
value
for
it,
we
would
still.
We
would
basically
just
be
reinventing
the
dot
delimited,
padding
that
we
were
using
before
uses
experimenter
username.
G
That's
like
how
it
would
work
now,
and
it
was
also
how
it
would
work
if
I
were
to
do
it
with.
G
That's
that's
also
how
it
would
work
if
I
were
to
just
wrap
everything
in
jsonpath,
so
we
talked
about
or
y'all
said
that
you
didn't
want.
You
know.
We
all
agree
that
we
don't
want
to
support
a
domain
specific
language
here,
so
the
way
that
it
would
be
expected
to
have
to
to
get
down
into
that
would
be
effectively
this
here,
except
this
is
for
password
instead
of
username,
so
which
I've
made
more
explicit
here.
So
it
feels
like
there's
three
things
here.
G
The other thing we could do is just make a parallel struct that is more one-to-one with the YAML; or we can continue with the domain-specific language wrapped in JSONPath — because right now, as it exists, we can just rip out and replace what we're doing now with JSONPath, minus a couple of things.
B
So you're saying that, looking at it, the first option — where we would just unmarshal to interface{} and lose the typing — is not an option, nor is implementing a domain-specific language. That's one of the reasons that when Katrina and I looked at it during KubeCon — and I posted a picture for everyone interested — it just looked really bad. I really like your PR, though.

B
I'm leaning towards the second option, but I'm curious why we would need a parallel config struct that is one-to-one with the JSON/YAML to unmarshal onto, instead of using the currently existing Config.
B
I was aware that we would be throwing away the ability to specify the key in users and so forth — where you would basically use the dot notation to pick a particular entry by name — and you would have to either use an indexed notation or one which matches on a name, like in your example.
G
The way the struct for the config is laid out, this doesn't work if we use the currently existing Config struct.

G
It has to be like this, because the way it unpacks itself is basically just a series of nested maps — which is how the navigation-step parser today is able to navigate it at all.

G
So this wouldn't work if you're using the existing Config.

G
And this is also true for clusters or contexts; the same is true for all of them — they're all map[string]-whatever type they are, basically.
B
Yeah — that reflection, and all the changes where we would maintain another DSL just to handle this one case, were the reason I was very hesitant about adding it, especially once I started reading your PR where you were expanding it.

B
But I would try to push this, even at the cost of breaking users, because I think the number of users that will be affected will be rather minimal, and the benefit for us maintainers will be significant if we switch users to a consistent experience — especially since JSONPath is something they're already aware of and use heavily with `get` commands, or anything else that lets you parse the output.
G
Yeah
it
also
just
like,
could
if
we,
if
we
did
have
it,
you
know
like
this,
we
could
do
you
know.
Users
could
do
more
interesting
things
like
doing
user
star,
and
then
you
know
exactly
so
yeah
and
that's
what
I
would
prefer
and
that's
why
I'm
kind
of
like
I'm
on
board
for
the
second
one.
It's
just
gonna
take.
You
know
yeah.
B
I'm
even
willing
to
do
that
parallel
config,
if
there
won't
be
any
other
option,
because
we
could
probably
figure
out
a
a
way
to
do
it
automatically
and
have
a
unit
test
which
will
ensure
that
we
have
bi-directional
compatibility
and
that
if
something
is
being
added
to
the
client
go
api,
it
will
triple
down
also
to
to
keep
cuddle
files,
and
that
would
be
probably
the
closest
and
the
best
approach
that
I
would
go
with,
rather
than
maintaining
dsl
dsl
just
for
this
or
losing
the
typing
by
picking
the
option,
one
so
yeah
I'll.
G
I think something the parallel config would enable: we could do both for a while and just announce the deprecation, yeah.

G
I actually don't know how much effort it would be, because now that I think about it, I probably just need to copy this and have the config JSON struct not be map[string]Cluster (and so on) for everything, and that would probably do it. I don't know for sure, but we'll see.
G
Okay — so that's how I got here from what's been happening.

G
Okay, yeah — I will work with this one and we'll see.
B
Yeah,
it's
it's
tricky
and
I
probably
my
guess
is
that
a
lot
of
the.
So
if
you
go
back
to
the
very
early
days
of
kubernetes,
a
lot
of
the
we
always
had
two
types:
internals
and
externals
everything
controllers,
clients
and
everything
was
using
the
external
internal
types.
Clients
were
the
only
ones
using
but
controllers
and
all
the
logic
within
the
cube
was
using
the
internal
types.
B
There
was
a
big
rewrite
happening
for
a
couple
of
releases
to
switch
everything
to
use
external,
so
I'm
guessing
that
nobody
just
cared
about
client
go
bits
that
much
for
them
to
be
externalized.
Let's
call
it
that
way.
B
So
I
and
I'm
I'm
perfectly
supportive
of
having
an
explicit
conversion,
do
whatever
you
need
to
do
with
jsonpath
convert
it
back.
That's
perfectly
fine!
If
you
need
help
with
the
conversion
exp.
The
conversion
invocation,
just
ping
me
on
stack
and
I'll
have
a
look
but
yeah
it's.
It
should
help
with
your
particular
case.
C
Thanks for that work, Maciej.
A
Cool — it looks like we're a minute over. Eddie, is it okay if we get to your topic next time, or should we have the discussion now?