From YouTube: Kubernetes SIG CLI 20181107
A: Good morning, good afternoon, good evening, depending on where you are. Welcome, this is another SIG CLI meeting; today is November 7th. My name is Maciej and I'll be your host. Today our agenda is packed, so let's get through announcements and test grid quickly so we can focus on the main topics. I think the most important announcement is that code freeze starts this Friday, which is November 9th. That basically means that every single PR that has the 1.13 milestone can safely merge; anything that does not have that milestone will have to wait until the next release. If I remember correctly, the person looking after the test grid was Shawn. Shawn, can you share with us the update on the test grid?
B: [inaudible]

A: Okay, awesome. So, quick reminder: if you want to be a part of our on-call testing responsibility, sign up for the rotation to take over the role for the next two weeks. We still have spots available until the end of this year, so it's definitely something worth experiencing on your own. Okay, with that, we can jump to the main topics, and the first one will be integrating kustomize into kubectl.
C: It has been developed for the past few months in its own repo, it has had several releases, and currently it is in good shape. When kustomize was proposed as a subproject, there was a KEP, so kustomize is expected to be integrated into kubectl. To achieve that, about two weeks ago we promoted our attempt to add kustomize as a subcommand in kubectl, and many of you reviewed that PR and gave us comments, mostly related to UX inconsistency and that sort of thing.
C: So here we have three objects; they are composed in one kustomization, and we have one patch applied. This patch is to change the replica size of the deployment. Then, let's see: kc is my alias for kubectl, and now I want to do a very basic demo of the lifecycle management of the manifests. So first, let's do apply with -f for my current directory, and it will create those three objects: secret, service, deployment. Oops, I have them already; let me delete them first.
C: You see the deployment is configured, which means it has changed. Then we do another get for the current directory, and we see that the replica size is decreased to three. So, finally, if we are all good, we can run delete, and then all these objects will be deleted.
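The three objects and the replica patch from the demo could be wired together with a kustomization roughly like the following sketch; the file names and the deployment name are assumptions for illustration, not taken from the recording:

```yaml
# kustomization.yaml: composes the three objects and applies one patch
resources:
- secret.yaml
- service.yaml
- deployment.yaml
patchesStrategicMerge:
- replica-patch.yaml
---
# replica-patch.yaml: overrides the deployment's replica count
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-deployment
spec:
  replicas: 3
```

Running apply with -f against such a directory would create or update the expanded resources, and delete would remove them, matching the lifecycle shown in the demo.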
C: So that's the demo, and I'm going to explain what has happened. The integration into kubectl is mainly in the cli-runtime; there is a builder type there.
C: Once you pass in a -f flag, it's going to visit all the directories and files pointed at by that flag. So here I made a change: if the directory contains a kustomization.yaml file, then we say this is a kustomization directory, and we are going to run a list of processing steps on it. Those processors come from kustomize.
C: So, as the output of the pre-processing, we get a list of expanded resources, and those resources will be consumed by whatever kubectl subcommand you ran, like get or delete. I have also documented all the details in the KEP, so if you take a look, it contains more details. As I mentioned, this integration doesn't change the UX and is backward compatible.
A: That's good. So my question was, well, it is so much better, it is really awesome. If I understood correctly, it is hooked into the builder, so basically every single command that uses the resource builder will get kustomize for free. Is that correct?

C: That is correct.

A: Awesome.
D: I agree with Maciej. We need to do more. Right now there's documentation in the kustomize repo; when this is part of kubectl, the documentation needs to exist with the kubectl documentation. If you're learning about apply, this should probably be in that same area, right, because it's part of that command now.

C: Mm-hmm.
A: Well, yeah, that's true: any command that is using the resource builder. But that basically means that you know that and I know that, because we look at the sources; users do not know that, because they don't look at the sources. They don't know which commands use the resource builder, and some of our commands do not use it, so it will be mixed. So probably the best option will be to have a way of specifying "this supports kustomize, and here's how to get some additional help about it."
A: So we will have to go through all the commands and make sure to make it explicit, because otherwise we will get people asking us: "Well, I did the kustomization.yaml file, but I'm invoking kubectl XYZ and it just doesn't work," and our answer to that question will be: "Well, it won't work because that command does not support kustomize." So we need to make it explicit for end users which commands support it, yeah.
D: I agree with that, and maybe while we're doing that we could take a closer look at the commands that support the resource builder today and be clear about what it is that supports it. If you look at kubectl rollout status, it supports the resource builder, but it's not obvious from the documentation how it does, because it also supports taking a version of a single resource, right. And I think, Jean, finally, you have a list of all the commands that use the resource builder, and that's all the commands.
A: Yes, okay, that is so much better, and it's really awesome. Actually, we were talking about it, and it will be so cool to integrate it into apply, and the proposal that you bring to the table is just amazing. So thank you very much for working on it and responding to the feedback; that is exactly what I was hoping to get for kustomize.
A: That's perfect; I think that's exactly what we're hoping for, to outline how this will land in kubectl. One of the technical questions that I have is about the entire functionality of kustomize being implemented: will it eventually move inside the kubectl repo, or will it still be an external dependency?
D: I don't think we need to make that decision now; it's probably whatever works best, right. If we're seeing benefits of having it as an external dependency, because it is encapsulated enough that the issues are easier to route and triage, and the testing is easier to keep focused on that repo without blocking other stuff, then we can keep it as an external dependency and just give write access to that repo to all of the kubectl maintainers. If we decide that this really is part of apply, or that it's tightly coupled with apply, so anyone debugging apply needs to be able to debug this and the issues belong in the same place, then we can just migrate the code over into the kubectl repo. I think it's up to us, right; we can change our minds about that at any time.

A: Yeah, that's true. Awesome.
B: So just one other note, this is Shawn. We are also considering, or would like to get feedback and people's ideas on, whether or not developing a subcommand or additional kubectl functionality in its own repo is something we want to do in the future. As you said, we're pushing back on that decision right now, but it may be that, you know, future kubectl commands and subcommands come in that way.
D: The tutorials and onboarding are focused on teaching about Kubernetes, and there's a lot that they span: some of it's about standing up a cluster, masters and nodes and these sorts of things, and a lot of it is about the APIs, deployments, stateful sets, services, and how to do that sort of stuff. The focus on the tooling for how to do this stuff is really just kind of sprinkled in there.
D: And there's not a holistic approach of "here's how to understand how to use Kubernetes," and so we see, or at least I've experienced, a lot of folks asking questions just about things like: how do I set up my project structure, how do I do CI/CD, how does apply do this and how does it work, and this sort of stuff, where you actually need a comprehensive source of information about the tooling you're using and how it interacts with the APIs.
D: And this became more clear to me especially with the kustomize stuff, which actually requires a lot of documentation, because now we have APIs in the tooling itself. So I started putting together... I wrote a GitBook for kubebuilder, which is the SDK, and I sort of looked at that format and asked what a book on kubectl would look like.
D: And the approach here is, instead of having the tasks and concepts and so on that the Kubernetes core documentation has, this is kind of a reinventing of it, chapter by chapter, concept by concept: here's how tooling works in kubectl. I broke it up into a couple of sections. There are concepts, such as the declarative aspects of kubectl and how composition, variation and reuse work. CI/CD is something people come to us about, and that I hear a lot of questions about, and I think providing direction there would be really helpful. So some of this stuff I fleshed out, and I started writing documentation, much of it about kustomize and how that format looks. So here's a section on secrets and config maps; this exists in the kustomize repo, but this is more about the integration.
D: This is how I imagined it might work. So this talks about the different components, like one component of the kustomization.yaml for generating config maps, and how you generate them from different sources. Same thing with images: for instance, here I have one on how to set an image from a commit SHA or something like that. But other pieces are really just kind of a documentation of all the questions I had that I don't think we have answers for. So, like, how do you structure your directory?
D: If you have multiple clusters you're rolling out to, or if you have multiple environments, say a staging and a prod environment, I'm not sure that we have great documentation on where your staging versus production configs go. Some of this stuff is just about things that I think probably should exist but don't. It'd be great if we identified things that we knew were bad ideas, that folks probably shouldn't be doing, and told them about those, right.
D: So really, this is a combination of stuff: providing documentation for stuff that maybe doesn't have documentation; providing a structure for "hey, we need to document this stuff, even though we don't have to build anything for it"; providing a list of things where it's like "actually, we didn't build this and it seems like it really should fit in the book"; or things that are buggy right now. Deleting resources, for example, is one of the first things you'd probably want to know about if you're reading about apply, since this is how you create, update and delete. But our delete story right now, if you look at it, says this is alpha, there are very few guards, and if you do anything just a little bit wrong you could blow away a bunch of stuff unintentionally. So I'm fleshing this out, and some of this stuff would be integrating into our existing commands, like container logs and shells, and breaking it up. So this is an idea I've been exploring.
D: Kubebuilder, thank you, yes. So this has been kind of a popular book for kubebuilder; this is what it's based off. This is a different book, same format, but one thing I do like about it is that it's really easy to write these things. Jekyll is actually somewhat difficult, and you have to learn about different pieces of Jekyll, so this is pretty easy, but it provides some nice things like code formatting.
A: It's a good starting point, because at least we have, you know, a place where we can direct people to, with some instructions on how to build it locally so they can play with it and see how it works. And giving people something they can play with will definitely make it easier for folks to reply and actually contribute.
A: I see right now it definitely has sections about some basic steps. We definitely miss the introductory talks about "oh, here's my application, this is how I'm going to set it up on Kubernetes," and "well, you need to write this; this is the set of commands that you need to invoke; this is what they will produce; these are the defaults, and here are the things you might want to review to learn how you should be managing your application." So basically any kind of user, whether a beginner, intermediate or advanced user, would be able to pick up any topic and, you know, learn something new. So I found that format definitely worth considering, especially since, if I remember correctly, our current docs on kubectl, the official docs, are mostly in terms of "if you want to do this, these are the kubectl create commands."
D: I'd envisioned it as the latter. I suspect it's going to be much like our code base: probably two or maybe three people that really do 80% of the contributions to it, and then a long tail of folks who contribute one thing or another that they're interested in. I think I envisioned it as having strong editorial leadership, to make sure that the structure, the way we categorize things, and the way we lead users through the experience are really consistent.
D: Okay, cool. I should move on to the dynamic commands thing, yeah.
D: Okay, are you able to see my IntelliJ? Yep? Perfect. Okay, so what I have here, at the bottom, is a terminal, you can see that, and then eventually I'm going to bring up some code. The demonstration is, effectively: could we build commands that are purely declarative, with nothing compiled into the client except for some way of interpreting data? And the comparable art, or prior art, that I see here is this:
D: Imagine if your web browser had Google compiled in, all the forms for Google compiled in, every web page compiled in. In reality there's this thing called HTML, which exposes this notion of forms and so on, and so a web browser can just go to a page and learn about what it can send, what a user can fill in, and the events and responses that come back. So could we apply something similar, a very stupid version of that, here, and at least capture some of those same lessons? So I wrote this command.
D: And so what I have attached to this thing is a couple of annotations with commands in them, and so I've encoded into this, as a proof of concept only, these annotations that have a list of commands that the CLI should then load, and those commands look like this. I have two types of commands. The first: the premise is that the command expresses an endpoint in the form of a resource. So that would be like this request here, with a group, version and resource, in this case deployments.
D: Then the HTTP operation, in this case create; a body template that it will send to that endpoint; and then the response values it wants to save, right, so it names them, pulls out data using JSONPath from the response, and then saves it under some name. And the template can access flags: the command itself exposes which Cobra flag or flags Cobra should register, so in this case the name, and that has all the Cobra metadata, like the type and the default value.
D: And the description. It has an image flag; it has a replicas flag with a default value of one; a namespace flag with the default value "default," right. And then there's the request template, and it can contain these flag values, so it will load the flag values in here; this is just a Go template. And then it also has all the other Cobra stuff you'd specify, such as command aliases, the short description, the long description, the example, and the path and use. So I run main.
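The flag-to-template flow described here can be sketched with Go's standard text/template package; the function name, template body and flag names below are illustrative assumptions, not the demo's actual code:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// renderRequestBody fills a declarative command's request-body template
// with the values parsed from its registered flags.
func renderRequestBody(tmpl string, flags map[string]interface{}) (string, error) {
	t, err := template.New("body").Parse(tmpl)
	if err != nil {
		return "", err
	}
	var buf bytes.Buffer
	if err := t.Execute(&buf, flags); err != nil {
		return "", err
	}
	return buf.String(), nil
}

func main() {
	// A hypothetical body template for a "create deployment" command.
	tmpl := `{"apiVersion":"apps/v1","kind":"Deployment",` +
		`"metadata":{"name":"{{.name}}","namespace":"{{.namespace}}"},` +
		`"spec":{"replicas":{{.replicas}}}}`

	// Flag values would normally come from Cobra's parsed flags.
	body, err := renderRequestBody(tmpl, map[string]interface{}{
		"name": "nginx", "namespace": "default", "replicas": 1,
	})
	if err != nil {
		panic(err)
	}
	fmt.Println(body)
}
```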
D: It will spit out that it has found this create command, and so if I run "main create -h" it will then spit out the help for that, which it loaded dynamically from here. So, well, it actually supports subcommands: in this case it created "create deployment" because of the path. So I'm going to print out the help there; it spits out the example, the aliases, the usage and the flags, and then I'll go run it with the name and image flags.
D: And so this is a very simple way to allow users to expose imperative commands, but it actually has enough power to do more interesting things than that as well. The point of this is to provide a proof of concept of what's possible, not to say this is the UI we want, or this is exactly what we want to expose, but to show that through data we can actually do very, very powerful things.
D: We can do subresources here. This just shows a resource, but you can imagine the logs commands would be implemented this way, or arbitrary other commands. And some of the benefits of this include that, one, version skew is less of an issue, because these are being published from the server side: you don't have to worry about testing whether the old requests work with the new server, or whether the new client works with old clusters, and that sort of thing. The commands match the cluster: if, for instance, the server doesn't support a command, it just won't appear in the client, as opposed to appearing in the client and then either doing the wrong thing or erroring out. And it also allows extensions to be able to publish porcelain commands, such as the create subcommands, or the set commands, or additional subcommands, without having to publish a binary that then needs to be distributed through some mechanism, and then needs to be version-skewed and upgraded and all that sort of stuff. Another issue with publishing binaries is that they aren't necessarily available in the containers that are published, and they're not necessarily available in certain companies that whitelist which binaries can be run: a company may whitelist kubectl as a binary but won't whitelist every plugin, and these sorts of things.
D: So it just sends a JSON request to a service instead. Instead of having the group, version and kind, it includes the URL, and it can populate the URL from flags again, and the params as well, and it saves the response values. One thing I forgot to mention for both of these is that it also has an output template: so there's an input template for the request, but the saved response values can then be used in the output template.
D: This is probably the most complicated part of it, the execution, but it's able to really leverage the dynamic client plus Go templating plus Cobra to do almost all the heavy lifting. So there's a request section here that just switches on what the request type is, and then uses the dynamic client to load the request body, realize the request and pull it into an unstructured object, and then send it. It pulls the group, version and kind directly from the request.
D: So yeah, I promised I'd deliver a demo before KubeCon. I think when Maciej and I originally discussed this, it was a more restrictive version, with just a couple of JSONPaths and not full Go templates. I think that's still possible to do, if we want to restrict the functionality to just create commands or just set commands, for instance; but if we want to open it up to invoking arbitrary subresources, for instance, we have that option available as well. Questions? Yeah.
A: Oh, this looks great, awesome. Actually, even though we talked back in May about just the create and set commands, and that was probably mostly because we considered that the server would be publishing the structure of the requests being sent, this surface is so interesting as well. What I'm currently interested in is that we should probably sync with the API Machinery SIG, because, if I understood correctly, your current implementation is based on CRDs. Basically, well, obviously, for all the user-created CRDs we will just have to expand the current CRD definition to be able to inject the create commands or anything like that, and that would be great. But I'm asking more about the built-in commands: are you considering going through API Machinery and having this be part of the built-in API? Yeah.
D: I think there's a couple of ways we could do that. The least invasive route for API Machinery is to include it in the OpenAPI document, which we fetch anyway, and OpenAPI supports arbitrary extensions, so we could define our own extension type and attach it there. That, I think, is the lowest risk from the perspective that it requires zero changes to Kubernetes and allows us to publish the data. If we wanted it to be more tightly integrated and be a first-class notion, we could expose it as part of the discovery service, for instance, if we wanted to, or we could ask for a new API endpoint.
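Such an OpenAPI vendor extension might look roughly like this sketch; the extension name "x-kubernetes-cli-commands" and the field layout are invented for illustration (OpenAPI only guarantees that "x-" prefixed keys are permitted):

```yaml
definitions:
  io.k8s.api.apps.v1.Deployment:
    x-kubernetes-cli-commands:
    - path: ["create", "deployment"]
      operation: create
      flags:
      - name: name
        type: string
      - name: replicas
        type: int
        default: 1
      requestTemplate: |
        {"apiVersion": "apps/v1", "kind": "Deployment",
         "metadata": {"name": "{{.name}}"},
         "spec": {"replicas": {{.replicas}}}}
```

Because the extension rides along in the OpenAPI document the client already fetches, no server-side API change would be required to start experimenting with it.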
D: You know, being able to load both in parallel, for instance. Or maybe, if you only need that and you don't need the OpenAPI, there might be some reason to have it as a separate endpoint that's fetched separately.

A: Yeah, a great idea, and I think we should talk about it a bit if we want to pursue this idea. You know, with the amount of resources that is basically growing every single day, and especially with CRDs getting more and more popular, because, well, obviously the built-in types are one thing, but on top of that there are CRDs, and people want to be able to do "kubectl create my-fancy-CRD." For that we will need to expand the CRD definition that we currently have with the templating language that you described. Whether it will be exactly this one or slightly different doesn't matter, but this type of templating language would have to be part of the CRD definition, so that, similar to what we do currently for server-side printing for CRDs, we would then be allowing people to define the create operation, or maybe even more operations, in the CRD definition. So definitely, yes, that's not out of the question.
D: My pleasure. And one thing we could explore as a possibility: like you said, an API change for CRDs would take, you know, we're talking six months minimum, and then there's the rollout to all the new clusters, right, and that's one thing. So one possibility is this: a lot of CRDs are generated, for instance from types in Go files in the source.
D: So since they're generated anyway, we could continue to check for the annotation, just for the purposes of supporting old clusters, without having to wait nine months to see this actually be viable, and then transition from the annotation to the first-class mechanism as it becomes available. But why don't I put together a KEP that describes the format of the command, as well as potential rollout strategies and support through CRDs and core resource types, and we can have a discussion there? Yeah.
A: I think, yeah, the KEP will be the best way to start this, and then maybe during KubeCon we can get with SIG API Machinery and push this thing forward. Maybe in 1.14 we could get some alpha versions out there and then build on top of that, so at least we have the API in place, and then we can catch up with it on the kubectl side.
A: Next time, yeah. I was hoping actually to hear from Shawn about the updates on the progress of extracting kubectl from the main repo, but I'm guessing there's still some work going on, so we will definitely be talking about it in two weeks. So thanks very much everyone, and see you in two weeks. Bye.