From YouTube: Kubernetes SIG CLI 20180328
B: Nor am I going to go over it all; it's three pages' worth of notes, but this is really what we've been working on in the last three months. I wanted to go over some of the highlights very quickly and see if I could get some feedback from you. Not necessarily now, because we probably won't have time for that, but really just to open the door for feedback and see what you think about some of the things that we've been working on so far this year.
B: One of our main missions is to make sure that contributors are having a similar experience across repos. It's very important for us, because we hear a lot of feedback, not only from new contributors but from current contributors, that different repos have different workflows, and those repos are all included in the Kubernetes organization. So we're trying to make it so that there's one single source of truth for contributor guide items. Some of those things are, for instance, the new stale-issues-by-label definitions.
B
But
if
you
do
have
your
own
contributor
guide,
that's
fine,
but
we
should
be
like
at
least
linking
that
into
the
main
source
of
truth.
As
far
as
what
your,
what
your
guidance,
how
your
guide
is
different
versus,
not
we've
also
been
working
on
some
issue
triage
and
issue
management
related
things,
and
we
wanted
to
see
what
their
thoughts
were
on
applying
the
current
issued
triage
guidelines,
so
we
have
across
all
kubernetes
github
repos
again,
link
is
in
the
agenda
for
feedback
there
and
then,
more
importantly,
how
do
you
find
out
about
these
changes
again?
B
This
is
one
of
the
main
reasons
why
we're
doing
this
Roadshow
and
we've
actually
defined
that
in
our
charter.
Now
one
we're
going
to
be
lazy,
consensus,
consensus
aim
with
a
time
box
of
at
least
72
hours
to
both
all
these
mailing
lists
and
I'm
about
to
that
I'm
about
to
say,
and
also
with
a
github
issue
link
and
that's
going
to
go
to
contributor,
experienced
email,
distro.
It's
also
going
to
go
to
the
cig,
leads
to
email,
distro
and
then
also
the
kubernetes
development,
email
distro.
B
And
then
we
are
going
to
be
starting
to
announce
these
major
changes
at
the
community
meetings
on
Thursday
and
the
announcement
section.
It's
very
important.
So
if
you
do
want
to
know
these
things
that
those
are
the
those
are
the
areas
that
you
should
be
following
and
on
that
same
topic
charter,
we
define
our
charter.
We
do
need
to
update
it,
because
we
are
now
following
the
guidance
that
the
steering
committee
had
proposed
with
chair
with
the
words
chair
versus
lead
and
technical
lead,
and
then
we
also
are
defining
our
sub
project.
Owners.
B
I
still
need
to
do
a
poll
against
that,
but
our
old
Charter
is
linked
in
the
meeting
agenda.
I,
don't
think
you
need
any
help
with
your
charter,
but
if
you
do
we're
here
to
help
you
and
then
on
the
same
topic
of
contributors
having
the
same
experience,
we
do
have
a
new
version
of
the
contributor
guide,
thanks
to
George,
Castro,
Gwyn,
Tim,
pepper
and
several
others,
and
if
you
could
review
and
make
comments
and
postholes.
B
That
would
be
awesome
for
us,
because
this
is
the
only
way
we're
going
to
get
and
a
single
point
of
truth.
Contributor
guide,
we're
also
now
underway
for
the
developer
guide
portion
of
that,
which
is
something
that
we've
heard
definitely
needs.
It
needs
to
be
reworked,
and
things
like
that,
so
the
issue
link
is
in
the
agenda
today.
So
if
you
could,
maybe
let
us
know
what
we
should
definitely
be,
adding,
maybe
like
from
fresh
documentation,
standpoint
or
updates
to
documentation
that
we
have
that
are
just
stale.
Let
us
know
that
as
well.
B
We
hope
to
have
something
out
within
the
next
three
months
as
far
as
a
first
shot
at
that,
and
then
the
next
thing
that
I
wanted
to
talk
about
was
mentoring.
Luckily,
in
in
your
crew
right
now,
you
do
have
a
few
people
that
are
working
with
me
on
some
of
these
mentoring
testing
activities.
I
guess
you
could
say,
because
all
of
these
are
different
than
what
other
open
source
projects
run
for
mentoring
and
we're
trying
them
out.
B
One
of
the
things
that
I
you
know
in
building
of
these
programs,
I've
taken
a
look
at
is
your
time.
Investment
in
most
mentoring
programs.
Time
is
usually
the
number
one
reason
why
people
don't
do
it.
You
know
you
just
don't
have
it
or
you
don't
have
the
time
to
devote
to
one
person
etc,
so
these
programs
are
created
with
that
in
mind,
one
of
them
is
mentors
on
mentors
on
demand,
which
is
meet
our
contributor
series.
B
Once
a
month,
it's
very
similar
to
office
hours,
a
bunch
of
a
bunch
of
contributors
get
on
the
call.
We
do
a
live
stream
on
YouTube
and
people
ask
questions
via
the
media
contributor,
slack
channel
or
Twitter.
Questions
have
ranged
from.
What's
your
favorite
color
how'd
you
get
into
open
source
like
kubernetes.
Why
is
my
and
and
tests
not
running?
B: So ideally, what we're doing here is giving people workshops in a very elongated timeframe, between two and three months, at their own pace, because two to three months is what our community membership guidelines require for most of the levels. So it's pretty much a self-paced, semi-structured learning environment, and it's not one-to-one.
B
It's
like
in
a
longer
form
pretty
much
and
we're
doing
a
one-time
one-on-one
session
with
a
buddy
and
that
buddy
is
a
level
higher
than
you
and
you
can
either
pair
program
do
code
review
sessions
AMA
whatever
you
want
to
do
in
that
one
hour,
but
it's
only
a
one
hour
one-time
deal
so
we're
testing
that
within
the
next
two
weeks.
So
that
should
be
really
cool
and
more
information
about
that
shortly
and
then
I
know,
slack
maintenance
is
slack.
B
It's
a
very
hot
topic
with
our
crew
right
now,
but
I
wanted
to
talk
about
slack
maintenance,
because
we
are
onboarding
serious
amount
of
people
via
slack
right
now,
the
last
30
days
it's
2,000
people,
which
means
that
our
slack
instance
now
is
34,000
people.
So
we
have
created
slack
guidelines,
there's
slack
admins,
if
you
want
to.
If
you
want
to
join
that,
if
you
need
anything,
feel
free,
but
we
just
wanted
to
let
you
know
you
should
probably
start
pinning
things
to
some
of
these
channels,
because
that
might
help
you
things
like
your
charter.
B
Your
meeting
notes
your
agendas
and
then
last
but
not
least,
and
then
I
swear
and
done
with
this
rant
use
our
office
hours.
We
need
more
folks
to
help
us
answer,
questions
and
feel
questions
from
that
and
we're
fielding
questions
from
Stack
Overflow,
the
user
office,
our
slack
Channel
and
occasionally
Twitter,
and
then
that's
it
for
me
again.
If
you
have
any
other
feedback
for
us
want
to
get
involved
in
any
of
these
other
programs
reach
out
via
our
mailing
list,
our
slack
channel
pigeon
al
get
to
me.
However,
you
need
and.
C: ...about this new working group that we've created. The goal is to move the apply functionality from kubectl to the server side. This is a very significant effort which is going to last for multiple quarters and multiple versions of Kubernetes, we think. So please, if you're interested in working on that journey, join; I will include the link in the notes later. I don't have my machine right now, so I'll include the link. Please join if you want to.
D: Release-wise, there should be an alpha for the server-side apply. It's going to be a separate endpoint, so initially apply is not going to be affected; in fact, the current apply won't be affected for multiple releases. But this is a heads up: it's going to be an extra endpoint, the server-side apply, and we'll keep SIG CLI up to date as we make progress.
E: Are you folks working on a KEP for this? Because this is exactly the type of effort that we created the KEP process for: where there's a big change that needs to be coordinated across a lot of different SIGs and that's going to have a large impact on a lot of the community.
E
It's
going
to
do
that,
having
something
written
down
that
lays
out
exactly
what
it
is,
making
sure
that
there's
clear
before
folks
start
writing
code,
I
think
seems
to
be
appropriate
for
a
change
that
has
this
wide-ranging
it
wondering
if
we
needed
one.
So
that's
confirming
thank
you
and
I.
Think
you
know,
as
as
you
get
the
details
down,
you
know.
This
is
definitely
the
type
of
thing
that
sega
architecture
is
appropriate
to
bring
stuff
to
they're.
E
Already
presented
in
sick
architecture,
but
not
as
a
cap,
and
it
wasn't
in
a
formal
thing
that
could
be
approved-
it's
you
know
with
some
of
Daniels
early
work
describing
this
stuff
and
that's
still
very
far
from
a
spec.
That's
ready
to
be
implementable,
so
I
think
there's
still
quite
a
bit
of
groundwork
here
before
code.
You
start
getting,
he
start
being
written
on
this
yeah.
A: We had a discussion of a rough timeline for the future, and we also agreed that initially this will be developed in a feature branch, so that we don't have to worry about it departing from Kubernetes. It will at least take time to get this thing properly set up before we can actually start cranking out some code, and, as Daniel mentioned, once he's sufficiently happy with the code to call it alpha, we will merge this into the main Kubernetes repo.
E
Again
I
mean
you're
there,
you're,
focusing
on
the
code
in
the
process.
I
think
there's
a
bunch
of
work
that
needs
to
be
done
before
you
even
start
writing
code
in
a
future
branch,
because
once
code
is
written,
people
feel
attached
to
it.
They
don't
want
to
change
it,
and
so
we
really
need
to
explore
the
space
and
I
wouldn't
view
these
things
as
process
or
formality.
E
This
is
really
a
decision-making
process
to
make
sure
that
all
the
needs
are
taken
care
of
that
everybody
is
in
the
loop
so
that
you
don't
get
surprises
later
on,
because
the
last
thing
we
want
to
see
is
somebody
write
a
bunch
of
code
in
a
future
branch
only
to
find
like
well.
You
made
some
assumption
over
here
that
invalidates
a
whole
ton
of
work
that
folks
have
done.
So
this
is
really
about
communication
and
making
sure
that
decisions
don't
get
relitigated
once
they're
made
and
so
I
think.
E
A
lot
of
the
ways
that
that
API
machinery
has
worked
in
the
past
has
been
to
sort
of
do
stuff,
not
tell
anybody,
and
then
everybody
else
picks
up
the
pieces.
I
mean
maybe
that's
being
a
little
unfair
and
a
little
aggressive,
but
that's
the
way
it
feels
to
a
lot
of
folks
outside
of
API
machinery.
I
really
want
to
make
sure
that
this
particular
effort
doesn't
go
yeah.
E
Nobody's
saying
that
it's
gonna
happen
in
release
and
that's
another
reason
for
the
cap
is
to
be
able
to.
You
know,
set
an
effort,
that's
gonna
stretch
across
multiple
releases,
so
a
cap
is
beyond
a
single
release.
It's
really
an
effort,
that's
gonna!
That's
gonna!
You
know
take
a
lot
of
time
and
this
this
fits
exactly
into
that
type
of
thing
and
then
also
as
the
stuff
incrementally
shows
up
in
multiple
releases.
We
can
talk
about
in
the
cap.
What
are
the
things
that
got
done
in
this
release?
E
And
then
what
is
the
overarching
plan
for
how
everything
fits
together
and
where
things
are
going?
Which
really
brings
a
lot
of
a
lot
of
context
to
the
release
notes
and
it
also
becomes
a
huge
plus
when
it
comes
to
writing
the
NGS
or
documentation
for
features,
because
that
also
has
lagged
because
things
are,
you
know
like
we
forgot,
we
forgot
to
document
that
you
know
you
could
use.
You
know
specific
labels
in
the
downward
api
because
nobody
wrote
anything
down
the
code
just
got
written
and
then
boom.
F: You're not unfamiliar with design proposals, so just think of everything that you would have been asked to do for a design proposal: a KEP is basically asking you to do that with a little bit more structure. We're trying to iterate on that to have a better process to communicate, and to take care of some of the deficiencies of the previous design proposal process.
E
And
I
think
I
think
the
other
part
of
it
is
that
design
proposals
you
like
boom,
it's
a
design
proposal
is
it
approved.
It
is
in
progress
what
it
so
we
didn't
have
a
lot
of
metadata
around
that
right
that
you
know,
if
you
look
in
the
design
proposals
directory,
there's
like
a
gazillion
things
who
knows
what's
current
and
what's
not
and
what's
abandoned
right,
that's
just
a
total
total
pile
of
markdown
and
then
the
other
thing
is
that
those
things
weren't
well
communicated
so
I
think
I
want
to
see
us.
E
You
know,
as
we
have
big
changes
like
this,
we
should
be
sending
out
pointers
to
these
things,
to
kubernetes
dev,
so
that
everybody's
aware
that
this
is
coming
down
the
pike
and
they
have
a
chance
to
actually
like
they
can't
say.
I
wasn't
warned
right,
you
know
and
I
think
we
don't
have
that
that
process
around
design
proposals
right
now.
It's.
E: The goal out of this is also to host these things on a website so that they're more discoverable; there's SEO, so people find these things when they're part of the contributor community and developing things. Beyond that, I think if there's a change that is scoped within a single SIG, it's up to that SIG to decide whether they want to use the process internally or not. It's when we start seeing stuff that really cuts across SIGs that, I think, it really applies.
G: ...the entire printer stack in kubectl. There should be a link to the proposal on the agenda, and to go over it quickly: the gist is to tackle this in multiple stages by introducing, essentially, structs that would hold flag values for the different types of printers that there are in kubectl. We've identified five different types of printers, ranging from template, custom columns, name, JSON and YAML, and then human-readable.
G: We would basically provide an all-encompassing struct that commands would then use to obtain a printer. The way that would work is that the all-encompassing struct would essentially aggregate all of the other structs, and it would abstract the logic of figuring out, given a set of flags from a command, which printer is best suited to handle the output for that command. So all commands would do is simply instantiate it, setting any additional options
they need, then call a ToPrinter method. The ToPrinter method would figure everything out, and the command would end up with a printer that knows how to print whatever the command needs. The last step would be to sweep the codebase afterwards and tackle any sections that also make use of printing flags but are not necessarily commands, so that they also use this pattern. The goal is to have basically every portion of the codebase that depends on flag usage and flag values
using this pattern to obtain a printer. The human-readable printer PR is currently in progress. There were some design decisions that had to be thought over, since the human-readable printer is mainly used in the get command, so there are a few changes that need to be considered in the get command once, or after, this printer is in, in order to wire the printer up to that command.
A: The generators that are under the covers of kubectl run mimic the docker run command, and the proposal is that kubectl run would only create a pod, the way you specified it, so that it is logically identical to a docker run command. As a replacement for kubectl run being able to create whatever resource you care about, we're proposing to move towards kubectl create, naming the resource that you want to create, whether that would be a job, a cron job, a deployment, a pod, a replication controller, a stateful set.
A
Whatever
else
you
can
come
up
with
the
cube
CTL
create
will
be
the
road
to
go
with,
so
that
will
be
from
a
user
point
of
view.
From
the
implementation
point
of
view.
The
generators
that
are
behind
the
cube,
CTL
run
are
not
typed.
They
literally
hold
the
parameters
that
are
passed
through
the
run
command
as
a
map
interface
as
a
map
of
strength,
interface
objects,
which
means
that
we,
we
lose
the
type
information
above
about
the
the
flag
that
is
being
passed
to
the
command.
A: With the change to kubectl create subcommands, each generator is targeted specifically at its own command, and we can fully embrace the strong typing of the Go language. This is what we currently do with all the kubectl create subcommands that we already have, and actually the majority of resources already have a proper kubectl create subcommand. In the longer run, Phillip discussed a topic where we would be able to dynamically create resources based on the resources available on the server.
A
But
for
that
there's
an
open
proposal.
I
will
be
talking,
hopefully
about
this
a
little
bit
more
during
the
face-to-face
meetings
during
coop
con
in
Europe
that
we
will
be
held
in
several
so
hopefully
we'll
be
able
to
move
the
proposal
a
little
bit
further
down
the
road.
Are
there
any
questions?
That's.
H: I think this is a big topic; it is an arbitrarily deep pool. I agree. Run, I think we can all agree, is just kind of a mess, and one of the problems with it is that it's intended to be this easy, "you don't know what you're doing" way to access the Kubernetes APIs, but then, with all the flags and all the complexity, you have to really, really, really know what you're doing.
H
It
kind
of
loses
the
validity
of
that
argument
for
all
the
other
flags,
because
at
that
point
you
should
probably
know
what
you're
doing
then
use
one
of
the
generator
sub
commands.
I,
think
the
generators
in
general
or
something
we
really
need
to
think
about.
There's
two
things
we
probably
want
to
discuss
I
do
think
they're
very
useful
I've
used
them
myself
when
the
documentation
is
a
little
bit
hard
to
understand
for
certain
things
or
whatnot,
you
can
just
get
something
up
and
running.
You
can
see
how
it
looks.
You
can
see
how
it
works.
H
You
can
poke
at
it
a
little
bit
and
then
backwards
figure
out
what
you
need
to
do
when
the
other
resources
aren't
available.
So
that's
really
useful,
but
for
what
I
do
wonder
is
if
creating
the
resource
is
the
right
thing
to
do
or
if
we
really
think
rethinking
generators
if
we
want
to
move
away
from
just
creating
the
resources
into
just
creating
the
convey
can
piping
that
to
something
that
creates
the
resource
to
have
a
more
consistent,
tooling
chain
and
have
interoperability
between
different
workflows.
H
Some
of
the
questions
they're
going
to
be
around.
Do
you
need
like
a
TTY
or
these
sorts
of
things?
It
may
impact
that
right,
and
it
may
mean
we
need
to
make
changes
to
our
other
tool
chains
to
support
or
to
our
other
tools,
to
support
what
the
generators
do
today.
The
other
thing
ma
che
mentioned
was
the
how
we
support
version
skew,
how
we
support
extensibility
and
how
we
have
a
maintainable
code
base.
H
There's
a
lot
of
ways:
we
could
do
this
that
make
all
those
things
much
more
much
better,
I
think
it's
a
matter
of
coming
up
with
which
ones
we
think
are
best.
We
can
do
it
a
schema
based
approach
where
it's
metadata
on
the
schema
that
exposes
certain
flags
and
then
populates
templates.
We
could
do
a
sub
resource
based
approach
where
a
request
body,
a
specialized
request
body
that
contains
the
flags,
is
exposed
and
does
the
business
logic.
H
We
can
potentially
explore
a
GRDC
based
approach
where
we
publish
a
service
JRPG
service
that
takes
protocol
buffer
requests
and
puts
the
flags
in
those
but
I
think
from
a
high
level.
It
breaks
down
to
either
we
create
it
was
done.
On
the
client
side,
we
published
a
schema
and
there's
just
one
generator
piece
of
code
that
can
look
at
a
generic
schema
and
then
figure
out
what
needs
to
do
or
move
the
whole
thing
server-side
and
expose
it
that
way.
Those
are
both
big
things
that
need
to
be
thought
through.
A
That
are
register
through
a
great
API
servers
or
resources
that
are
OCR.
These
will
make
them
consumable
by
the
by
the
client
more
easily
whether
versus
the
current
approach,
where
it
is,
it
is
hard
or
you
just
need
to
do
it
manually
by
creating
a
young
definition
of
the
of
the
resource
that
you're
you
want
to
have
cleaning
the
cube
detail.
Ron
is
more
like
a
tech
type
that
I'm
seeing
that
we
should
pay
to
the
community.
A
It
is,
it
is
currently
over
bloated
and
we
should
slowly
stop
adding
new
stuff
to
it
and
I
personally
slowly
starting
to
do
this
so
that
there
is
no
new
generators
introduced
to
the
cube
CTO
brand
command.
But
rather
we
will
be
slowly
preventing
from
new
stuff
appearing
there
and
in
the
long
run
deprecating
it
so
that
we
will
be
able
to
cut
the
the
future.
It
provides
to
just
a
single
cube,
CT
of
run
pod.
A: So the create subcommands are the natural thing that will happen, so that people can interact with the platform. I spent numerous hours talking with OpenShift evangelists, and what they always forced me to do is provide a flow for whatever solution I come up with that is purely CLI-based, or UI if possible: if I can come up with a series of commands with which I can achieve it, then whatever I told them is the solution to their problem.
A: The biggest reason is that the generic create will require us to propose significant changes to how the schema is described on the server, which in turn requires some cross-SIG discussion with SIG API Machinery on what changes we require for the generic create command. So the generic create will take a little bit more time, but it is something that we should be discussing in the meantime as well.
H
Well,
I
think
that
makes
sense
as
part
of
control
apply
moving
server-side.
We
need
to
actually
re-architect
the
schema
or
resources
anyway.
So
I
don't
think
we
should
block
that
on
this
clearly,
but
it
is
the
same
area.
It's
the
same
codebase
in
the
same
area
of
ownership,
so
we
could
probably,
as
a
follow-up,
say,
hey
we
want
to
augment
the
new
schema
we
create
for
apply
with
this
worse
stuff.
Well,
I'll
be
a
kook
on
Copenhagen.
G: So in the last meeting we briefly discussed the current design of plugins, as well as a potential new design that could resolve the shortcomings that have been identified so far. Currently, kubectl invokes a plugin binary and passes flags to it through environment variables. This set of flags consists of the global flags, the global kubectl flags, as well as any flags that the plugin requested via the plugin .yaml file. Basically, this design ends up pretty cumbersome.
G
So
this
leads
us
to
two
naming
conventions.
Basically,
I
believe
that
this
approach
would
simplify,
plug-in
distribution,
plug-in
installation
because
you
no
longer
have
you
know
a
directory
that
has
to
live
somewhere
and
that
directory
has
to
contain
you
know
a
specific
file
and
that
specific
file
has
to
basically
identify
the
plug-in,
and
it
has
to
you
know.
Basically,
this
credit
plug-in,
but
rather,
if
you
control
would
infer
the
name
from
the
file
name
of
the
plug-in
and
then
the
plug-in
itself
would
take
care
of
handling
any
flags
that
are
passed.
G: As for the arguments, and any information that's provided via environment variables, the plugin itself would have full control over what it does with what the user provides. This leads to an outcome that allows plugin authors to sort of inject themselves into subcommands: if they have specific behavior for a resource in a particular subcommand, then they are able to do that with this design, by basically filtering for a specific behavior and making their plugin active only for certain parameters that kubectl and the command pass. So the next section is flag handling.
G: I've included in the proposal document a few examples, or at least a few short descriptions, of how other systems tackle plugin design. Git plugins mimic the proposed design fairly closely. Helm mimics the current design a bit, in that it also requires directories to exist in a particular location, and in the directory a file has to exist for the plugin.
G: Another system takes a slightly different approach, in that it actually provides an API for plugins: it doesn't force plugin authors, but it encourages them to go through these API calls to obtain very specific information about very specific resources on the service. So of all three, in my opinion, while all three provide use cases for fairly complex behavior, the git plugin model would be the easiest to consolidate and to distribute.
I
Have
a
comment
and
two
questions:
the
first
is
I
want
to
give
a
little
bit,
maybe
more
detail
to
the
helm
structure
here
so
helm
plugins
have
the
ability
to
there's
a
lot
of
static.
Is
it
just
me
who's
hearing
that
okay,
there's
one
home
plugins,
have
the
ability
to
kind
of
hook
into
something,
and
so
there
are
places
where
it
provides
a
default.
For
example,
home
fetch,
HTTP
colon,
slash,
slash
something
it
says:
okay,
HTTP
HTTPS.
I
We
have
a
default
for,
but
somebody
can
come
in
and
change
the
scheme,
and
so
you
can
say
home
fetch,
that's
three
colon
slash,
slash
and
whatever
the
rest
of
the
URI
is
and
says:
hey,
I,
don't
know
how
to
handle
s3
I'm
gonna
go
see
if
there's
a
plug-in.
That
knows
how
to
handle
it,
and
there
actually
is
a
plugin
you
can
install
and
if
you've
got
it,
helm
drops
down
to
the
plugin
for
that
passes
it
the
information,
so
it
can
plug
in
to
things
along
the
way.
I: So to speak, it's a hook-like mechanism that says: hey, we may not implement it, it's not the default, but plugins can go implement that. And so there's a complexity there in the way it works. The Helm plugin.yaml file doesn't just say "here's stuff about the plugin"; it also declares hooks: when you install a plugin, or when certain events fire inside of Helm,
I: you can hook into that and do things, and Helm will reach out. So that goes beyond the git plugins, which are `git <some-name>` with git looking for a `git-<name>` binary in the pattern, because it gives you the ability to actually plug into the execution along the way and do things. And in the Helm 3 design we're actually looking to expand the use of that into more places, based on requests we've had during the Helm cycle. That's one of the things about Helm plugins, and that actually gets me to my two questions.
I: Okay, I did have one more question, real quick, and I'm asking because I don't think Carolyn is here this week and she brought it up last week. There were places where folks, like the Service Catalog, and I can imagine others, would want to take over commands like get and describe, to say: okay, in my case, maybe I want to describe things differently, in a special way, or provide other information. So I could do something like kubectl get on some object that's instituted as a CRD, and the desire was to say, hey...
I
A
We
intentionally
left
that
bit
for
the
discussion,
because
we're
hoping
to
open
a
a
proposal
to
replace
the
old
one
and
start
the
discussion
over
there,
because
we're
not
100%
sure
whether
the
approach
of
yes,
I
wanna,
allow
override
all
the
core
commands
or
we
don't
allow.
We
don't
allow
overriding
any
is
not
where
we're
not
seeing
out.
There
are
lots
of
pros
and
cons
for
each
of
the
approach,
but
none
of
the
approach
currently
stands
out
significantly
but,
like
I,
said
to
address
a
little
bit
more.
A: for the use case where they want to override the get bit, we were imagining that, for certain cases, you would be able to extend not just the main commands but also subcommands of a command. So in the case of kubectl get on my foo resource, you would write a plugin with the name kubectl-get-foo, which would replace the behavior of `kubectl get foo`. But nothing like that is set in stone.
I: It might be useful as general tooling. The way it's written today, it's part of Helm; it is a subpackage within Helm, I want to say, that's handled separately, but it could probably be broken out and refactored into a common library. Okay. The plugin handling is maybe more similar to something like the hook system in some programming languages, which says: hey, before we fire off the default, or in the case where it isn't our default, we're going to go look to see what else is out there, what other implementations are out there.
I: It's designed to be embedded. There are multiple implementations as Go libraries, and JavaScript ones too; getting all the cross-platform support for all the things you might want to cross-compile to can be a little bit of a pain, but there are pure Go implementations, multiple of them. That's one of the things that caught our attention: the ability to easily use it as a plugin language, because it's designed to be embedded.
H: A couple of things. One is that we want to talk about the direction of kubectl for plugins, because there are two kinds of architectures here: one is a very composable, Unix-style, focused-commands-doing-focused-things architecture, and this is kind of the opposite of that, right? In both cases, what we want to foster is the ability to develop independent pieces of the system, have folks publish them, and then have them be used together, Unix-style, with everyone publishing their own.
I: I think that actually gets to the "why" of the way the Helm plugins work: we like the UNIX philosophy, or the Linux philosophy, of piping one thing into the next, so to speak. Having all kinds of arbitrary subcommands doesn't always necessarily make sense, because if you could just pipe them together, is there a reason to have them as a plugin, right? Why not have that composition-based approach?
A: I would rather not make any commitment on where we want to move this. Definitely, beta is not a goal at this point in time; I'm not sure how far we will be able to go with it. This is more to just kick off the discussion. Currently there have been a lot of discussions over the issues and over the PRs which are trying to fix the current plugin mechanism.
A
So
it's
rather
to
kick
off
discussion
to
move
this
forward
to
something
that
will
be
more
reliable,
simple
at
the
same
time,
but
with
the
possibility
to
extend
further
down
the
road,
like
you
said,
so
that
that
will
be
the
current
approach
that
we
would
like
to
have.
The
idea
is
to
replace
the
old
mechanism.
This
will.
A: This probably means that the two will live together for a short while, and then we'll entirely ditch the old mechanism, and hopefully, with some feedback, we'll be able to further expand the current mechanism. One of the inputs for the current approach is what Carolyn said last time: that they literally end up parsing all the flags by themselves, which leads us to the current approach. And the Service Catalog, being the biggest consumer of the plugins in what we discussed, is probably one of the important inputs, yeah.
H
I
think
this
is
clearly
better
than
the
other
approach.
The
other
thing
wasn't
playing
us
that
much
I
think
we
had
the
complexity
of
Matt's
approached
suggested
for
helm,
which
is
intended
to
do
more
powerful
things,
but
we
weren't
actually
doing
more
powerful
things,
so
it
just
it's
kind
of
a
mess
that
sounds
good
without
her
moving
forward
with
this
yeah.