From YouTube: Kubernetes SIG CLI 20170830
A: Okay, I'll get started. I have some status updates regarding kubectl plugins. As I mentioned a couple of meetings ago, we are officially alpha with plugins for the 1.8 release. That doesn't necessarily mean many new features are coming in 1.8; actually, I think most of what we have as an alpha release was already in the 1.7 release. However, we are going to be officially alpha, which means there will be some documentation around plugins and things like that.
A: Some docs, and I'm working on those right now. I'm actually almost done with a proposal, and hopefully going to send a pull request for the documentation section today. While working on docs for plugins, I realized that we don't have many docs for kubectl on the documentation website, so that's something I wanted to shout out here, in case any of you are feeling excited about writing docs for kubectl beyond the ones we generate automatically.
A: That's something we are kind of in need of; there are not many docs for kubectl. But anyway, for plugins I'm almost done and hopefully going to open a pull request later today, and then we will have some documentation about the underlying plugin structure. That's it for me in terms of status updates. Other than that, a lot of code reviews, as always; at the moment I have such a good amount of open tabs in my browser that I can't even see the favicons.
B: A question on the plugins. One thing that's really interesting to me about the plugins is the distribution piece, because without that, it's like... downloading the binary and stuff is not... You could run them individually, right, if you download the binary?
B: I mean, I'm thinking of a couple of different possibilities, but one thing I can think of is: if you have a CRD, or if your plugins are tied to manipulating certain resources. Say you introduce a CRD for a new resource type, and now you want new kubectl commands to manipulate that resource. We could at least, in the CRD, add an annotation that links to what the plugin is, and allow kubectl to install it off of that. I'm not thinking of some massive repository or something like that.
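As a sketch of that idea (the annotation key and URL here are invented for illustration; nothing like this exists today), a CRD could point at its companion plugin like this:

```yaml
# Hypothetical only: an annotation linking a CRD to the plugin that knows
# how to manipulate its resources, so kubectl could offer to install it.
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.com
  annotations:
    sig-cli.example/plugin-url: "https://example.com/plugins/widgetctl.tar.gz"
spec:
  group: example.com
  names:
    kind: Widget
    plural: widgets
  scope: Namespaced
```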
A: We have that in the plans. However, we don't have a timeline for it yet. Currently, installing plugins basically depends on how the plugins are distributed. If you get, for example, a tar.gz package, you would extract it under the plugins subdirectory of your kubeconfig directory. But there isn't anything really automated at the moment for something like that; that's something I'm...
A: ...writing up in detail in the documentation. kubectl searches three places for the actual plugins. Other than under your kubeconfig directory, you can also install plugins following the XDG directory structure, in places like /usr/share and some of those default locations. That means the package manager of your operating system could be installing plugins in one of those default places, and the plugin framework would be able to detect them.

B: I see; but specifically for plugins?
A: Exactly. You could have them packaged the way your operating system expects them, or even through things like brew or, I don't know, other package managers. So it's not only under your home directory; they can be installed in some places on your operating system, and kubectl will be able to detect those. But anyway, I'll write that up in detail, and eventually, for people that distribute plugins, we'll have some options about how they deliver them, other than just having you download them and extract the package yourself.
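A minimal sketch of that lookup order in Python. The exact directory names and the plugin.yaml descriptor are assumptions based on the discussion above, not the shipped implementation:

```python
import os

def plugin_search_dirs(home=None):
    """Candidate plugin directories, highest priority first: the plugins
    subdirectory of the kubeconfig directory, then XDG data directories
    (e.g. /usr/share), where an OS package manager might install plugins."""
    home = home or os.path.expanduser("~")
    dirs = [os.path.join(home, ".kube", "plugins")]
    xdg = os.environ.get("XDG_DATA_DIRS", "/usr/local/share:/usr/share")
    for base in xdg.split(":"):
        dirs.append(os.path.join(base, "kubectl", "plugins"))
    return dirs

def find_plugins(search_dirs):
    """Treat any subdirectory containing a plugin.yaml descriptor as a
    plugin; earlier directories in the search path win on name clashes."""
    found = {}
    for d in search_dirs:
        if not os.path.isdir(d):
            continue
        for name in sorted(os.listdir(d)):
            descriptor = os.path.join(d, name, "plugin.yaml")
            if name not in found and os.path.isfile(descriptor):
                found[name] = descriptor
    return found
```

This also shows why a package manager can participate: dropping a descriptor into any searched directory is enough for detection.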
B: So, Antoine is also changing the validation from Swagger to OpenAPI, and this has a couple of benefits. One is that Swagger is being deprecated and not supported anymore, so that's going to be gone; we had to switch over. Also, OpenAPI will aggregate extensions, so when you have extension API servers installed, we can actually validate their schemas as well.
B: It also happens to be a lot faster; I think he got about a 10x performance speedup, which is just nice. And it's a lot cleaner now. The old validation code evolved organically and had to work through things like multiple API groups existing but not having all that data in one place, so the code actually got pretty complicated as it tried to switch back and forth between API groups.
B: The OpenAPI spec allows us to do things in a much cleaner, simpler way, and having the hindsight of seeing how the code evolved over time, we were able to rewrite it accommodating all the requirements up front in a clean way. And then the other piece, I thought I'd just add a small thing here: looking at what we were able to do with the validation, and how much that was able to impact the structure of the code, I started prototyping what it would look like to rewrite apply from scratch.
B: Knowing now that we're going to have multiple merge strategies, that we're going to want to evolve those merge strategies over time and add new ones, and that we're going to add new patch directives over time, we should be able to simply drop in new patch directives for particular patch operations without updating dozens of functions across many, many files and that sort of thing. So probably next week I'll push out an example of what that might look like. It involves a couple of structural changes.
B: So one thing it does: instead of doing two two-way diffs, which is a clever idea and kind of a nice algorithm, it does something else, because the two two-way diffs made it really hard to evolve the structure. There were many arguments that had to be threaded through the entire piece: are you adding? Are you just doing adds? Are you just doing deletions? A given function would do very different things based on the context in which it was executed; the end result and intent of the function was very different.
B: So when you look at it, it's hard to figure out how to modify it, because you don't understand all the ways it's being used. So one thing the prototype does is just one three-way diff; that's one structural change I'm proposing. Another structural change I'm proposing is breaking it up into multiple subcomponents that do separate things. For example, the patch generation itself, turning it into directives in JSON, would be abstracted away from the actual creation of the diff.
B: The diff would just say "add this element" or "delete this element", and the patching code itself would figure out what directives need to be added as part of that, instead of the diff code figuring out the directives. Another thing it separates out is the generation of the tree. The current code has the diff code also traverse the tree, which is somewhat challenging because it's walking two trees together two times and then combining the results of the two tree walks into a third tree.
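For a concrete sense of what "directives in JSON" means here: strategic merge patch already carries directives like `$patch` inline with the data. For example, deleting one list element by its merge key looks roughly like this (the object itself is illustrative):

```yaml
# A strategic-merge-patch body: everything is data except the $patch
# directive, which tells the patcher to delete the matching element.
spec:
  template:
    spec:
      containers:
      - name: sidecar
        $patch: delete
```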
B: Instead, we just look at three elements at once and then decide what action we need to take, based on the values of those elements and what the merge strategy is. It will be using the visitor pattern pretty heavily, to be able to independently add element types, like lists or maps or primitive items, as well as independently add merge strategies for each one of those types: being able to say replace, versus merge, versus just replace the keys, versus another strategy.
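A toy version of the single three-way walk, for flat maps only. This is a sketch of the idea, not the proposed kubectl code; the three inputs stand for the last-applied config, the new config, and the live object:

```python
def three_way_merge(original, modified, current):
    """Naive three-way merge of flat dicts: one walk over the union of
    keys, deciding each action from the three values together."""
    result = dict(current)
    for key in set(original) | set(modified) | set(current):
        in_orig, in_mod = key in original, key in modified
        if in_orig and not in_mod:
            # Deleted by the user since last apply: drop it from live.
            result.pop(key, None)
        elif in_mod and modified[key] != original.get(key):
            # Added or changed by the user: take the user's value.
            result[key] = modified[key]
        # Otherwise: untouched by the user, keep whatever is live.
    return result
```

The point of the single walk is that the decision logic sits in one place per element, rather than being spread across separate add-only and delete-only passes.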
B: One thing I definitely want to do is have the merge strategy be sent to the server. I want the server to be able to take a patch and merge it without looking at any other server state, because one problem we're having is: when the server has a way it wants to merge something, and then we realize that's not the correct way of merging it, that there's some weird bug caused by merging it that way, the patch request doesn't carry how to merge it.
B: It needs to be agreed upon by the client and the server, and when they don't communicate it between them, we can never change it, because they have to agree on it and they don't communicate it. So I want the server to communicate its patch semantics to the client, and that works for apply. But as Jordan and some people from API machinery raised, not all patch clients use apply and look at that metadata; some patch clients just want a stable patch interface.
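For reference, the published Kubernetes OpenAPI spec already expresses per-field merge semantics through vendor extensions; a list that should be merged element-wise by a key is annotated roughly like this (fragment only, field placement abbreviated):

```yaml
# OpenAPI vendor extensions on a list field, declaring that patches
# should merge elements rather than replace the list, keyed by "name".
x-kubernetes-patch-strategy: "merge"
x-kubernetes-patch-merge-key: "name"
```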
B: The client actually has to send it back to the server. So even though the server tells the client how it thinks it should create the patch, the client needs to send it back to the server. Either I paid attention to what you said, or I just totally ignored what you said. Oh yeah, and Fabiano, to get directly to your question: I think the flags might be interesting. I'm not sure, for apply, if it makes sense to supply flags; maybe annotations on the objects themselves instead. But I do think...
A: Yeah, those are definitely very welcome improvements, and definitely very needed, because I think one of the big issues with apply today is that it's not really predictable. You submit it, and you expect something to happen, but you don't know exactly what the outcome will be. This kind of change makes it really predictable.
A: You'd know exactly what's going to happen. And definitely, you don't want to expose too much of the underlying system to the end user, like we do with generators, for example, so I'm not sure if flags are really good there. But yeah, it would be an option, to be able to choose, because if you're doing an apply you're already doing some really complicated stuff underneath. So anyway, it could make sense; but I'm happy, I'm a lot happier already.
B: All right, that's all I got for status, so on to topics. One thing we talked about, Fabiano, a couple of weeks back: we had two CLI tests that were broken, and they were broken for like weeks. In both cases we knew about one of them, but it took us a little while to figure out they were broken, which is not horrible, but...
B: So it's pretty hard. One thing that came out of it that I saw was that in both cases the folks who approved the PRs, once they were notified about the failure, didn't actually fix them, and they weren't the ones who followed up and saw that the tests were broken. This actually makes it really hard: when the project maintainers are not the same people who are approving the PRs, it makes it really hard to maintain the state, and we don't actually have a build cop rotation either.
B: So I think what we need is: one, we need kind of a build cop rotation of folks who are looking at the tests regularly. In one case, another team notified us about a test being broken; in another case, someone from SIG CLI saw that it was broken, but only after about a week. It'd be nice if we just had one person who looked at all the tests every day; most of the time they are green.
B: If you break a test, we want to be able to set expectations and say: we're going to roll back your PR if it's not fixed within X hours. So having an SLO would be very helpful, something we could publish and just say we're going to roll back the PRs. And then lastly, for the build cop rotation, I think it makes sense to have the folks who are responsible for approving code also be the ones looking at it, to make sure that the contributions make sense.
A: Specifically about this failure: one thing I have some trouble dealing with is that it was a flake, or related to a skew test, and that's something that does not run on pull requests. So you develop something, you open the pull request, tests pass, and after it got merged, one week later, we figure out it was broken by the skew tests, specifically a 1.7 kubectl against a 1.8 cluster, or the opposite, I'm not sure. But anyway, yeah.
A: The test dashboard is not something I personally keep an eye on on a daily basis, so that's definitely something we have to fix. I'm not sure yet; it could even make sense to have a mechanism, and I'm not sure if we have that already, that notifies people through email automatically. We'd take, for example, the list of approvers for the kubectl stuff, and do something like an on-call week rotation for the people that get notified about the test failures.
A: So if I'm on call this week, I would receive an email about the failure, and I would be in charge of speaking to the people that generated the problem, and eventually reverting the commit or something like that. Anyway, that's definitely something we have to fix, because in this case specifically, I think it took like a couple of weeks until we had a fix, and the actual fix is not even merged yet; it's sitting in the merge queue right now.
A: We're just waiting for the robots to get it merged. But anyway, that's something we have to get fixed, and I'm not sure if other teams besides SIG CLI have something similar, but for us specifically it's a big issue, because we have this skew problem: tests that do not run on pull requests but can fail later. Yeah.
B: We actually have a number of tests like that. For instance, the GKE tests were broken, which don't run on pull requests, and what's interesting is they were broken in a way that was not specific to GKE. It's just that the GCE tests exercise a specific environment, a set of different things, and then the GKE tests a different set of things, some of which are specific to GKE, and some of which are just open source but with a slightly different configuration.
A: I would also like to bring up something somewhat related to that; not the tests specifically, but this kind of task that we have to do from time to time, and that I don't think we are doing right now. It's related to checking our dependencies, in terms of things that we need to update or not. I don't think anyone today is doing the job of checking, for example, Cobra and pflag from time to time for important bug fixes that would help us, and rebasing...
A: ...onto the newer versions of those dependencies. Cobra and pflag are just one example, but we recently had an issue related to the JSON or YAML parser, which is something someone has been working on. And I bet we have a number of dependencies that are directly related to kubectl, like Cobra and pflag are, and we are probably very much behind the latest stable releases of them.
A: For example, tabulated output of flags. That's something that would actually help us, because the way we display flags in our help, especially if you have a good number of flags in a given command, is not very good. They're now doing tabulated output of flags, but I know when we get to rebase onto that, it will break a couple of minor things that we will have to face. So that's something else.
A: I try to do that from time to time, but currently we don't have a defined process for checking our dependencies and pulling bug fixes from them periodically. Basically, we do it when we discover some new issue that is related to a given dependency; then we rebase onto a newer version to fix that. But we don't have a specific process for checking them from time to time, so that's something we could also think about.
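Even without a formal process, a periodic check could be as small as comparing pinned versions against the latest releases. A sketch; the pins below are hypothetical examples, not kubectl's actual vendored versions:

```python
def behind(current: str, latest: str) -> bool:
    """True if the pinned version is older than the latest release
    (naive dotted-number comparison; ignores pre-release suffixes)."""
    parse = lambda v: [int(x) for x in v.lstrip("v").split(".")]
    return parse(current) < parse(latest)

# Hypothetical pins: (pinned version, latest upstream version). Real values
# would come from the vendor manifest and the upstream release pages.
pins = {
    "spf13/cobra": ("v0.0.1", "v0.0.3"),
    "spf13/pflag": ("v1.0.0", "v1.0.0"),
}
stale = [dep for dep, (cur, lat) in pins.items() if behind(cur, lat)]
```

Running something like this on a schedule would turn "we rebase when something breaks" into a routine report of which dependencies are behind.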
D: Also, during the information-fetching procedure, if the transmission hits some problem, like a networking issue or the cluster being down, kubectl will fall back to the previous hard-coded legacy API resources. The only drawback I've come up with is a small latency problem. Previously, when the user executed kubectl and got the help message, it would cost like 0.2 or 0.3 seconds to print that whole bunch of messages; but when we switch to using the discovery client, the first invocation may cost like one second.
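The behavior described, cached discovery with a fallback to the legacy hard-coded resource list when the cluster is unreachable, can be sketched like this. The class name, TTL value, and fallback list are illustrative, not the real client-go types:

```python
import time

class DiscoveryCache:
    """Sketch of a TTL'd discovery cache with a hard-coded fallback,
    mirroring the behavior discussed (all names are assumptions)."""

    FALLBACK = ["pods", "services", "deployments"]  # stand-in legacy list

    def __init__(self, fetch, ttl_seconds=600):
        self.fetch = fetch       # callable that hits the API server
        self.ttl = ttl_seconds   # ~10-minute refresh, per the discussion
        self._cached = None
        self._stamp = 0.0

    def resources(self):
        now = time.time()
        if self._cached is not None and now - self._stamp < self.ttl:
            return self._cached  # fresh enough: no network round trip
        try:
            self._cached = self.fetch()
            self._stamp = now
        except Exception:
            # Networking issue or cluster down: serve stale data if we
            # have it, otherwise the hard-coded legacy list.
            return self._cached or self.FALLBACK
        return self._cached
```

This also shows where the latency goes: only the first invocation (or the first after the TTL expires) pays for the network call.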
B: The discovery client for 1.8 is still going to be cached using the old mechanism. It won't refresh in two minutes; it has something like a 10-minute refresh. So if you install an extension API server and then run kubectl, yeah...
B: ...the new resources are not going to appear there for like ten minutes, which isn't the end of the world; it doesn't happen that much. But it would be nice if we could fix that, because I imagine a common workflow is: I'm going to do apply to install the extension API server, and then check to see that I can actually use it, right. Okay.
A: Yeah, that sounds really interesting. The only concern I have is that it's the first time that any help in our set of commands is dynamic, because today everything in relation to help is really hard-coded. So there are a few minor details that I would check to make sure of. First is that we only do that request when you actually call the help for it, because help is built every time you run the help for any command.
A: We also generate things automatically from our help content; I think we have some documentation, some things, that are generated automatically based on our help, so we need to check how having dynamic help would play with those. Anyway, just a couple of concerns, because I think it's the first time that anything in help is dynamic, and I know we had reasons in the past to try to avoid that.
A: This, specifically, is a good reason for having it. However, we just need to check those corner cases and make sure it doesn't break anything else, because, for example, if my cluster is down, I don't want that to break help for kubectl. I want to still be able to show help if the cluster I'm currently pointing at is down; I don't want to see an error there, I still want to see help. So, yeah.
B: A really good point; that made me think of probably two things. One is that we generate help documentation that we then post to our reference docs page, and that needs to be statically generated, because there's no cluster behind it. So it may be that, for the help we generate, we should be able to disable that dynamic piece of it, for instance. And that makes sense; I think even if your cluster is down, you should be able to say something like "help local", or I don't know what it would be, but something like that. And we may want to fall back if we are unable to talk to the cluster; for instance, it may be that we want to fall back on some canned help message. Yeah.
A: Exactly: if you can't access the cluster, you fall back to a hard-coded list. However, we probably want to get rid of those hard-coded lists eventually, because even though we fix the issue by doing it dynamically, we will still have that list in our code base, and it will still have to be maintained. So we probably want to get rid of the hard-coded resource list at some point, I would say.
B: And that's a hard area of the code to work with, so thanks, Migi; that's going to be a big help, and probably good groundwork to show all the other uses for OpenAPI, like, for instance, generating the help messages. You can look at how we use OpenAPI; it's a little bit different as a use case, but we can look at it too, as an example of how we dynamically build things around OpenAPI.
C: So obviously, right now, the names of these options are not really set in stone, but they provide sort of a basic gist. So maybe we'd have a command runtime set of options, which would contain everything the command needs to run and print outputs, and then command attributes, or maybe another name, that would receive things such as a base name. And, sorry, this is particularly something we run into downstream, where a bunch of commands are forked for us.
C: We have to pass in, essentially, just maybe a parent-name string argument. So this would solve that, by essentially allowing us to have a custom signature, a function signature, for every single command, and allowing maybe downstream projects, for example, to tweak options, and maybe even further parts of the code base.
A: Yeah, I think that's a nice move in the direction of having our own interface for declaring commands; I mean, an interface that would make sure that every command has the Complete/Validate/Run pattern, for example. I don't think we are at the point yet of having the interface; however, having at least a default way of declaring that, one that ties to a given signature for a given function, is already a step in that direction.
A: Especially because today, like we talked about, the string "kubectl" is hard-coded in many places, and we would like to make that parameterizable. Also, many commands don't take the in reader and the out and error-out writers, so we want to make sure that every command takes those and uses them when writing to the output. So yeah, I think it's a first step in that direction, and it will already improve things a little bit in terms of having a default pattern for declaring commands.
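The pattern being pointed at can be sketched as follows. This mirrors the shape discussed (an options struct holding everything the command needs, injected writers instead of hard-coded stdout, a parameterized base name, and Complete/Validate/Run steps); all names are invented for illustration:

```python
import io
import sys

class VersionOptions:
    """Options for a hypothetical 'version' command: all state the command
    needs, including its output writer, lives here rather than in globals."""

    def __init__(self, out=sys.stdout, base_name="kubectl"):
        self.out = out              # injected writer, not a hard-coded stdout
        self.base_name = base_name  # parameterized instead of the literal "kubectl"
        self.short = False

    def complete(self, args):
        """Derive remaining state from the parsed arguments."""
        self.short = "--short" in args
        return self

    def validate(self):
        """Fail fast on inconsistent or missing options."""
        if self.out is None:
            raise ValueError("no output writer configured")

    def run(self):
        version = "v0.0.0-example"  # placeholder value
        if self.short:
            self.out.write(version + "\n")
        else:
            self.out.write(f"{self.base_name} version: {version}\n")

# A downstream fork can rebrand and redirect output without touching run():
buf = io.StringIO()
VersionOptions(out=buf, base_name="oc").complete(["--short"]).validate()
```

Because the writer and base name are constructor arguments, a test (or a downstream distribution) can swap them without patching the command body, which is exactly the property the hard-coded string currently prevents.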
D: I still have a question. I made a couple of PRs that hit some Bazel BUILD file issues, which I think may be related to isolating kubectl into an independent repo. So, anyone who has the knowledge of how to fix that, I would really appreciate it if you could tell me how to do it. I really don't have that knowledge about the Bazel kind of things, so I'd be really appreciative if someone could help me. Thank you. Yeah.
F: Yeah, I was working on that recently. I haven't touched that exact issue, but I have some experience with Bazel. I suspect maybe it's because it gives a different build path for the package. Also, I guess maybe you have some dependency on the test data, and you must have it included in the BUILD file to make it work. Okay, so...
E: Can I ask a question: was there some reason why so much of kubectl is excluded from linting?