From YouTube: Kubernetes SIG CLI 20210310
A
Channeling my Maciej.
Good morning, good evening, good afternoon, depending on where you are. Thank you for joining us for the March 10th SIG CLI bi-monthly meeting. My name is Eddie Zaneski and I'll be your host. Today we have a couple of quick announcements to get started with. Code freeze was yesterday, so hopefully we got everything in that we needed to. We were scrambling a bit last minute to approve some PRs.
A
We'll take a look at the contributor survey. This is used to provide feedback to SIG ContribEx, to see how the health of the contributors and the project is going, so take some time to fill it out. It shouldn't take you that long.
A
And we've definitely seen change come from that survey. So, we have work in progress for the annual report. Maciej kicked this off before he went on vacation, so this is just kind of a quick update. It's looking for feedback from contributors and folks in the community. I think I have to go through it as well; I still haven't looked at it, but give it a read.
A
This is our chance to share everything with the rest of the community, and I believe the CNCF. And there's, I think, a meta issue. There we go, so this is actually what we need out of this.
B
Donnie and Katrina from Apple, and Phil, have contributed a lot to kyaml as well. So, yep, anyway, let's not celebrate until it's actually shipped and not rolled back.
A
Okay, going once, going twice. Feel free to speak up later if you want. All right, and on that note, the first topic we have on the agenda is that we want to work towards building a more diverse SIG and team, and especially leadership team. So I'll toss it over to Sean to get us kicked off discussing some brainstorming we can do there.
C
Yeah, I was just hoping to take a few minutes here at the beginning of our meeting to query you guys, to basically brainstorm and ask: are there any things that we can undertake in order to make our team more diverse and inclusive?
C
So we recognized, and it was kind of a small shock: we set up and scheduled a SIG CLI presentation for KubeCon EU, and, you know, once we...
C
...did it, you look at the schedule and there are four guys who are pretty much all the same color, and it kind of really brought into relief the idea that, whatever our diversity and inclusion goals are, it doesn't appear that we are approaching them. And so we wanted to query you guys, query the rest of the team.
C
What is it that we can do, from the small to the large, from the cultural to maybe even just small little programs? I put mentorships on there. I know that the CNCF has mentorships, and years ago we had been included and involved ourselves in that.
C
But what is it? Does anybody have any ideas, again, from the small and cultural, maybe what we do at the beginning of every meeting? Does anybody have any ideas on how we could make our team more diverse and inclusive?
D
I think mentorship is a great angle, and I'd be curious to know more about what that program has looked like in the past, and for the other SIGs that you're mentioning. I know it can be tough to get involved, especially if you don't have anybody else, say, in your company, or any more direct contacts with projects that are ongoing, and there can be a fairly substantial barrier to figuring out how to help. So mentorship could be an easy way to get people feeling more a part of the community.
C
I think our previous mentorship was through Outreachy, which someone has put on the doc.
C
And I think last time it was spearheaded by Phil, who's not here today. So I'll put that on the doc as well: we should talk to Phil about the previous mentorship that he spearheaded. Talk to Paris, that's a good idea.
A
I forgot how they described it, but it was basically a program designed to help people move from, like, a reviewer role, so to speak, to a kind of SIG chair or lead role. It was a very initial conversation we had; I think we're going to discuss it more in depth. But it's definitely something folks are realizing: we need to start nurturing and training, especially underrepresented groups, for the next generation of SIG leads. So yeah, there's definitely pressure from other places as well; everyone wants to.
D
Yeah, I guess that's a really good point too: there are a couple of different levels of mentorship that would need to happen if our target is to get people into leadership, because you have to have more people involved in the first place to even develop candidates for that, and then you have to have a program to help them get up to speed to the level that is required for leadership.
A
Yeah, the contributor summit's been a great place in the past. That's actually how I got involved with the SIG in the first place, and we had the Meet the Contributors sessions there. I brought it up yesterday as well, and it sounds like it hasn't been as successful digitally; not enough mentors were showing up.
A
So maybe we can push for that again, and folks can volunteer. Folks don't even know that these programs are going on, so I think it's a marketing issue as well as a participation issue.
A
I know that the EU one, no, the NA one, didn't happen. I know it was cancelled because it was digital, obviously. I don't know if they're doing one for EU, so I can check in on that. At the end of the year, KubeCon North America in Los Angeles is supposed to be in person as of now, so I don't know if that's actually going to happen or not.
C
So for this topic, I was just hoping to kick it off. I know that we can add whatever items to our arsenal of how we intend to make the team more diverse and inclusive. Anyway, I just wanted to kick it off.
C
Put the idea in people's minds, and brainstorm a few ideas on where to start. And yeah, again, mostly I just kind of wanted to put the idea in people's minds that this is something that, you know, not only should we pay lip service to, but that we're actually going to act on.
A
Yeah, I think we talked about carving out better good first issues, so that's a good place to start getting some new contributors too. I've thrown this statistic around for years and I no longer have a source for it, but if you respond as a maintainer to someone's issue or pull request within 48 hours, they're 90 percent likely to contribute again. So I think that definitely puts a little burden on us as maintainers and contributors to kind of help.
B
Yeah, I'm looking at the contributor list for kustomize now: 263 people, and of course a contributor is anybody who's even just filed one issue. But, you know, responding on PRs, suggesting how to write unit tests, how to make comments better, all that stuff is pretty easy to do and simple. In terms of diversity, it's really hard to tell if you're addressing any sort of diversity issue; you're certainly increasing the number of contributors, which should automatically pull in diversity.
A
Thank you for putting this on the agenda, Sean, for sure. It's something we need to start putting a bunch of effort into.
C
So maybe it's something we'll revisit, you know, for a couple of minutes at the beginning of our meetings, just to make sure that I don't drop the ball, that we don't.
A
Yeah, and I'll dig more into the mentorships, and we'll see where the SIG leads discussions go as well. I'll share any updates I get.
D
Yeah, so this is an issue on kubectl, and I would like to start work on one of the options that I proposed. None of this is a huge change, but one of them, I think probably the best option in my opinion, would involve exposing a flag on additional commands that it doesn't exist on today. So it's something I wanted to bring up with all of you.
D
That doesn't necessarily need to be the case, but it is the case today. I have another issue that I'm following up on, on the server side, about why that is the case. But on the client side, I think we could help with this, because what happens, or what can happen, is that some of our commands end up... well, they don't expose any way for the user to configure this.
D
So they make a query with no limit, which gets propagated to a query with no limit on etcd. If you have a sufficient volume of a given type of resource (Pod is the most obvious example of where this can happen, but theoretically it could be any resource in etcd), then you end up with gRPC errors, because you're trying to return an obscene volume of data back from etcd all at once, even though you're going to filter it and may only need a fraction of it.
D
So you may only need two records when it comes down to it. The two approaches that I think are most reasonable: option one would be to just add a default limit under the hood. Get already has the chunk-size parameter, which enables folks who are using kubectl get to configure it, but there are commands, notably drain and describe, that make list requests under the hood and don't expose a chunk-size option. So option one is we just add that limit under the hood and don't expose the flag.
D
It will probably work for most people, but it could increase latency in some cases. And option two is to actually take that chunk-size flag (whether or not we continue to call it chunk-size is another question), but just, more generally, take that chunk-size flag and add it to those commands that are actually making list calls.
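For reference, a minimal client-go sketch of the kind of paginated list being discussed; the kubeconfig path, the all-namespaces scope, and the 500-item page size are illustrative assumptions, not what any particular command ships with.

    // Sketch: paginated pod listing, the mechanism behind a chunk-size style flag.
    package main

    import (
        "context"
        "fmt"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        config, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
        if err != nil {
            panic(err)
        }
        clientset := kubernetes.NewForConfigOrDie(config)

        // Ask for at most 500 items per request instead of one unbounded list;
        // the continue token resumes where the previous page left off.
        opts := metav1.ListOptions{Limit: 500}
        total := 0
        for {
            pods, err := clientset.CoreV1().Pods(metav1.NamespaceAll).List(context.TODO(), opts)
            if err != nil {
                panic(err)
            }
            total += len(pods.Items)
            if pods.Continue == "" {
                break
            }
            opts.Continue = pods.Continue
        }
        fmt.Println("pods seen:", total)
    }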
C
Hi Katrina, I was hoping to ask a question, just because I don't know enough about drain. If you have a second, maybe we could dig in for a couple of seconds on how the chunk size affects the drain. It looks like you mentioned that there's a pod listing that happens during the drain, and that's what you're trying to target. It sounds like nodes that have a lot of pods are where this might help. Is that correct?
D
Actually, the interesting thing is that it does not matter how many pods are on the node itself, because the request that we're making is to list pods with a selector, and that means that we're only ever going to get a small number of pods returned. Theoretically, the number of pods per node can't be, you know, gigabytes. Well, maybe it can be; that would be pretty crazy.
D
The problem is that, even though ultimately there may be a small number of pods, the query that ends up getting made to etcd to retrieve the list, which then gets filtered in the API server down to what's actually on the node, is querying for every pod that exists in the database.
D
So the request on our side is a get with a selector, but that's still causing a full load from etcd.
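To make that concrete, a rough sketch (reusing the clientset and imports from the sketch above) of the sort of list drain issues for a node; the spec.nodeName field selector matches the behavior described here, while the node name and page size are placeholders.

    // Rough sketch: list the pods on one node. The field selector narrows the
    // response server-side, but per the discussion above, without a limit the
    // API server still reads every pod out of etcd before filtering.
    func podsOnNode(ctx context.Context, cs *kubernetes.Clientset, nodeName string) (int, error) {
        pods, err := cs.CoreV1().Pods(metav1.NamespaceAll).List(ctx, metav1.ListOptions{
            FieldSelector: "spec.nodeName=" + nodeName, // e.g. "worker-1"
            Limit:         500,                         // the proposed fix: page the read
        })
        if err != nil {
            return 0, err
        }
        return len(pods.Items), nil
    }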
C
Yeah, we're smashing etcd with that query, okay. Is that right, at a high level?
C
So my understanding of drain is that there's, I think, a cordon initially, where the scheduler will stop scheduling pods, actually workloads, onto a particular node.
C
Once that has happened for a while, then the control plane is going to try to kick everything off of that node, and yeah. It's actually...
D
Sorry, it might just be worth pointing out: I opened the issue about drain specifically, and then I dug in a bit more, and it seems like this affects not only drain but definitely describe node, and possibly also cluster-info dump and top pod, based on just looking through the code. But I've been able to confirm it with drain and describe node. So it's anything that makes that kind of query and then filters in memory rather than actually setting a limit.
C
Cool, appreciate the description, just so I could have a better idea of what's going on. Thanks.
A
I tagged you on another issue just now; if you could take a look at that at some point, it might be tangentially related. I just try to connect the dots on all these things, but we did have issues with streaming results back with chunk size and request limits. So it could be not related at all, but I want to make sure it's on your radar.
D
Okay, I'll take a quick look at that. Any objections to exposing that flag on describe and drain, at minimum?
D
Okay, I'll work on a PR for that, then. Thank you.
A
Okay, we'll start with this. A lot of what I usually focus on is developer experience, and so I'm sure you've all had to write YAML manifests from scratch, and it's kind of a pain, especially when you don't know all the parameters, and especially for, like, a beginner jumping in. You either start with an empty file, or copy and paste an example or another, like, internal resource.
A
Okay, cool. So the language server basically does the autocomplete for YAML manifests. It works with just about any editor that's out there, but it needs a JSON schema, which is kind of like an OpenAPI schema but with some mixed-up different keywords and stuff. So anyway, right now you can't really autocomplete your cluster-aware, like, CRD resources, for example, and I built a kubectl plugin that lets you take your OpenAPI schema, turn that into a JSON schema, and print it to standard out.
A
I don't use VS Code usually, okay, so...
A
I don't know what to name the file... there we go, you know, manifest.yaml. And so the YAML language server from Red Hat, which most people are using under their editor somewhere, has support for defining a schema inline. You can also configure this somewhere else in your config files. And so you basically put in here "yaml-language-server" and you say "schema equals", and then you give it a URL path. So this is a file path or an HTTP path.
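For reference, the inline association being typed in the demo is yaml-language-server's modeline comment at the top of the file; the schema path below is a placeholder for wherever the generated JSON schema was written, and an http(s) URL works the same way.

    # yaml-language-server: $schema=./k8s-schema.json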
A
I can actually pull in that cluster-aware CRD autocompletion. So I can toss on here, you know, certificate authority, right, and it basically knows all the values from the schema.
A
It has support... you know, it's basically the completion you're expecting, with your enums. ACME, oh, here it is, okay, so it's ACME. All right, so you can toss in all your values, right? So you get the idea of where this is going. And so the thing is that this relies on that JSON schema, and right now there's no good way for a user to get that schema. This obviously isn't the ideal experience, right, a user having to, like, print out the schema.
A
I thought about potentially, like, wrapping the YAML language server with this functionality to grab it from your current cluster, right? Because in an ideal world, a developer should just be able to open up a manifest, and it should talk to their current-context cluster and figure out how to give them completion there. So that's just a quick demo. I'd love some thoughts or feedback on this. I'll put a link in the chat to the GitHub repo, but I don't really know where to go from here other than playing with it and getting feedback.
A
So right now my thought is it would go off your current context. So if you swapped your current context in your kubeconfig, you know, it would be cluster-aware.
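Concretely, the context switch described here is the standard kubectl config subcommands; "staging" is a placeholder context name.

    kubectl config current-context      # the cluster the tooling would read from
    kubectl config use-context staging  # switching would trigger a schema refresh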
B
...kubectl. And I thought I'd just mention that the biggest difficulty in doing that was the shocking number of dependencies, which became visible because kubectl, and Kubernetes in general, vendors everything. They vendor all the deps, so when you submit a PR, the dependencies you're bringing in are very visible; that's a good practice. Kustomize doesn't do that, but I think we're going to start doing that. So, for a couple of examples of where things went wrong: OpenAPI dependencies were brought in, and it turns out...
B
There are transitive dependencies there on MongoDB. We had a dependency on HashiCorp's go-getter functionality, which is really a nice way to download resources and kustomization files from various sources like S3, GitHub, Azure storage, and GCP buckets, and naturally that pulled in a boatload of dependencies too. The dependencies are just really out of control for that particular module.
B
So on the kustomize side: kustomize in the future is going to continue to go forward as a standalone CLI, but it's also going to be a library, and we just generally need to be much more careful about what dependencies get pulled into that library, as opposed to what gets pulled into the CLI. The CLI can be a little bit more relaxed.
B
So if anybody has any ideas or tools to help with that: I'd really like to see the transitive dependencies without having to vendor, because vendoring is going to make a big mess, because you're copying all the code into the repository. But that's what we're going to do if we can't find a better solution.
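One possibility with the standard Go tooling, which surfaces transitive dependencies without a vendor directory; go-getter is used here only as an example module path.

    go list -m all                                # every module in the final build
    go mod graph                                  # the full module requirement graph
    go mod why -m github.com/hashicorp/go-getter  # shortest chain pulling a module in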
B
So I got rid of HashiCorp's go-getter, which then forced a major version change, of course, in the kustomize library, because that's a big dependency to drop; that's going to impact people. I want to bring it back in, but through the mechanism I just described, where the API does not depend on it but the CLI does. Starlark dependencies: the dependency on the Starlark interpreter is still in there, but we're going to get rid of that in a patch.
B
That's coming down the pipeline. So yeah, some things we kept out, but for some things it was just too late and they went in. It was better to introduce a couple of dependencies that don't really matter and then get them out in patch PRs than to, like, miss the deadline. And then there was other... it's just a trade-off.
B
We got rid of a lot of bad dependencies by introducing a few new bad dependencies, but we can get rid of those new ones now very easily: the PRs now to kubectl will just be incrementing a patch number in a go.mod file, as opposed to, you know, crazy code reviews that have to be reviewed very carefully.
C
So, for those not that familiar with kustomize, Jeff: the biggest change that we've introduced, or at least one of the biggest changes, is that kustomize is no longer depending on a specific version of the API. Is that correct?
B
Right, right. You know, kustomize was a big experiment in the idea of adding functionality and solving some long-standing issues that were in kubectl. Its origins were in issues that were in kubectl, but it was developed outside of kubectl to explore this notion of rapid development in a freestanding repo.
B
So we now have this new module called kyaml, which takes the YAML libraries that everybody uses, that kubectl depends on and other people depend on (it's sort of the Go standard for managing YAML), and adds Kubernetes awareness to them. This new library, our new module, is called kyaml. So things like ObjectMeta and TypeMeta are now in those libraries, so you can do all sorts of cool things in the Kubernetes world in this sort of cross...
B
It's the Venn diagram of Kubernetes and YAML. So that library is now maintained as part of the kustomize repository, and I think more functionality is going to shift from kustomize down into that library, until the kustomize layer is really a thin thing and it kind of fades into kubectl and fades into kyaml.
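A small sketch of the flavor of API being described, assuming the kyaml yaml package's Parse and metadata accessors behave as in the kustomize releases of that era.

    package main

    import (
        "fmt"

        "sigs.k8s.io/kustomize/kyaml/yaml"
    )

    func main() {
        // An RNode is a YAML node that also understands Kubernetes metadata:
        // the Venn diagram of Kubernetes and YAML mentioned above.
        node, err := yaml.Parse(`
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: demo
    `)
        if err != nil {
            panic(err)
        }
        // Kubernetes-aware accessors, no typed core/v1 structs required.
        fmt.Println(node.GetKind(), node.GetName()) // prints: ConfigMap demo
    }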
C
Sounds kind of like an unstructured object, as opposed to, like, a core v1 Pod.
E
I think so, yeah. Yeah, I think you answered just before you dropped off, but only partially. I think you said that you would... what would be the mechanics? Would there be something which automatically detects that context switch, and you would regenerate the schema? Or, like, what would be the layering?
A
So, in an ideal world... I'd hate to wrap the language server, the YAML one from Red Hat, but that's the only way I see it being, like, a really good experience. And it would just go off of your current cluster. So if you switch your current context in your kubeconfig, then it would, you know, grab the schema and regenerate that.
D
Maybe a somewhat related problem is that, with the kustomize configuration functions, you can write what look a lot like client-side CRs, and you can write schemas for those.
D
I know I've worked with some of that, so those wouldn't be available on the server. But it would be cool if I could have, like, a directory that would union some types that I provided schemas for, from my client side, with what's actually available on the server, and have that get picked up.
B
Yes, there was... I'm not sure if Natasha is on the call, but she's adding more OpenAPI foo to the kustomize libraries.
A
Yeah, and so right now it just prints out the JSON schema, right? So I just have it write schema JSON, but it could easily, you know, take OpenAPI and just write out the OpenAPI the same way. The benefit I see there is that otherwise you have to, you know, run a proxy and then curl your endpoint, and it kind of just automates all that for you.
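The manual path that this automates looks roughly like the following; the port and the output file name are arbitrary.

    kubectl proxy --port=8001 &
    curl http://localhost:8001/openapi/v2 > schema.json  # the cluster's full OpenAPI document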
A
Okay, any other topics for...