From YouTube: Carvel Community Meeting - July 15, 2021
Description
Carvel Community Meeting - July 15, 2021
Announcing new day/time for our meetings! We now meet every Thursday at 10:30am PT. We'd love for you to join us live!
This meeting is packed full of big announcements and an in-depth demo on kapp-controller. Check out the notes here: https://hackmd.io/F7g3RT2hR3OcIh-Iznk2hw#July-15-2021-Agenda
A: Hi everyone, welcome to this week's edition of the Carvel community meeting. We are now meeting every Thursday at 10:30 a.m. Pacific time. That's part of the announcements we'll get to in a moment, but if you're tuning in from home watching a recording and you've watched our previously recorded meetings: we were meeting every Monday, but now we are meeting every Thursday at 10:30 a.m. Pacific time, so we would love to have you join at our new time.
A: That's just to help us keep track of who's attending and make sure we keep the lines of communication open, so we're able to reach out at a later time and don't lose contact with anybody from our community. Like I said, one of the announcements we have today is that we've been experimenting with some different days and times for the community meetings: we were doing office hours every other Thursday and community meetings every Monday.
A: But now we've decided to move the community meetings to every Thursday at 10:30 a.m. Pacific time, and we're no longer having office hours. The format of the community meetings is also changing, to allow more time for discussion and for anything you may need help with. Previously we were doing more of an agile scrum discussion around backlog items, but now we're going to focus on broader topics within the project roadmap, give the community status updates on those particular items, and then discuss anything regarding the project itself.
A: So hopefully this new format allows for more in-depth, hands-on discussion as well as a more high-level overview of what the project is up to. With that, the next few announcements we have are a couple of releases, one of them being a pretty important release the team has been working on for quite some time: ytt schemas. There's also a little bit of a bug fix within that. So, does anyone from the team want to go over this particular release?
B: I'm happy to talk about it. This is officially the GA of our schema feature for ytt. It basically enables you to provide a declarative way to state what values are accepted by your ytt templates. You can now declare a schema file that says which values are accepted, as well as the accepted types for those values and their structure.
B: For the most part, those who are using data values right now can just put a different annotation on their data values document, which is data/values-schema. There are a few exceptions, like array items, so I definitely recommend checking out the documentation in the release notes on how to use data values with schema; there's also a migration guide on our website.
B: So if you're looking to take advantage of this feature, definitely take a look at the docs, they're updated, and we're also happy to hear any feedback in the Carvel Slack channel as you check this out and adopt it.
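For anyone following along at home, here's a minimal sketch of what a schema document might look like; the field names are invented for illustration, and the exact annotation syntax should be checked against the ytt schema docs:

```yaml
#@data/values-schema
---
#! The default also declares the type: svc_port must be an integer.
svc_port: 80
#! hello_msg must be a string.
hello_msg: stranger
```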
A: Great, thanks, Carrie. And then there's just this little bug fix, do you want to touch on that?
B: Yeah. Also as part of the 0.35.0 release, we did some automation to create the releases in an automated pipeline. As part of that, we realized that some of the generated files for the website were not being created properly. Because of those issues with the release process, the 0.35.0 release has some missing files for running the website locally, so this patch release, 0.35.1, includes all the files needed to also run the website command. There's more information in that issue.
A: Okay, next up we have imgpkg 0.13.
C: Yeah, so I can talk about 0.13 and also 0.14, which was released the day after 0.13. 0.13 brings a performance improvement: when doing an imgpkg pull, in order to determine if the bundle has been relocated, we make fewer calls to the registry. That's not only a performance improvement; it also means you don't need to specify certain credentials anymore just to have read access to the initial registry. And there's a bug fix as well.
C: When pushing or generating a lock file, it was actually specifying the wrong bundle image reference in the output lock file, so that's fixed. And then 0.14.0 essentially just introduces a new flag: if you're using imgpkg against a slow registry, or the registry for some reason is slow returning HTTP headers, you can now set the response-header-timeout flag to accommodate that.
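A rough sketch of how that might look on the command line; the bundle reference is a placeholder, and the exact flag spelling should be checked against the imgpkg release notes:

```shell
# Allow up to 60s for the registry to return HTTP response headers.
imgpkg pull -b registry.example.com/my-bundle:1.0.0 -o ./bundle \
  --registry-response-header-timeout 60s
```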
A: Okay, the next thing we have is kapp-controller. This one's also a really big one that the team has been working on for several months, and we have a demo scheduled in this call to go over it. Who wants to take this release and give an overview of what the community can expect from it?
D: I'm happy to give a brief overview of it. For those who have been following along with kapp-controller since its inception, kapp-controller was mainly focused on the concept of continuous delivery via its App custom resource definition.
D: But in this new release of kapp-controller, what you can expect are new custom resource definitions that have been introduced to represent aspects of package management. For any additional information about the API definitions themselves, or to actually go through the walkthroughs, we have quite a bit covered.
D: But the big idea here is that we're going to have these Kubernetes custom resource definitions that let you encapsulate software as these Package custom resources in a very versioned and stable manner, and we're going to have ways for people to very easily install and configure the underlying software that's been released via these packaging concepts. That eases the process by which you actually install and deploy things to your cluster, again in a versioned, stable manner.
A: Great, thank you so much. Next up, we have status updates regarding the project roadmap.
E: Yeah, I can speak to these. As we kind of just went through, we accomplished a couple of big milestones already this month with the ytt schemas and the kapp-controller package management APIs.
E: Oh, I should give the regular disclaimer that these are all subject to change, but this is the plan for now. We also have some work coming up in ytt allowing you to export ytt schema as OpenAPI schema, as a way to provide better compatibility with tools that use the OpenAPI notation.
E: There's also an initiative that has been somewhat in flight that I wanted to call out here, just to surface that it's ongoing work. Even though the timeline says August, that's mainly to show that we plan to continue to invest in bringing the general project health up and iterating on the documentation.
E: I would call the rest enhancements that are TBD, but since we just released those package management APIs, we expect to learn quite a bit, so we've penciled in some upcoming work there. Then we get further into the future, which I don't think we need to dive into yet, but if you have any questions or are curious, please follow those links to learn more.
A: So next we have the package management API intro, the demo for the kapp-controller release. I will stop sharing my screen.
F: Cool. So up to this point, our discussions about the packaging work have been very hand-wavy and descriptive instead of concrete with actual examples, so Daniel and I threw together a little demo to hopefully solidify everyone's understanding of how it works and what it's doing, and hopefully show some of the value in it.
F: We split it up into two sections: I'll be showing the package author side, so I'll be writing a package, publishing it in a package repository, and putting that into an OCI registry, and then Daniel will take the role of a package consumer who wants to install this package onto their cluster.
F: ...or what specific deployment tool to use. So, all that to say, I'm starting with an app, and it's pretty simple: it's got a service and a deployment, and it just prints out a little hello message when somebody connects to the service. It also has some configurable values: you can configure what port the service will listen on, and you can configure part of the hello message that will be displayed to users.
F: So I'll start by creating the directory structure for that bundle and then moving my manifests over to that directory.
F: Now, if we take a look at the package contents, you can see we've got this config directory where our config is stored. And then, since this is an imgpkg bundle, we also need some of the imgpkg metadata files in the .imgpkg directory, so I'll make that and populate it as well.
F: We want to populate the images.yml with references to the images used by our application config, so we'll run the config directory through kbld, which will pull out any image references present in the deployments or any other resources it finds and output an images lock file into images.yml. Running that, we can then take a look at what that output looks like, and you can see here we have a record that this bundle references the k8s-simple-app Docker image.
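The steps above can be sketched roughly as follows; the directory layout follows what's shown in the demo, while the registry and image names are placeholders:

```shell
# Lay out the bundle: app config plus imgpkg metadata.
mkdir -p package-contents/config package-contents/.imgpkg
cp manifests/*.yml package-contents/config/

# Record the images referenced by the config as an ImagesLock file.
kbld -f package-contents/config/ \
  --imgpkg-lock-output package-contents/.imgpkg/images.yml

# Push the bundle to an OCI registry.
imgpkg push -b registry.example.com/packages/demo-app:1.0.0 \
  -f package-contents/
```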
F: And this is what the package definition looks like: we've got our name, which is just a combination of the package name and which version this specific definition is for, some release notes, and then a values schema. It's important to note that the schema here is for informational purposes only; it's a way for package authors to convey to consumers what can be configured.
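Pieced together, a Package resource along those lines might look like the sketch below. The API group and field names here follow the current kapp-controller packaging docs and may differ slightly from the exact release shown; the package name and registry are placeholders:

```yaml
apiVersion: data.packaging.carvel.dev/v1alpha1
kind: Package
metadata:
  name: demo-app.example.com.1.0.0
spec:
  refName: demo-app.example.com
  version: 1.0.0
  releaseNotes: "Initial release"
  valuesSchema:
    openAPIv3:
      properties:
        hello_msg:
          type: string
          default: stranger
          description: Name used in hello message from app when app is pinged
  template:
    spec:
      fetch:
      - imgpkgBundle:
          image: registry.example.com/packages/demo-app:1.0.0
      template:
      - ytt:
          paths: ["config/"]
      - kbld:
          paths: ["-", ".imgpkg/images.yml"]
      deploy:
      - kapp: {}
```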
F: Then, as we get down to the actual package definition, it instructs kapp-controller where to fetch the configuration from. Here we're using that package contents bundle we just pushed, which stores all of our configuration. We're telling kapp-controller that it then needs to template it using ytt, which will render all of our manifests; run it through kbld, to replace any image references that may have changed throughout the relocation process; and then finally deploy it using kapp. With this definition, users won't have to know about any of this.
F: Now we also need to make sure that the package repository is able to reference all the package contents bundles we previously created, so that the fetch stages of the package are able to succeed. So we'll again create an images lock file, this time inside the package repository, and if we look at it, we see we now have a reference to that demo-app package contents bundle we created at the start of the demo. Then, finally, we can push this repository to an OCI registry.
D: So, as Eli has touched on in the first portion of this demo, what kapp-controller allows us to do is capture not only the configuration but also the associated images at every level of what you're deploying here, right? We resolve the container images in our manifests to their digest form, and the imgpkg bundles for the packages themselves are resolved to digest form as well.
D: So our expectation here is that when I create this package repository, I should see that I have packages available on my cluster. In order to do that, I'm going to come out to my terminal and really quickly run this command with kapp, and then we'll see what we should expect after we run it.
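A PackageRepository along the lines of what's being applied here might look like this sketch; again, the API group follows the current kapp-controller docs, and the bundle reference is a placeholder:

```yaml
apiVersion: packaging.carvel.dev/v1alpha1
kind: PackageRepository
metadata:
  name: demo-repo
  namespace: default
spec:
  fetch:
    imgpkgBundle:
      image: registry.example.com/packages/demo-repo:1.0.0
```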
D: So if I go ahead and click yes here, we get this confirmation message basically saying that the package repository is being created, and if I set up a quick watch here, what we should be able to see is that the package repository is in this state of reconciliation.
D: What that means is that this package repository is a request to kapp-controller to go take everything that's in the imgpkg bundle associated with it and create those packages on my cluster. Now that it's in this "reconcile succeeded" state, we should be able to see that these packages are available to actually be installed.
D: So if I do a Ctrl-C here and then a k get packages, what we see is that I have this package definition that Eli just created, but it's now actually living on my cluster in such a way that it's easy to find. We can see that the version of it is 1.0, and so we can expect this to be the 1.0 version that Eli just created. The next thing we'd want to do, as far as this process of discovering packages goes, is to figure out...
D: ...you know, what are some configurable aspects of this package? So if I just take a quick look at the package definition, what I can see is the values schema and the properties associated with it. From my perspective as a consumer, this shows me the values I can configure, and one of the properties looks interesting to me.
D: It's this hello message property. Its description says "name used in hello message from app when app is pinged," so my expectation is that this package creates an app and that the value specified here is what the response from the application will be, and I see that the default is "stranger." So that looks good to me from the perspective of what I expect when I install this package.
D: After looking over this package definition and seeing what it is I'm going to install, the next step is to select which package I want to install and which version of it, and that can be accomplished through a kapp-controller custom resource called a PackageInstall.
D: This is basically a request made to kapp-controller specifying which package available on the cluster you want, which version of it, and that you want it actually deployed out to a particular namespace on your cluster. You'll also see some commented-out information here, which we'll discuss a little bit later in the demo, but for now what you can focus on is the spec of the PackageInstall, which defines a service account.
D: You'll notice here that this service account name is needed for the package install because we are actually going to create resources on our cluster that are defined in the package itself, which, if you remember back to the beginning of our demo, was the deployment and the service associated with this package. So we're basically defining the appropriate level of RBAC that this package install needs to create things on our cluster.
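For reference, a PackageInstall along those lines might look like the sketch below; the names and the values-secret stanza are placeholders, and the field names follow the current kapp-controller packaging docs:

```yaml
apiVersion: packaging.carvel.dev/v1alpha1
kind: PackageInstall
metadata:
  name: demo-app
  namespace: default
spec:
  # Service account granting the RBAC needed to create the
  # package's Deployment and Service.
  serviceAccountName: demo-app-sa
  packageRef:
    refName: demo-app.example.com
    versionSelection:
      constraints: 1.0.0
  # Optional values, e.g. to override hello_msg:
  # values:
  # - secretRef:
  #     name: demo-app-values
```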
D: So if I come back out to the terminal now, I'll just do a quick kubectl get all here to show we don't have anything in our namespace currently, but after we create our package install with kapp, we should expect to see the deployment and the service associated with our package running.
D: It corresponds to the correct package name and the version that we expect, and now, if I do a kubectl get all again, I have this deployment, I have this pod created, and I also have an associated service. So the last thing we'd maybe want to check is whether that hello message has the default value of "stranger" when we ping it, and the easiest way to do that is by setting up a port forward here and doing a curl to localhost on port 3000.
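That check might look like this; the service name and target port are guesses based on the demo:

```shell
kubectl port-forward service/simple-app 3000:80 &
curl localhost:3000   # responds with the hello message, e.g. "hello, stranger"
```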
D: I get back this message that says "hello, stranger." So, through this request of a package install, we've actually got the software defined by this package definition running on our cluster. But one thing we also mentioned is that the package install is the way to configure these values.
D: We can basically say that we want to change this hello message to "carvel community," so we should expect that when we recreate or update this package install, we'll get a different response back from the application that's been deployed. So, coming back out here, if I now rerun this command to create the package install, you'll see that we have this update that's going to take place for the package install.
D: The secret, I'll go ahead and create now, and we'll just make sure it finishes reconciling and picks up our update. There, it's gone through, and we can see that everything should be running successfully now. So if I come back and do my port forward again, my expectation is that I get this.
D: The "hello, carvel community" message comes back. So one of the things we can do here is not only handle a lot of the aspects of the configuration and how we deploy things out to our cluster; we can also make it really simple to configure these values in a way that's useful for end users, so they don't have to think about all the details of what is being deployed and how it's going to be deployed.
D: So this concludes our demo from a package-author and package-consumption standpoint, learning about these package management APIs that kapp-controller introduces. For anyone who's interested in more information on the work we're doing on kapp-controller, we have this whole website here, and it's got tons of information.
A: Thanks, Daniel, and thanks, Eli. Congrats on the release, and that was a really excellent demo; we really appreciate you providing that today. Does the team have anything they wish to share?
C: I have a question. That was really cool, by the way. My question is for the package author, Eli: I noticed that whenever we have an image reference or a bundle reference, we always use a digest, except in the package CR in the 1.0.0 yaml file.
C: If you can share your screen, I think I saw it in that file: for the bundle that represents the test app assets, we used a tag instead of a digest. I was just wondering, is that something that's recommended? Is it to help with upgrading that application bundle? I'm mostly wondering why we didn't use a digest in that scenario.
F: Yeah, you're right. This is an easier way for folks to develop their stuff and author these things: it's easier for me to just know, oh, I've published this package contents bundle for v1 and I tagged it 1.0.0. Luckily, we run everything through kbld in order to record the image references, so there it gets resolved to a digest.
F: So the only image that will be moved along with this repository is the actual digest reference, and then, on the flip side, when we're installing packages into the cluster from a repository, we also run them through kbld again, which will take any digest references from the repository's images lock and replace them in the package definitions. So, and maybe Daniel could show this, by the time this package gets into the cluster, it actually is a digest reference.
C: That helps. And so, suppose we create a bundle for this, and before we do an imgpkg copy, you know, running it through kbld to get the digest, suppose somebody updates 1.0.0 to point at another digest. Eventually, down the line, we might get a "can't find this image" error, because it didn't get copied across in an imgpkg copy of the bundle.
F: Sure, and so...
D: I think, can we actually, Nancy, can we go back to gallery view real quick? Because I'm a little curious: can I get either a "yes, I know exactly what godoc is," or a "no, this is unfamiliar and we should really talk about it in detail"? Three, two, one: yes, I know what it is, or, I want to hear a lot about it.
D: Okay, so thanks. Then I guess I'll share my screen, if that's okay, and I'll talk about what it is real quick. Maybe something that's familiar to some of you is javadoc, or, well, anyway, other automated documentation systems. What godoc does is auto-generate documents that look like, you know, the ones that you find here.
D: These can all be auto-generated by putting comments in your code, and those comments might be a little bit verbose, so there is a signal-to-noise ratio to consider. Godoc should not be confused with the children's book Go, Dog. Go! (my mom can tell you some embarrassing stories about how long I spent sounding out each word of it), but despite their obvious similarities... So, I made a demo here of exactly what this linter would do.
D: Here you can see this linter has run against the change I made. I made an unused, undocumented exported struct, and you can see that it complains that this exported type should have a comment or be unexported.
D: A separate linter complained that this private field is completely unused. And here's an example where, in this function that I added, I did put the comment. The godoc convention is that whatever symbol you're exporting should also be the first word of the godoc comment, and then, just as I was showing you, that's how you get this output. So, oh shucks, this is not the best example.
D: I could have picked a better one, but anyway: there's a comment above this function that starts with the function name, "...defines a flag with the...", so godoc turns exactly this comment into the documentation. In contrast to javadoc and some other tools, godoc does not require you to also exhaustively document each parameter, or the return value if there is one. That's still a little bit up to you, and I think that can help decrease the noise.
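A minimal sketch of the convention being described; the function here is invented for illustration:

```go
package main

import "fmt"

// Greet returns the hello message shown to a connecting user.
// Per godoc convention, the comment begins with the exported
// symbol's name; parameters are not exhaustively documented.
func Greet(name string) string {
	return fmt.Sprintf("hello, %s", name)
}

func main() {
	fmt.Println(Greet("stranger"))
}
```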
D: So if parameters are obvious, or you hope they're obvious, then you don't have to exhaustively document them. Anyway, hopefully this example provides a little bit of clarity. The other thing I would point out is that you can see it's complaining about these functions and structs that I've added, but it's not complaining about this old code that was already here.
D: Everything that predates this change is grandfathered in, and this would let us ratchet up our commenting and documentation game without having to do some exhaustive pass through everything. So, okay, that's what it is. In conclusion, I guess the sales pitch is: hey, I haven't even met all of you, I'm new, and I would love a code base with better documentation.
E: Yeah, I guess one piece of this that I'm certainly in support of is that it will help ease the ramp-up when onboarding to new code bases. I know we've had comments in the past that it's difficult to know at a high level what might be going on without reading into the actual contents of a function. So generally I'm pro-comments, but I also know that, depending on who you ask, it can be a contentious topic.
B: I have kind of a clarifying question: so this linter will enforce these rules; does it also generate the docs, or is that, like, next steps?
D: So I think one could, right? If we wanted to go back and retroactively decorate many things in one swoop, you know, take your scripting language of choice and have at it. But I think often the insight that a human who knows the system can provide is the valuable piece, in my opinion.
C: I'm a bit curious about thoughts around maybe making that godoc web page more public. I've noticed that we've had questions in the past where people ask, hey, can we just consume your project as a Go library? That just opens up a different kind of consumer API to our users, where today we're telling people, no, use the binary. If we have a documentation page with all the functions publicly documented, it might communicate that, yeah.
E: On that topic, just to respond to Dennis: I think so far we've been recommending that people shell out to the binaries themselves versus using the code as Go libraries; someone please correct me if I'm wrong there. So yeah, that would, I think, be another consideration we'd have to keep in mind if we wanted to go that route and publicly host the godoc documentation.
C: Yeah, it might be like, you know, look, FYI: if you use it as a Go package, we may not be able to support that or any issues you may run into; we don't provide any backwards compatibility, at least on certain code bases for certain functions. So if you use it and depend on it, it might break on you. Somewhere in bold letters, or something that communicates that, I don't know.
B: Well, I definitely see that this would add more resources for people who are new to the code base, and I think that's something we want to optimize for on this project, given that we're actively hiring and we want support from the community. So I think it's a great idea; I'd just like to see everybody on the team on board with it, because it will be something where we have to maintain these comments and make sure they don't get out of date.
D: I'll add briefly that I did run this past Dimitri before I started prototyping, and he gave it a sort of grudging "yeah, it might be time to do this." Similarly, John Ryan, before his vacation, gave it a "yeah, why don't you ask around and see whether the reason we're not doing this already is just because we're not doing it already, or..."
D: Is there actual resistance? And I think the answer I'm hearing is there's not. I'm not hearing anyone say, "oh, I don't want to maintain that," but I am hearing, "let's make sure we're all ready to maintain that." Is that fair? So maybe, as a step forward, I'll merge this into kapp-controller, we'll start there for a few weeks, and I'll put this back on the agenda sometime in August.
E: Yeah, for this one I don't think we need to get into the actual content necessarily. I more so just wanted to bring awareness to this currently drafted PR that's trying to capture our issue triage process, just to make it explicit.
E: So my primary ask is for folks to take a look at the formatted markdown version, which is going to be easier to read, and just take a pass to see if things make sense. There are a number of assumptions being made here, so feel free to question things, things like the maintainers marked on file.
E: There are also some thoughts in there about ways we can categorize things by changing our labeling system in some way. An example would be a category/specific-item label, so it could be priority/0, meaning it's a priority-zero thing, or it could be kind/bug or kind/enhancement.
E: So those are just general ideas, and I wanted this to kick-start a conversation and get folks thinking about this, but to do so publicly, so we can hash out what sort of agreements we want to put in place as a team.
A: One of the comments that came to mind was that maintainers are listed in the maintainers file for each repo. I think for Carvel my suggestion would be to have just one centralized maintainers file in the carvel repo and then link out to that file from within each of the repos, so we don't have to maintain 15 different maintainers files; it's all just one.
E: Agreed. It's largely inspired by the Kubernetes triage process, so it sits somewhere between the Kubernetes triage process and what we were already doing more organically as a team.
A: Well, thanks, Aaron, for putting this together. If you're watching this from home and you're looking this over and want to provide some feedback, please do so within the PR, or you can even find us on Slack, in the Kubernetes Slack workspace, within the #carvel channel.
A: We have two minutes left. Is there anything the team wants to bring up before we depart?
A: Yeah, absolutely, thanks, Erin. So with that, we do hope that you're able to meet with us at our next meeting. Again, we're now meeting every Thursday at 10:30 a.m. Pacific time for our community meetings; we are no longer having office hours. The new format lends itself to more time for those discussion topics and more in-depth items. That's what we're hoping for, and it seems like this first meeting allowed for that. So we encourage you to attend the meetings.
A: If you have feedback, please provide it in any of the different channels we have available. We hope to see you at one of the upcoming meetings, and with that, thank you and have a good day.