From YouTube: Carvel Community Meeting - March 1, 2021
Description
Carvel Community Meeting - March 1, 2021
Topics discussed include: Announcement of office hours, project roadmap for March, and backlogged items.
Details and Agenda here: https://hackmd.io/F7g3RT2hR3OcIh-Iznk2hw
A
All right, welcome to this week's edition of the Carvel community meeting. Just a reminder that it's being recorded, so please adhere to the code of conduct when engaging in this meeting.
A
All of this is listed out in the agenda, and we'll be posting it to our website and the repos with all of that information. The same Zoom link that we use for the Carvel community meeting will also be used for the office hours, and the way it'll work is: the first 30 minutes will be dedicated to any discussion items that we didn't get to in the community meeting, or anything that we need to discuss at that time, and then the last 30 minutes [inaudible].
B
Cool, so what you're looking at is the Carvel roadmap for the next month or two, for March and April. As you can see, I kind of grayed out the April part, because roadmaps for future types of work are still being fleshed out, and as we go we'll learn more about what we'll be planning and delivering in April. I just wanted to call that out.
B
So you see March and April, and then we have some of our tools highlighted on the left: ytt, imgpkg, kapp, kbld, and kapp-controller. I also wanted to provide a disclaimer there: we have a lot of other experimental tools that you might be aware of, such as kwt, the Terraform provider, or vendir. The reason you're not seeing them is not because we don't care about them; it's because of our bandwidth, our capacity.
B
These are some of the things that we've prioritized, but please, if you have needed improvements or defect fixes that you would like for other tools that are not reflected here, please, please communicate that through Kubernetes Slack, or feature-vote on GitHub issues by adding a thumbs-up or other emoji. We will take that input and prioritize accordingly. So don't read this as, "oh..."
B
"...they forgot about all the other tools." So I wanted to call that out first. The way it's represented here, as you can see, some boxes are filled with color and some boxes are just outlined. The boxes that are filled, like this one for example, ytt schema type checking, are actively being developed at the moment; those are delivery work, or development work.
B
Outlined items, by contrast, are things we are still coming up with solutions for: designing them, or even just trying to understand the problem space better. So we're distinguishing that here. Our team does that dual-track agile work, where we're doing both delivery and discovery at the same time.
B
That's why you see those all represented here. We wanted to be transparent about not just what we're building day to day and week by week, but also what we are thinking of building in the future. Some of the items that we are doing discovery work on will become delivery work in the future. So, with that shared: for ytt, we're currently actively working on the type-checking piece. It's currently flagged as experimental.
B
Our goal is to remove that experimental flag and make sure the schema work is solid, so that you can use it in production and not just as an experiment. That's what we're working on. Then, in parallel, the discovery work we're doing is abstracting away data values. What I mean by that is: users of kapp-controller who are managing their packaging shouldn't need to know that, under the hood, kapp-controller is using ytt for schema validation, because technically it could be any other tool that does schema validation. So that's something where we're going to start by trying to understand the problem space more and then come up with a solution.
B
We will also push out those design docs to y'all, so that you can provide feedback if there's any. So that's the discovery work we're going to do, and after that we'll continue with more discovery work on ytt schema: how we can generate OpenAPI-compatible schema from ytt schema is the next track of discovery. Last week we also kicked off another piece of discovery work: docs improvements, adding guides and code examples to ytt. We've heard that ytt is hard to ramp up on.
B
It has a lot of good reference material on what features are available, but the next step, how to use those features, might be lacking. So we heard your feedback, and we want to provide those how-to guides with code examples, so that, ideally, you could just copy-paste some of those and get started, making proofs of concept much easier.
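As a rough sketch of the kind of copy-pasteable guide example being discussed (the file names and values here are hypothetical, not from the meeting), a data values file plus a ytt template might look like:

```yaml
#! values.yml: declares the configurable inputs for the template
#@data/values
---
app_name: simple-app
replicas: 2
```

```yaml
#! config.yml: a template that consumes those data values
#@ load("@ytt:data", "data")
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: #@ data.values.app_name
spec:
  replicas: #@ data.values.replicas
```

Rendering with `ytt -f values.yml -f config.yml` would then produce the resolved Deployment YAML.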
B
So that's what we're working on there. For imgpkg, we're currently actively working on performance improvements; next, we're working on recursive bundles. We've already done the design work, and I think we've already kicked off the first story for recursive bundles, so how to bundle a bundle, that's what we're working on. In parallel, we also have a pair actively engaged in trying to come up with a solution for image signing.
B
I'm going to jump real quick to the kapp-controller lane. We also have packaging API work that is currently in progress. The alpha version is available, so you can check it out from the kapp-controller section of carvel.dev, where you'll be able to access the documentation. We will continue to work on building out the packaging API for kapp-controller.
B
For kapp and kbld, we don't have big chunks of feature work identified the way we do for the other tools, but we will continue to work on small feature improvements and bug fixes along the way. We usually have about two or three tracks of work going at a time, based on the capacity that's available, and based on that capacity we'll be able to pull in prioritized GitHub issues for kapp and kbld.
B
We know there's a lot more coming, a lot more still in the pipeline, so we'll continue to prioritize. Again, if there are specific issues that you want us to prioritize, please let us know. We would like to either get that work started within the team, or help you possibly contribute to those GitHub issues yourself, to involve you more in the collaboration there.
B
So this is the high-level roadmap for Carvel this month. Next month, during the community meeting, I'll share more about what's coming up in April and May. Thanks.
D
Yeah, I'm just wondering: for the rectangles that are colored in, i.e., the ones signifying feature work that can be delivered, does this communicate when it's going to be released in a version? So if someone is really waiting for schema type checking, is this communicating that in mid-March there might be a ytt version with schema type checking available to use?
B
At least, that's what we're aiming for. As with any roadmap, there's room for change, but that is what we're aiming for.
E
Just to clarify that one with a specific example: there's a bunch of features that we're collecting under this one feature flag. So there's an experimental schema-enable flag on ytt, and although a feature might be available, it might still be considered experimental as we work out the full set. So anyway, sorry, I wanted to add that clarification. Thanks, Nancy.
A
Yeah, my question is just about the accessibility of the roadmap: is it on GitHub, or anywhere the community can also view it?
B
We will make it available. I think we'll decide what will be the best way for it to be accessible. It's not a secret.
A
Okay, great, thanks.
C
All right, let's dive into some stories then. We'll start with recursive bundles today. Right now we do have a story that Dennis just picked up this morning, I believe, but we'll talk about a couple of these stories that we haven't discussed yet. So, starting with this one: copy recursive bundles to a tar, and import the tar to a registry.
C
There are a couple of pre-requirements in here, and these are just some examples, I guess, that will be used repeatedly throughout this epic, just to provide some grounding.
C
There's this bundle 1, and it has an images lock file that looks like this: it has a single image it's pointing to, and then a bundle, bundle 2, that it's pointing to. So bundle 1 has a recursive bundle. Then, looking at bundle 2, its images lock just has an image, so you can think of it as a simpler bundle that doesn't recurse. These are just listed as the two images being used.
C
And then, when we run copy from the registry to a tar, we should see that it writes something locally here, to the local tarball, and then this last step is really just to show that if you were to extract it, it would be what you expect: the individual layers here.
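As a hedged sketch of the command being described (the bundle name is made up, and the exact recursive-copy flags were still being settled at the time of this meeting):

```shell
# Copy a bundle from a registry into a local tarball; with recursive
# bundle support, images referenced by nested bundles are included too.
imgpkg copy -b registry.example.com/team/bundle1 --to-tar /tmp/bundle1.tar
```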
E
I have a kind of fundamental question; I'm trying to remember where this landed. How do I know whether a particular image is a bundle or not?
D
Oh, when we push it, I should say. When we pull it, we can check the annotation; I guess it's in the manifest, or on the image config there's a labels section, and we have a key that signifies it's a bundle. But definitely also the directory structure inside: if we have a .imgpkg directory, it's a bundle. I can't remember exactly what it is, but that's also ours.
C
Cool. So the second scenario here is: user imports from the recursive bundle's tar. So this is copying from a tar to a repo.
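This second scenario, importing from the tar into a repository, might look something like this sketch (repository names are hypothetical):

```shell
# Import the tarball produced earlier into a destination registry/repo,
# e.g. on the other side of an air gap.
imgpkg copy --tar /tmp/bundle1.tar --to-repo registry.internal.example.com/team/bundle1
```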
C
I talked with Sojourn, who wrote these stories; he's just not here today, so I'm just kind of representing the track. We talked about this one a little bit, and we took off the experimental recursive flag, because the deduping really should have happened from, sorry, well, this whole scenario, the first scenario. So we felt that we didn't want to introduce any...
E
I think, in terms of helping the user envision what's going on (that the bundle sort of leads its flock of images along), it can be very helpful that the output matches that. My two cents: they don't just look like one among them, but the bundle images kind of lead the rest of them, and whatever was contained in a particular bundle, we could see: oh yeah, it's still in there.
D
We don't have a lot of control over the importing, like the line items, although after the performance story we do have a lot more control in how we do that, because we changed it. It used to be that each line item showed up-to-date progress of what had been uploaded, because we were handling all the uploads ourselves; now everything is uploaded by the library at once, and then we print out the importing messages. So we do have a lot more control.
C
Cool, we'll move on to the third scenario, which is a failure case: not using the experimental flag on a recursive bundle.
E
I feel like it's really friendly to do so, to say "I think what you meant to say was...". From an opportunity-cost perspective it does add just a bit more to the story, and I'm wondering if perhaps we can get sufficient mileage, especially for something where there are no parameters for the keyword arg; it's just an additional flag, and we'd say "include that": I hit up-arrow, add in that additional flag, and hit enter.
E
Maybe that's not too terribly bad, but in other cases in our tools it might be helpful, like: "oh yeah, this might take a little bit to calculate, let me just show you what it probably would be"; that's even greater value for the user. So I'm proposing perhaps that we could consider, for this part, just mentioning the flag: "make sure you include this if this is what you want to do with your previous command."
G
The question I have is: in the story that we talked about last week, which was pushing and pulling recursive bundles, we used the flag --recursive. I wonder, is there a reason why we're not using the same flag here as in the previous story?
E
No, no, I think these were meant to be two different things. The -r genuinely means: do this recursively. So unless we've decided to change what the default behavior is, I suspect it's just a minor omission: we needed to include the -r (or --recursive) to make sure that it happens recursively, and this other one, the experimental flag, is a feature flag, i.e., whether or not we even turn this feature on at all.
D
Yeah, so back to Carrie's question then: should the copy command also have that --recursive flag? If you don't have it, what does it mean? I guess you don't recurse down through the bundles and you just miss some of the layers, which seems like not what the user intended most of the time. So maybe it's omitted intentionally: you always want to recurse through bundles when you copy to a tar.
E
Maybe for today, in terms of having clarity on the story: what if we stipulate that the experimental flag was on purpose, and that it's meant to enable or disable anything having to do with recursive bundles, period? There might be hints elsewhere about whether or not there needs to be a flag, say, if you're copying to a tar or not, but the overall functionality here, creating a tarball from a recursive bundle, needs to be able to happen, quote-unquote, somehow.
E
And we're not trying to do the truth table of: okay, did you include the -r and the experimental, the -r without the experimental, and then the reverse. I don't think that's the intent here.
D
I just wanted to make sure I understood the last sentence in this thing, which was something about a note making sure layers get deduped, if you scroll down... yeah, that note. Is that just coming for free, or is it something we need to do? Why do you think we added that note here?
C
I guess... yeah, I'm trying to remember my conversation from a while back, and I think this should have already been done as part of the first push story, but I think the note is just to ensure it: let's just double-check again. I think that's what that was about.
C
Cool, let's see, we'll keep going then. I think with the three-pointer in flight and then a two after that, we're probably going to be okay for the week. Does that sound right with folks? Yeah, cool.
E
Okay, so if we remember, we have schemas that are defined in what we call our root library or base library (that base directory where your ytt files are at), and you can include a schema in that as well. Now, what we want to do is turn our eyes toward a more advanced usage, where you're able to collect some templates and data values and stick them into what's called a private library, which sort of sits there.
E
It also means that those things operate as, as they're called, libraries: they can get vendored in, pulled in, and incorporated into multiple ytt invocations.
E
So we want to make sure that (a) this is going to work for that usage, and (b) there are proper interactions, or lack of interactions, between the root library and the private library.
E
So here we have the first scenario, and this is where we do have a private library, and it does have a schema defined in it. Here our private library is named fui, and it has a schema file describing the values that are expected by its template. And then, when we invoke... sorry, so here are the actual contents of those files: there's the fui schema and the fui template. You can see foo is defined, and it's going to be an integer.
E
We use that there, and then in our root library we happen to have a values file here... let's see, sorry. So in our root library, all we're doing is loading up that fui library (library.get("fui")) and evaluating it. So now we've got a doc set that's been fully rendered; now it's just YAML, and we're going to say: hey, take this document start and replace it with the contents of that evaluated library.
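A minimal sketch of the root-library file being described, using ytt's library module (the name fui follows the example; by convention the private library itself would live under a `_ytt_lib/fui/` directory):

```yaml
#! root.yml: evaluate the private library and splice its output in here
#@ load("@ytt:template", "template")
#@ load("@ytt:library", "library")

#@ fui = library.get("fui")
--- #@ template.replace(fui.eval())
```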
E
That's what this line is doing; in effect, we're just invoking the private library. Then the other file sitting here in the root is this tricky little data values file, and it's tricky because it's a data values file, but it's targeted declaratively at the private library. So here, this is an attempt to change the value of foo from the default 42.
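The "tricky little data values file" being described might look like this sketch: a data values document annotated with a library reference so that it targets the private library rather than the root (the value shown is illustrative):

```yaml
#! values.yml: data values aimed at the private library, not the root
#@library/ref "@fui"
#@data/values
---
foo: 100
```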
E
It doesn't, because this schema file is currently not scanned as a piece of schema for the outer library, and so it actually gets interpreted as, I think, just a template. I don't know; it's awkward, it's not right, it's not right!
E
So no, there is work that needs to be done to ensure that those files are handled when they're being fetched in the same way. Note that there's prior art here about how data values files are being handled; when we think about that same code path for schema, there's probably some short-circuiting somewhere where we... well.
E
So, instead of that values file at the root of our library having foo as an integer, it's actually a string, which is the wrong type; but note we're still targeting fui with that value, and so when we go to run that pile of ytt stuff, we should actually get the corresponding schema error.
D
This was mentioned last time, but the error message says values.yaml; are we going to include the path to the private library, to differentiate the private library's values? Actually, it's not given the path to values.yaml, I should say, if you have multiple values files... or maybe I'm talking about schema.yaml. Yeah, maybe.
E
The net of all that is: that would be a great improvement, but it would be out of scope for this story. So we want to do that, yes, but not as part of this work.
E
Instead, what we've done is we've actually used a programmatic approach to do the same thing. So first we defined a function (it's called dvs, data values from root), and then we define this variable, and it's foo from the root library. And, I guess to keep things interesting, I'm going to jump down to the schema here: to keep things interesting, we decided to change the type to a string. So foo is now a string, as defined by the schema. Okay.
E
So, coming back to that root YAML, it's the same business, library.get("fui"), and here's where the interesting part is: where above we just immediately evaluated, now we're calling it through this fluent interface, with_data_values, and what that does is allow you to set the data values programmatically. This is effectively the exact same thing as what happened above from the YAML that you'd set there.
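The programmatic variant being described might be sketched like this, using the library module's fluent interface (the function and value names follow the example loosely and are illustrative):

```yaml
#! root.yml: pass data values to the private library programmatically
#@ load("@ytt:template", "template")
#@ load("@ytt:library", "library")

#@ def dvs():
foo: "from the root library"
#@ end

--- #@ template.replace(library.get("fui").with_data_values(dvs()).eval())
```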
E
So here's your schema (it's apparently a string this time), and then the template that's going to be evaluated just displays foo, real simple, and when you run it you should get your nice output. Interesting. One thing: I guess that's legit YAML? I don't know if that's exactly right, but just imagine it is.
E
Okay, I'm seeing nodding heads; feel free to interrupt. Let's keep going. All right, don't worry, this is it: scenario four is the same thing. As you guessed, it's identical to what's above, except we put in lucky number 13 instead of a string, and so we should be getting the opposite complaint: that we expected a string, as defined by the schema, but we actually got an integer instead.
E
No, dude, that was copypasta. So... that's a good question; I can't remember how we render that out. We will have... this is called, in the code, an "associated name" (fancy), so we will have some value there for that. I can't remember what it's called when you're getting it from here; it might actually end up being from the root.yaml. That might be where that comes from.
E
Let's see, let me think about the things that I think are in play. So there are these two different code paths, the declarative versus the programmatic, and within those there's some part of scanning for schema that, I mean, we'll hopefully be able to just pull in; maybe there's a way in which that folds in. So that was one source.
E
I want to say it's the import: part of that enumeration is probably generalizing how we're plucking and setting aside schema values separate from other documents, so I just think there might be some additional refactoring work there. There's just enough uncertainty with the meat of all those different things in my head that I was like: all right, well, if I had a four I might throw a four, but we're doing Fibonacci.
C
So we got three threes and a five; how about we go with the majority of threes? You okay coming down, John? But we have this recorded, so we can go back in time.
C
Cool, let's see if we can fit in one more story, a kapp story, so hopefully you're not getting too much whiplash with all this context switching. John, would you be able to speak to this one as well?
E
Okay, so boy, how timely was it that Garrett gave us our boxes and lines on kapp, because now we can collectively talk about the DAG, the diff DAG. One way of complementing this description is: hey, there's this resource that's in the cluster, and the most obvious one is a CRD, where it's not in the pile of manifests; it got created by something else in the middle of the deployment, usually a controller.
E
So I want to be able to tell kapp that it will exist, but how to do that? That's what this explore is about. And instead of just focusing on CRDs, let's talk about any resource that we might wait on, that we know is going to be generated: that the author knows is generated by some machinery inside the cluster.
E
So this gives us the ability to sort of point at a thing that we don't have the definition for, that somebody else will generate for us (nice), and then wait on that thing at the right time.
D
So I'm just going to assume the work... this is clearly something that we're lacking in kapp. We have change groups, we have those other kinds of ways of grouping resources and having some sort of dependency using them, potentially, but that just doesn't cut it for these cases at least, and we want to explore a solution, and we have all the links that describe the problems linked.
E
Yeah, so the need was expressed in terms of CRDs, so we know for sure that that is a genuine need; whatever feature we build will meet that need, and what we want to do is think about it more generally. Although our solution will solve a specific problem, there's real value in thinking about a generic mechanism, a more general mechanism, that will satisfy this. So today, in order to affect a change group, I need to have a resource on which I am dropping that annotation; but if it's being created by something else, then what?
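For reference, kapp's existing change-group mechanism relies on annotations placed on a resource that is actually present in the manifests, along the lines of this sketch (the group names are made up):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
  annotations:
    kapp.k14s.io/change-group: "apps.example.com/db"
    kapp.k14s.io/change-rule: "upsert after upserting apps.example.com/crds"
```

This is exactly what breaks down when the resource is generated inside the cluster and never appears in the manifests: there is nothing to put the annotation on.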
E
This is an attempt to explore what that would be... so there are all kinds of ways to approach this, but, to make it a little bit concrete: what if we had some way of describing a resource, like it's another file, but we've marked it in some way so that it's not a resource that gets applied?
E
There are challenges with that. For example, we've done work in the past where we said: well, we want to be able to take the exact same YAML you could pipe into kapp and use kubectl to just apply it, and allow it to go through without any hairy challenges.
E
So we've wrapped kapp-controller configuration in a ConfigMap before, to sort of allow for that to happen. That same spirit still exists: we want to be able to co-exist with others.
E
There's a suggestion of outcomes described here: working on a proposal, getting conversation going. There are people here that we could talk to who've reported issues; talk to folks who have mentioned it, go back and say: hey, you know, there's an idea, how does this feel, how's this looking, that kind of thing, and then synthesize that.
D
It breaks... cool. And so, tying this back to the round of feedback and reviews to present the proposal, help me tie that together again: are we thinking of asking, say, the Gatekeeper folks what they think about a proposal, or is this just making sure that our thing works with Gatekeeper?
E
Part of it. The other part is that whoever reported issue 52, which was sort of the spark of this, and those who have responded in kind on that issue (and I think we might have some references to Slack conversations), any of those folks are great people to come back to, to say: hey, we have this idea, we want to make sure that it's going to meet your needs before we commit implementation to it. Got it; great questions.
C
Folks comfortable pointing? Just to see where people are landing right now.
E
Well, one thing that can be helpful here with the time box is sort of giving us a sense of, or putting some back pressure on, getting ensnared in lots of detail, while having enough space to really think this through and poke and explore these things. Also note that many of us are aspiring to be reviewers in this code base, so there's some work to load up on the design of what's there as well; I'd want to incorporate that into the time box.
E
And for anything like this, we'll want to lean on folks who are approvers of the tool, or, you know, my job is to help support these things too. So it's not about going off in a corner and sliding pizza boxes under the door until an answer comes out, but really having that space, having time to think things through, but lean on, lean on your teammates.
C
All right, we're a couple of minutes over, but we got through a few stories. Anything else from others before Nancy closes out?
A
Okay, so thank you, everyone, for joining the community meeting today. We had some really great topics to go over. It looks like we didn't have any discussion items that we didn't have time for, but we will have another community meeting on Monday, which is March 8th, and we'll also kick off our office hours next week, on March 11th.
A
So if you are watching this from home and there's something that you would like to bring to the team to discuss regarding Carvel, we're happy to help you in any way, whether that's how to get started or whether there's something you're blocked on implementing on your own. Please, please join those office hours, join the community meetings; we would love to have you participate. All right, with that, we'll see you next time.