From YouTube: Carvel Community Meeting - January 27, 2021
Meeting information and notes can be found here: https://carvel.dev/community/
A: Cool, so everyone can see the agenda on the shared screen, right? Great. So welcome, everyone, to the Carvel community meeting. Today is January 27th, and we're excited to meet again. You'll see the agenda that we have up here, and you'll also see the maintainers joining today. So if you're a maintainer, wave your hands so we can see you. Good to see all your beautiful faces.
A: So today we're going to start with announcements and what we're working on, and then we have a few topics that we want to talk about today. As always, you can find the recordings on YouTube (or, if you're watching, you're on YouTube already). So, announcements. Big announcement: please note that we're making changes to this community meeting. We're no longer going to meet every other Wednesday; we're going to meet every Monday.
A: So you'll get to see us more frequently: 11:30 Pacific time (Los Angeles, San Francisco, Portland, Seattle time), every Monday. You'll get an updated invite, so be on the lookout for it. You'll still be able to connect to the same Zoom link.
A: So don't worry about that, and see you next Monday. Another big announcement is that we are going to move to GitHub Projects. So far the maintainers have been using a different tool, and we're moving to GitHub Projects so that folks in the community have greater visibility into what we're working on, and so that you can also chime in and contribute together. Starting next week, you'll see all the work we're doing reflected on the Carvel GitHub project.
A: Another exciting thing: we want to try out GitHub issue voting, or what we're calling feature voting. We want to hear from you about what you would want us to work on next.
A: So if you see a particular issue and you're signed into GitHub, please let us know which issues you would like us to work on, or which you would like to contribute to, by giving them a thumbs up. We're hoping to get this experiment going and see whether we can get your thoughts reflected in how we prioritize. We'll share that again next week, and also let you know exactly where you can do that voting.
A: We also have a few releases. imgpkg version 0.3: we're introducing the concept of imgpkg bundles, and you can read more about that here in this link. We're also now supporting the copy command, where you can relocate bundles, so read more there. And there are some bug fixes along with it; you can look into each GitHub issue.
A: So those are the big announcements. Carrie and Dennis will also share what we're currently working on. Carrie, do you want to kick us off on what we're working on with ytt?
B: Sure, yeah, just a quick update on what we've been working on lately. For one of our tools, the YAML templating tool ytt, we're working on a feature adding schema support. This will basically be a way to specify exactly what inputs you expect into your data values and your templates in ytt, as part of the schemas work.
B: We have been focusing on improving our error messages around schemas. We added some features to make the error messages consistent across schemas, and also to add a little bit of information about where an error is happening in your schema and what we expected. Hopefully that information, plus a few hints we have, helps you get going with debugging schemas. Another thing we've been working on is a schema file as data values.
B: So, basically, the way that ytt works right now is you can add your inputs to ytt as a data values file.
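The data values flow just described can be sketched as follows. This is a minimal illustration, not taken from the meeting: it assumes ytt is installed, and the file names and contents are made up for the example.

```shell
# A template that reads an input from data values.
cat > config.yml <<'EOF'
#@ load("@ytt:data", "data")
---
replicas: #@ data.values.replicas
EOF

# The inputs, marked as a data values document.
cat > values.yml <<'EOF'
#@data/values
---
replicas: 3
EOF

# ytt renders the template using the supplied values.
ytt -f config.yml -f values.yml
```

The schema work being discussed adds a way to declare what shape `values.yml` is allowed to take.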
C: Yeah, so the non-distributable layers feature. This is a feature that's going to be really useful for folks that want to run images in an air-gapped environment, an internet-less environment, where they want to run an image that contains a layer that has been marked as a non-distributable layer. Currently, without this feature implemented, that's not possible. This feature is all about introducing a new flag to the copy command.
C: It says "foreign layers" here, but it's going to be called --include-non-distributable-layers. It's a flag for the copy command, and if you specify that flag, then when you copy an image that contains a layer marked as non-distributable, that layer is going to get copied from the source to the target. So in the before use case: you have an air-gapped environment, and you have an image, let's say a Microsoft Windows image, that has a layer that's been marked as non-distributable.
C: You use the copy command with this flag, and you copy that image to a tarball. You put that tarball on a USB disk and you go to your data center, the air-gapped environment, and then you upload that tarball to a registry. Now your registry inside your air-gapped environment has a Microsoft image with a non-distributable layer, and you can use that to run in your Kubernetes cluster, or however you want to run that image.
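That workflow might look roughly like this. The flag was still in flight at the time of this meeting, so its final spelling may differ, and the image and repository names here are placeholders:

```shell
# 1. On a machine with internet access: copy the image, including any
#    layers marked non-distributable, into a tarball.
imgpkg copy -i registry.example.com/windows-app:v1 \
  --to-tar image.tar \
  --include-non-distributable-layers

# 2. Move image.tar into the air-gapped environment (USB disk, etc.),
#    then upload it to the internal registry there.
imgpkg copy --tar image.tar \
  --to-repo internal-registry.local/windows-app \
  --include-non-distributable-layers
```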
C: It's currently still in flight. If I was to make a bet, it would probably land in the next release of imgpkg.
A: Cool, thanks for the update, Dennis. Okay, so we have a last announcement here: we now have new web pages for imgpkg and vendir. If you recall, a few weeks ago we announced that we have new pages for each of the projects, and also a whole refresh of carvel.dev.
A: Now you can click into imgpkg and also vendir to learn more about those two specific tools. Each of the pages also has its own documentation, so you can now go to carvel.dev to read all the docs instead of needing to go to GitHub. One project is still pending: kapp-controller is still linking to the GitHub docs, which we are actively working on, but for all the rest of the docs you can come here to carvel.dev.
A: So now let's move on to discussion. I see two topics added here, and as a reminder, community members, y'all can add a topic here; we usually have this agenda up ahead of time.
D: Okay, can y'all see my screen, my nice discussion topic in GitHub? Cool. So this is basically the discussion topic that I created underneath Carvel, so we can talk about this. What's the why behind this? Currently, if we look at our tests and the way they are set up, I feel like they are a little bit hard to read. So let's take an example; for example, in ytt... let's go to imgpkg. Alright.
D: So if we take a look at our end-to-end tests, at these bundle tests, we can see that the big thing telling us what this test is doing is this huge name that we have here. And personally (this is a personal note), I'm not very good at reading snake case sentences.
D: So this was one of the first pain points that I had when I joined this project: it was a little bit hard for me to understand the content of each test, because I felt like I needed a bigger sentence in English to tell me what the test was doing. And with the way the tests are structured as well, sometimes it was not straightforward to try to understand what they were doing.
D: It was hard to tell where the effective assertions started, and what was part of the assertions and what was not. So, in order to try to understand whether there's a different way that could be easier for people to understand what we're testing, that would be interesting, I did a quick Google and added some options here. Currently we're using vanilla Go, what comes from the standard library testing framework, and it works pretty well, right? There's a lot of prior art, and a lot of projects have this as their testing framework. Then I added two other frameworks that I knew about.
D: One is spec, which is just a very thin wrapper on top of the testing framework... that link is not working. Oh, what is that?
D: Okay, I think there's a problem with that link. Spec is basically just a thin wrapper on top of testing, but it gives it a little bit more structure: you're running a test, and you can write sentences about what it's doing. So, for example, "it should have some default", right? It has a more BDD structure.
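Working from memory of the sclevine/spec README (so treat the exact names and signatures as approximate rather than authoritative), the structure being described looks roughly like this:

```go
package bundle_test

import (
	"testing"

	"github.com/sclevine/spec"
)

// Rough sketch of the BDD-style layering spec puts on top of testing.T:
// "when" blocks give context, "it" blocks state behavior, and the full
// test name is assembled from the nested descriptions.
func TestObject(t *testing.T) {
	spec.Run(t, "Object", func(t *testing.T, when spec.G, it spec.S) {
		it("should have some default", func() {
			if got := defaultValue(); got != "default" {
				t.Fatalf("expected %q, got %q", "default", got)
			}
		})

		when("a value is configured", func() {
			it("uses the configured value", func() {
				// assertions for the configured case go here
			})
		})
	})
}

func defaultValue() string { return "default" }
```

Note the ordinary `*testing.T` threaded through: spec reuses the standard library's machinery rather than replacing it.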
D: Another option that I saw was Ginkgo. (For some reason the Ginkgo link is okay, but the other one is not.) It uses more or less the same ideas for the BDD. I explored just spec; I didn't explore Ginkgo, but I do believe there's some sort of issue with Ginkgo in that it doesn't do very well with threading.
D: If I'm not mistaken, it is a more complete framework, instead of being just a thin wrapper like spec is. You can see that in spec a lot of things are just passed around with the testing.T, while Ginkgo even has its own implementation of the testing.T, so it completely isolates itself from Go's testing framework.
D: Do we really need to do this if statement when we're just trying to compare, for example, two strings? As a matter of fact, there are some other options that I collected here that could help us make the tests a little bit more readable, so that when you read the assertion, you know what is being asserted. So I looked into... again, this link is broken.
D: Fine, I'll just cut this here. So I looked into testify, which is basically just an assertion library that allows you to have commands like "I'm going to assert that this is equal", and then you have the expected and then the actual value. And then you can have a sentence, if you want, and say, like...
D: "Oh, I wanted oranges", or, "I was expecting oranges, and that's not what you gave me." I think this way it's a little bit easier to understand what we're trying to compare and what we're trying to do. And there's a bunch of assertions in this library that we could piggyback on. Another option was Gomega. Gomega says it's a matching library, not an assertion library, which, kind of...
D: I don't understand the difference much, but it's more or less the same thing, right? And this is the library that is recommended for use with Ginkgo. It basically contains more or less the same functionality as require.
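The readability point can be seen in a tiny stdlib-only sketch. `assertEqual` here is a hypothetical stand-in for the kind of helper testify's `assert.Equal` (or a Gomega matcher) gives you, not the real library API; the idea is that a failure reads as a sentence with the expected and actual values called out, instead of a bare `!=` check.

```go
package main

import "fmt"

// assertEqual mimics the shape of an assertion-library check: on failure
// it produces a readable sentence naming the expected and actual values.
func assertEqual(expected, actual, msg string) string {
	if expected == actual {
		return fmt.Sprintf("PASS: %s", msg)
	}
	return fmt.Sprintf("FAIL: %s: expected %q, but got %q", msg, expected, actual)
}

func main() {
	// "I was expecting oranges, and that's not what you gave me."
	fmt.Println(assertEqual("oranges", "apples", "fruit in the basket"))
	fmt.Println(assertEqual("oranges", "oranges", "fruit in the basket"))
}
```

The real libraries also bundle many prebuilt assertions (contains, error/no-error, length, and so on) so tests rarely need hand-rolled comparisons like this.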
D: Did I click in the place I was expecting? I wanted to see all the available matchers. So it's a little bit of a different syntax, but in the end it's doing the same thing: it's saying, this value that I have, it should be equal to blah, right? So these are more or less the frameworks. The other option that I was told about, to be fair...
D: ...I did not investigate a lot. There's this library that can be used for ytt tests that allows you to do some matching for us, and it brings by default some matchers that you can check out here. So there are some functions, and it works in a similar way to other assertion and matching libraries, right?
D: Okay, okay, cool! So thanks for clearing that up, Dennis. So, I started this discussion to try to understand if there were other ideas, and I attached to it a little bit of a prototype, let's call it. I picked up one particular file and said, okay, I'm going to try to convert this test file into spec plus testify.
D: Just scroll down... no, I cannot do that thing. So, Dimitri already made some comments here, but in the end, if we look at this, I think it becomes a little bit easier to reason about what is being done in terms of testing: what we're trying to test, and then our assertions.
D: I think the assertions become a little bit more understandable. There's a lot of text there, but with things like this, I believe they become a little bit more understandable, and it could help us out. The thing that I like about this is that we can have a resemblance of structure, where the tests feed from the parent test.
D: That gives you even broader information. So, for example, naming-wise, we're doing a copy bundle image test, and "it preserves the annotation", right? Or, I have an example where we have multiple layers. For this one: "the copy bundle image tests, when the images of the bundle are colocated with the bundle, it copies the images and creates the bundle when --lock-output is provided".
D: I think this provides much better information about what this test is testing, really, instead of us trying to read through... there's a description of this test. Let me see if I can... is this the file? Let me check; go back. There you go. So this is the test: "test copy bundle with colocated referenced images to repo destination and output".
D: This is the test that was converted from here (too many tabs, too many tabs) into these lines, and I believe it feels a little bit better. It also allows us to isolate things: if you want, for example, to have some setup, we can move that setup into before blocks. And to be fair, this commit that I have here is more like "these are the things that we can do"; it doesn't...
D: I tried to open this and see if there were other people interested in experimenting with other frameworks, but I'm the only one here. But in the end, what I saw in terms of differences: spec in particular is a thin wrapper around testing.T, so it still keeps all the goods that come with Go's testing package. It does all the things that testing.T does.
D: It gives us well-known terms like "when" and "it", and that translates into the messages on the command line when you run go test. Oh, by the way, these run with... if you just do go test, and you have tests with testing.T (the base standard library testing) plus spec, they all run without you having to do anything, because it's the same infrastructure being used. The other thing is that you can focus particular tests, while currently there's no real way to do that.
D: It has no extra dependencies whatsoever; the only dependency it has is on testing. And the test names can now be full sentences to describe the behavior that we expect. One thing that I noticed is that sometimes the nesting can be a little bit hard to follow.
D: If you write tests that are very big, that do a lot of things, then it might be hard for you to follow along with what context you're in. So that was one thing I saw that could be improved... well, not "could be improved"; I don't think the library itself needs to improve it, but it's something that we need to be aware of when we're writing tests. And for the testify assertion library...
D: What I saw is that it has better error output without you having to do much. You can say, "these bananas need to be equal to apples", and then it already has a phrase for you and says, "oh, but the expected bananas are not equal to the apples that you provided", right?
D: So the error messaging, just out of the box, I think makes it easier to read the purpose of the tests when you're looking at it, instead of just "not equal this". This might be a flaw I have, but I'm a little bit of a boolean-impaired person, and when I have to look at nots and ands and so on, it takes me a lot of time.
D
Well,
if
I
just
have
like
this
reason
in
english,
it
makes
it
easier,
at
least
for
me,
it
has
a
lot
of
pre-built
assertions
and
also
allows
the
user
to
the
developer
to
group
assertions
into
blocks.
So
there
are
two
different
types
of
there
are
searches
and
there
are
requires
and
the
assertions
you
can
do
like
10
assertions
and
it
will
not.
D
D
D: One thing that I saw as a downside: it takes some time to convert tests, especially because we already have a full breadth of tests, right? And it has some third-party dependencies, so it brings some extra dependencies into our code.
D: So that's basically what I have to present in terms of this experiment that I did with spec and testify. I don't know if anyone has anything that they'd like to share, if they tried to do this with different libraries or something, or if they have any kind of comments and so on. Before we do that, what I'd like to get is, more or less, to try to understand...
D: ...whether this is a pattern that we like, and something that we would be willing to start using. And if so, if that's something that we want to implement, I'd like to see that whenever we make changes to the code, we start including these new frameworks. That would be my goal with this experiment here. So I'll open the floor: does anyone have any questions, or anything they want to show, something like that?
C: I mean, I prefer the BDD-ness of the structure a lot. I agree, it just makes it easier to read. In my head, while you were going through spec, and, was it testify as the assertion library? I was thinking, because I have prior experience with Ginkgo and Gomega, there's an almost complete overlap in terms of, you know, Ginkgo to spec, and, in my head, Gomega to testify.
C: You know, they have contexts and whens, nested into each other, and even the way that assertions are made ("something should equal this") looks pretty similar. I'm just wondering about the activity, like how well supported testify and spec are. I know Ginkgo and Gomega are actively developed and have releases pretty frequently. I'm wondering how popular spec is, for example, or how widely used it is. That's something that's unknown to me.
D: So, to answer that: testify has a lot of stars here, and they release pretty frequently. To be fair, in a library like this, yeah, I'd expect it to be released frequently. As for the testing framework, for this one in particular, Stephen usually doesn't do a lot of updates, because in the end this is a very thin library.
D
The
only
thing
that
this
library
provides
to
you
is
just
the
readability
of
the
tests
and
gives
you
some
sort
of
like
a
a
structure
where
you
can
call
befores
and
afters,
and
that's
it.
So
it's
not.
I
don't
think,
there's
like
a
lot
being
developed
right
now,
because
in
the
end,
the
idea
behind
this
library
was
to
create
just
a
thin
wrapper
on
top
of
testing
t
that
could
provide
you,
some
bdd-like
behavior.
It's
it's
just
that
so
a
very
simple
library.
D: From what I know, the parallelization in Ginkgo was created before testing.T, before the standard library, was able to parallelize tests. So that's a good thing, but the problem now (at least this is the knowledge I had from maybe a year ago) is that, because they don't rely a lot on the testing library, they now have their own implementation of the parallelization of the tests. That's the parallelization I'm talking about; it's their own...
D
Instead
of
using
the
the
the
testing
that
comes
with
go
right.
So.
C: Yeah, I think you can opt in to use the testing object, or, if you use Ginkgo's, you can run tests in parallel, but you've got to know what you're doing; you don't want test pollution when you're running tests in parallel, essentially. Yeah, and it also allows a really nice way, if you have parallelism in your implementation code, they have nice primitives to help you test, you know, goroutines running in your implementation as well.
F: Yeah, just to add on to that: I think my experience in seeing parallelization issues has usually been from just not structuring the tests and isolating them properly, or using shared resources and having contention. So that's, I think, mainly on us, to be able to structure those things well.
E: Yeah, I was thinking about what the needs are that drive us here. So one of them is that test names, as they are, are hard to read; another is that the sections of the arrange-act-assert can be hard to suss out. Yeah, I agree with a lot of that, Dennis, and I saw a real improvement in the schema tests that we just reworked with a little bit of organization.
E: The testing.T object provides that Run function, which is kind of nice. When you're looking at spec tests, they can seem very similar to that, because I'm used to seeing the t.Run; it could be a spec.Run.
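For comparison, here is what that resemblance looks like with only the standard library, runnable under `go test -v` (the test body here is illustrative, not from the real suite). Nested `t.Run` calls already compose sentence-like names; what they lack is spec's before/after hooks:

```go
package bundle_test

import "testing"

// Under `go test -v`, the nested subtests print composed names such as:
//   TestCopyBundleImage/when_images_are_colocated_with_the_bundle/copies_the_images
func TestCopyBundleImage(t *testing.T) {
	t.Run("when images are colocated with the bundle", func(t *testing.T) {
		t.Run("copies the images and creates the bundle", func(t *testing.T) {
			got, want := "bundle", "bundle"
			if got != want {
				t.Fatalf("expected %q, got %q", want, got)
			}
		})
	})
}
```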
E: The other things that come up for me: the before and after are not available in the built-in testing framework, and there are hacks you can do to make that work, but it's pretty hacky. So that's a really big win from my perspective, to be able to set per-test setup and teardown, and also because you can layer things a little bit. And again, I echo your concern that you can go nuts; there's more rope here than you need to get the job done, in terms of nesting contexts.
E: The one thing that I'm just trying to add to the conversation (it doesn't mean this is a big deal, but it's an important one for me personally): I know a number of us use IDEs. I want to make sure that whatever we use can work, even if it's a little bit of work to get it working with the IDE. We live in it, we code in it, we TDD in it, and for those that do, I want to make sure that those workflows wouldn't be broken. And then I wondered...
E: ...did you run into any kind of assertion that would show a diff? We've got a number of tests where we have multi-line strings, and we want to show the difference between the expected and the result, like a patch-style diff. Does testify have that? Do you know that, off the top of your head?
D: They have the basics that everybody has, right? But I do believe that they have a lot of comparison output... but what's... oh my god, that's not what I wanted to include.
E: So, yeah, there's quite a bit of subjectivity in readability and some of those other things, but I feel like, from where I'm standing...
E: ...I could see a real improvement here. And, more importantly to me, we'd actually have more plays in our playbook, with the ability to set up and tear down context easily. The other thing that actually is a big deal is the approachability of our projects to open-source contributors.
E: So I think I wouldn't look at stars per se as some kind of direct measure, but they do indicate a little bit how many folks are out there who would probably be able to look at your tests and not just be able to read them, but be able to contribute.
E: So this is part of, I think, why the built-in testing is appealing: everybody's got it, and there's a lot of material for it. So that's the thing I think we want to keep in mind: make sure that we don't create inadvertent potential barriers, like, "okay, well, and additionally, you've got to learn this other thing."
D: Cool, thanks, John. Does anyone have anything else they would like to say? I'd also like to be conscious that we do have another topic, so my idea is that we can maybe discuss here, and then we can try to make this some sort of asynchronous...
D: ...maybe an asynchronous discussion, where I can create something in Slack and then we can contribute, if there are any questions, or if there are any other options. And eventually we could create some sort of poll, just to see if that's something that we would like to undertake or not, as the next steps for us.
E: I know you pointed to it before, but if folks have additional thoughts that come up after having seen this, and have had a chance to go look at things themselves and have questions, the discussion that you started, with this rich example and all the stuff, is a good place to get that going.
G: Cool, just one last minor comment. I realized that one of the points here, kind of a counterpoint, is that it takes some time to convert tests as well. Maybe this is implied and people are thinking that way, but an easier way to convert would be not to do a big "convert all the tests". I'm thinking of this as more of an "update things as we need to" kind of approach, which would be a nice way to introduce this.
D: One file took me some time, because I had to rejigger a lot of parts in order to put this into this format, and to make sure that the format made sense and so on. And when doing this, I also had to change some things here and there in the assertions. So that was the only thing that I was trying to call out here.
D: So, as the next steps, I'm going to leave this open here for maybe a week more, and then I'll maybe try to create a poll or something, to see if that's something that we want to take on. If there are other ideas, just put them into this discussion.
D: I'll share the discussion in here, and maybe I can later add it to the meeting notes, so we can go there, and we'll see if we want to undertake this or not. Thank you. And, Ellen?
A: I'm trying to do that right now, adding that GitHub issue to our markdown agenda, and we'll see if that worked.
B: If we want to talk about the next item, I can share my screen and go over that a little bit.
B: Yeah, so this is an issue that I filed. It was inspired by an issue that Danny Herbert filed for generating data values file skeletons, where he proposed a command to validate a data values file.
B: I thought it was a great idea, and I think it goes well with the work that we're doing around schemas. So, to kind of incorporate that idea into schemas, I propose a top-level command in ytt to validate the data values that you've given.
B: What this command would be able to provide, in addition to what ytt does currently with schemas, is more information about what may be wrong in your data values file.
B: Currently, if you run ytt -f with your files and your schema, and your experimental schema flag, of course, because this work is currently feature-flagged, that command will either fail or succeed. It fails with some errors if your data values file did not adhere to your schema; otherwise it will succeed. But we think there may be some additional information that would be helpful to know, in addition to just failing or succeeding, such as warnings for ytt best practices.
B: I think Stephen brought up some good points about what we want to recommend as best practices for ytt; that'll kind of inspire what it is we would want to give warnings on, what would not be recommended, what we want to warn users not to do. And another example that Stephen and Joao came up with was: a configuration author would want to provide a single instance for deployment; say, in your schema file, you have replicas: 1.
B
Basically,
you
can
provide
as
many
data
values
as
you
want
in
your
schema
or
your
data
value
file,
but
you
don't
have
to
use
them
and
that's
not
an
error,
but
it
may
be
indicative
of
well.
Maybe
you
accidentally
hard-coded
something
or
maybe
you
meant
to
use
those
values
but
didn't.
B
H
H
H: Right, with the same normal templating command. I really love the idea of the fail-on-warnings flag, because this to me is just a bunch of warnings: well, yeah, you may have duplicated something, or probably have an extra data value, or you may not be using something that you specified. To me that isn't an error; it's just a warning. And so offering the consumer the ability to fail, if any of those conditions are true, I think might be nice. But it seems, I don't know, perhaps small enough that it shouldn't be its own top-level command, while being big enough to offer an intermediary flag.
B: Yeah, I think some advantages of using a flag, rather than a top-level command, would be that you can go through the exact same flow that you would otherwise. If you're, say, piping the output of ytt into another tool and you include the fail-on-warnings flag, then you'll never make it to that second step; it fails early. You don't really have to change the way that you use the tool, which I think would be nice.
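The pipeline shape being discussed might look like this. To be clear, `--fail-on-warnings` is a proposal from this meeting, not an implemented ytt flag, and kubectl is just one example of a downstream tool:

```shell
# Hypothetical: with the proposed flag, a best-practice warning (such as
# an unused data value) would make ytt exit non-zero, so nothing reaches
# the downstream tool; without the flag, the same input would only warn.
ytt -f config/ -f values.yml --fail-on-warnings \
  | kubectl apply -f -
```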
E: Part of what I really like about the spirit of this idea is that it points to the heart of a part of the overall problem we have with ytt adoption, which is that there are various potential barriers sitting in front of a new user of ytt. Some of it is education, and some of it is us helping the tool be more cognitively empathetic. Garrett's heading up a yeoman's effort in trying to make that second part better.
E
But
there's
also
this
first
part
like
in
in
terms
of
guides
that
we
want
to
write
in
the
documentation
to
like
meet
people
halfway
about
where
they're
at,
and
this
feels
like
another
potential
ingredient
for
improving
the
overall
experience,
especially
if
somebody's
getting
started
with
the
tool
where
we
know
that
there
are
probably
widespread
assumptions
that
one
might
bring.
There
are
assumptions
that
many
folks
might
bring
with
them
to
using
the
tool
that
like
fall
invalid
and
it's
just
the
tool,
has
a
slightly
different
philosophy.
And
how
do
we
kindly
share
that?
E
We
also
there
there's
a
kind
of
an
aspect
of
that
in
being
concise,
so
there's
real
value
in
in
keeping
the
messaging
and
the
signal
that
we're
getting
from
the
tool
to
be
tight
compact,
if
you
will
like
high
signal,
low
noise,
and
so
then
that
makes
me
wonder
about
like
yeah,
maybe
that
linting
things
so
anyway.
E
I
really
appreciate
the
exploration
here.
I'm
I'm
personally
like
really
torn
about
like
the
what
the
interaction
ought
to
be
personally,
but
I
love
that
we're
digging
into
this,
because
I
think
it's
a
very
subtle.
No,
no
one's
directly
asking
for
okay.
Can
you
make
your
education
experience
better,
but,
like
that's
one
of
the
big
things
that's
in
front
of
us,
I
believe,
with
this.
E
G: I do think there's this kind of idea that's come out of this, of what the scope of this feature should really be: whether or not we just want to focus on things like data values, in this case, and, in some cases, should it maybe only apply if we're validating things against schemas?
G: To another extent, I think maybe that's where it should be refined a little bit more. Like, would something like this be more valuable if we start with the more specific case of validating data values, and then maybe we evolve it from there? Because I really think that something that gives you a more opinionated idea about how you should work with ytt, beyond...
G: ...just even your data values file, would be really valuable, and maybe would be really helpful for people who are using it, not even just as first-time users but as day-to-day users, to help them learn better ways of working with the tool.
G: So maybe that's something to think about with it: how valuable is this overall, beyond doing this data values validation? And would it make more sense to maybe start there, to see if this feature adds a lot of value to people's lives, to really put it in that scope to start?
B: I think one way that we can kind of get more data on that is adding use cases, seeing what use cases this could help with. I know we have three or so in here, but if we think of more, I think it'd be really helpful to add them to this issue: "here's a case where we could inform the user of a better practice", or "this is a pain point that we've seen". Right now it would be super helpful to add that in here and get more data.
D: There are use cases where you want to use the tool, ytt, in order to generate some YAML that you then want to pipe to something else, or you want to generate the file. That's one use case: the use case of someone that receives the YAML and just wants to generate the YAML and put it into Kubernetes, for example.
D: On the other side, where I am the person that is developing the YAMLs and the ytt templates and so on, I think I would prefer to have a way to tell me, "is this correct?" Because why should I call ytt to generate templates and give me a wall of YAML, just so that I have to go scroll up to see if there was any warning, or so that I have to add another flag? I think these are, like...
D: So, personally, I think that the validate gives you more of this idea of, "okay, I want to make sure that what I'm generating is okay, is good", while with ytt, when I call ytt, I want to generate my YAML. In some way I care about the correctness of it, but I just want to generate my YAML and pipe it into something that will apply it, for example, right? So...
A: Thanks, thanks, Carrie, and everyone, for the discussion. So, again, a reminder: next week we'll meet on Monday, 11:30 a.m. Pacific time. So, cool, have a great day, everyone.