From YouTube: Cartographer community meeting - Oct. 20th 2021
Description
Cartographer community meetings are held every Wednesday at 8:00 AM PDT/11:00 AM EDT
Feel free to add your discussion topics/questions/ideas to the agenda:
https://bit.ly/2Z67z08
A: Okay, hello and welcome everyone to the first Cartographer community meeting. Today is October the 20th, and we really appreciate that you are watching the recording or joining us live. Probably most of you are aware, but Cartographer is an open source project that was released to the community on October the 5th, basically two weeks ago, and the community meetings are one of the communication paths we have with you all: to discuss publicly the status of the project and the important stuff, and also to gather your comments, questions and ideas.
A: Okay, so first things first: we agreed that community meetings will happen weekly on Wednesdays, 8 a.m. Pacific time, 11 a.m. Eastern time, or you can convert to your time zone. You can reach us on this Slack channel; it's in the Kubernetes Slack workspace, so feel free to go there and start or continue the conversation. The main goal of the agenda is to remain open, so if you have any questions or any topic you want to discuss, please add it to the open mic section.
A: We will talk a little bit about the whole agenda format and whether we need to make some changes to it. We also have a Google group; if you subscribe there, you will receive the community meeting invites, so you can add them to your calendar and get proper reminders, and any news we need to share with the community you will find there as well. The repo for the project is right there, and these community meetings are, let's say, governed by a code of conduct that is also public in the repo; it summarizes to: be nice to each other, right?
A: Okay, so at the beginning of the meeting we try to collect who is attending. If you want to add your name, or the organization you represent, that's fine; if you don't want to do it, that's totally okay. It's useful, you know, to be able to connect across meetings and continue conversations with people, so feel free to add your name to this list.
A: On to the agenda. We'll start with a welcome: welcome to the first community meeting. For most of the people out there in the community, all of the Cartographer team are new faces, so we would like to start by introducing the team. I'll ask each one of you to introduce yourself to the community briefly, along with your role in the project. Myself, David Sparrow: I am here to make sure everyone in the community has what they need to feel welcome and to be able to progress as a user and a potential contributor to the project. So, I don't know who wants to go next.
B: Yeah, I can go. My name is Dan Darpanski, I'm the PM on Cartographer. I'll toss it to Josh.
C: Hey everyone, I'm Josh. I'm the engineering manager on the Cartographer OSS team, and I'll toss it to Rash.
E: Hi, I'm Sam Coward. I'm an engineer on Cartographer OSS. I'll pass it to Wishuma.
G: Hi, yes, I'm James Rawlings. I recently joined VMware, so I'm working as a TL at VMware, and I love all things automation and continuous delivery, so I'm also working on Cartographer. I'll pass it to Stephen.
H: Hey, I'm Stephen. I'm also a tech lead at VMware, and I also work on Cartographer and contribute to the design. I'll pass it to Cara.
I: Hi, I'm Cara. I'm the new senior engineering manager whose scope will include Cartographer; I should be starting today-ish, and I've been with VMware for about 10 years. I'll hand it over to Milan.
J: Hello, I'm Milan, an engineer on the knockdown team, here mostly just to learn about Cartographer. Pretty cool project.
K: Yeah, I'm Todd, also an engineer on the OSS Cartographer team.
L: Hey, I'm John, an engineer in the supply chain tools program, and I'm really interested in the intersection of secure software supply chains and Cartographer.
A: Awesome, thanks everyone. Okay, and this being the first time we have this session in public, we wanted to provide a brief overview of what the project is, and Daniel will help us with that.
B: Yeah, so at a super high level: I think most people on the call are familiar with this already, but for those of you watching the recording, Cartographer is what we're calling a supply chain choreographer. It allows you to essentially stitch together a bunch of different Kubernetes and non-Kubernetes components into something called a path to production. The idea is that the person who is defining that path to production will codify all of the pieces that are required for that workload or application to get to production, or to get to an environment.
B: We also separate the division of control in such a way that developers, for example, only really need to worry about a very small subset of YAML; we try to keep it as minimal as possible, in all things YAML, in as many places as we can. Developers worry about writing a workload.yaml, which specifies the things a developer would care about: where their application lives, any environment variables, and so on. On the flip side, the operations team, the DevOps team, a platform team...
B: ...or any other stakeholders are able to define what that path to production looks like. Cartographer takes the workload.yaml, which was written by the developer, and the supply chain specification, and essentially combines them on cluster to choreograph all of the different components of the supply chain. The other piece of it is that we're built on a choreography model, which allows us to create paths to production that are really flexible and really swappable.
B: So, for example, if you wanted to use kpack for building images, or you wanted to use kaniko, or you wanted to use something else that builds images, you definitely could as part of your path to production, and each of the different steps of the path to prod are incredibly flexible and easily swappable. So yeah, at a really high level, that is Cartographer in a nutshell. There's a lot of nuance and a lot of little things that we dive into much more deeply in the documentation.
B: So again, for those of you on the call, I probably know most of you already, but for those of you watching the recording, definitely check out the documentation as well as the repo. In the interest of time, that's the level that I'll keep it at, and I'll toss it back to David.
A: Cool, thank you, Daniel. Okay, regarding the agenda itself: remember, it's the first time we're doing this, so we have the chance to change things. So yesterday Rasheed brought up some interesting points to me about how everything is kind of piling up: all the items here basically end up in the same place, so how do we establish priority?
A: I was thinking about how to do it, and whether we should do it at all, because, you know, let's say a community user puts a question at the end here and doesn't receive any votes; we'll still need to listen to them, right? We still need to give them priority. My short experience with this, also from being part of some upstream communities, is that somehow it happens organically.
A: The topics here need to have some kind of sections, some structure. That was my original idea with reviewing outstanding items; I don't know if that makes sense. And also having a section for RFCs; again, I don't know if that makes sense. So I would like to know your thoughts briefly around this. It will also happen organically throughout the weeks, but let me know if you have some thoughts around how to organize this.
L: So I'll just jump in as a not-yet Cartographer contributor. For me, what would be helpful is a high-level overview of what's going on. If I can make it to a community meeting once a month or every couple of weeks, it helps to have an idea of the overall set of projects or work that is going on. I try to attend the sigstore meetings regularly, and they kind of start with a round table of the main projects, right?
L: So there's the Cosign project and Fulcio, and they do a brief update for each of those, and then inside of those sections they have links to the deep-dive topics, a specific RFC proposal for this area, and so on. I don't know how Cartographer is structured or what the right way would be there, but for me, a high-level overview of the work and the direction the project is going would be helpful.
A: Okay, we'll keep iterating on this, but yeah, keep your ideas coming, please!
A: Okay! So the first thing here that I see is RFC 10. I don't know who would like to chime in on this.
F: Yeah, so RFC 10 is an RFC about parameters. Right now, a template author can say, here's a default value, and then a supply chain author can say, all right, I'm either going to ask the developer to fill that param or not. But there's no power for the supply chain author to do the same thing.
F: Similar to the template author, there's no way for the supply chain author to say: well, if that's not supplied, here's what the default value should be. We've already had users who have asked us about this and said it's something they would need to work around; they would really appreciate being able to do that parameter passing.
F: Relatedly, part of the RFC also changes the manner in which we pass parameters from the workload to the supply chain to the template, where we just match on name.
F: If a template is expecting variable foo, then the supply chain just says: param foo. And because you can have two template authors that both ask for foo, we need some way to disambiguate for the developer: when the developer specifies foo, the supply chain author may be passing foo to resource A or to resource B. So we'll need the developer to specify resource-one foo and then resource-two foo.
F: That's some of the motivation as well as the intended approach.
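The precedence question being debated (who supplies a value when workload, supply chain, and template all could) can be sketched as a simple lookup. This is an illustrative Go sketch, not Cartographer's actual resolution code; the function name, the map-based scopes, and the workload-over-supply-chain-over-template order are all assumptions drawn from the discussion above.

```go
package main

import "fmt"

// resolveParam picks a value with the precedence discussed above:
// workload > supply-chain default > template default. The names and the
// precedence order are illustrative, not Cartographer's real semantics.
func resolveParam(name string, workload, supplyChain, template map[string]string) (string, bool) {
	for _, scope := range []map[string]string{workload, supplyChain, template} {
		if v, ok := scope[name]; ok {
			return v, true
		}
	}
	return "", false
}

func main() {
	template := map[string]string{"registry": "docker.io"}
	supplyChain := map[string]string{"registry": "registry.corp.example"}
	workload := map[string]string{} // developer stayed silent

	v, _ := resolveParam("registry", workload, supplyChain, template)
	// the supply-chain default wins when the workload does not set the param
	fmt.Println(v)
}
```

The disambiguation problem F raises would show up here as key naming: two templates both reading `"registry"` share one value unless the scheme namespaces keys per resource.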
H: So I left some comments at the bottom of the RFC. I'm really worried about creating a lot of coupling between the workload and the supply chain. I feel like the supply chain should be a way for an operator to create an experience.
H: It's based on different services, and those services together can cooperate to offer configuration in the workload. So my proposal at the bottom of the RFC is: could we make that params map in the workload a special, easy-to-access map?
H: Let operators override that with constant parameters that don't templatize against the workload, so that we don't introduce more coupling between those components. They can still override them, which I think was a goal: if you're an operator, you could still override a particular value over a template, and then allow defaults in the template as well. That way, if you have a template that has defaults, you don't have to use ytt or come up with a new JSON path notation for defaults.
H: There's still an opportunity for an operator to provide something that looks like a default in the supply chain, but then the developer could still override it through the workload. I'm wondering if that hits enough of the problems we're facing right now with parameterization. But it wouldn't separate out parameters per template, right?
D: It sort of requires some orchestration (probably not the right word), coordination between template authors, some set of idioms that get developed. Maybe we can provide guidance there, but I feel like that would be required. I can't jump on an obvious example right now, but I know there are certain variables that you would provide; classically it would be, maybe...
D: Credentials, right. So credentials: here's my service account. And I'm sure that, from their perspective, a template author also would say, oh, I just need a service account, so I'll look for the parameter "service account", and then that just becomes the one service account. But maybe there need to be multiple service accounts to satisfy these needs, and I'm guessing two things could come of that.
D: You could have multiple template authors all say "service account", rather than a service account that can push and a service account that can do this other thing. And that's fine: the person defining the workload would just give one service account, and it would work with all of them if it's broadly applicable. But in the situation where the supply chain author knows it's not going to be broadly applicable, they could override certain of those params for certain of the templates, and for other templates remap them into the workload, because they know that workload users might have to have separate params.
H: If template authors understand that the parameters they receive are going to be top-level workload items, not namespaced under the resources, then they can use generic names when they want to accept parameters that are shared, and more specific names when they want to accept parameters that are pretty specific to their templates. So that's the namespacing issue.
H: I think creating a simple interface where you don't couple to, say, kpack specifically when you're adding parameters, where template authors can have conventions to work through, hits the trade-off a little better for me than forcing the person who writes the workload to know all of that.
F: And I think one of my concerns is who we expect to know about which actors, and when. I don't expect template authors to be aware of other template authors, and I expect them to be the earliest actors in the supply chain that they write for; they write months before the template is actually used. Conversely, I think of the developer as the last actor, and the one that has the most communication from the supply chain author.
F: The supply chain author knows all the templates it refers to, and they can specify to the developer: here are all the fields that you need to fill in, like x, y and z. But I think there's an implicit assumption in what you were saying that every template author would settle on the same understanding of which variables would be shared and which ones wouldn't.
H: As an operator, I think it should be really easy to make changes to your supply chain without having to tell all of your developers that they have to modify all of their workloads. Whereas for a developer who has created an application, the workload yaml you create, it seems important to be able to take that to different places, right? In some instances there might be a scan that's required, and you might have to provide some configuration for the scanner; in other cases that scan resource might not be there at all, like for an image scan.
H: I understand the benefit of having a tight contract, and if the goal was that the operator creates all of this static infrastructure, documents it, and then asks developers to bring workloads to that infrastructure, I think it would work well to have more of that namespacing. But to me, the supply chain is kind of like a drag-and-drop interface that the operator can change.
H: Whenever they want, with low configuration, they can say: these are the services in this environment that an application is going to get built with. Then whatever parameters get brought in through their selection of templates are configurable by the developer in a simple way; hopefully they don't need very much configuration. And as long as we set the expectation from the beginning that templates work on a shared namespace of parameters...
H: ...then I think it'll make template authors careful about their choice of parameters, and we can set some standards for parameters that are shared between them. An example of this is environment variables.
A: Okay, cool. It looks like we'll move into the next section, open mic, which is kind of everything else. The first thing will be: revisit error handling. So, over to you.
D: Oh yeah, and this is one of those things that came up for me in priorities. We talked about this last week in the first of these meetings, which was private, before we established this open forum; I just want to make sure we get that previous business out of the way. We spoke about the standards for finding an idiomatic or standard way of doing error returns, and for handling those errors. I think our intention was to keep stack traces, but from what I can see...
D: The only time we actually saw stack traces must have been coming from people who were using the github.com/pkg/errors package, because the Go standard library errors don't log a stack trace, it would seem. So I came up with two proposals. One: we stick to the standard library, where we lose access to most of the stack trace output in our error logging, unless we're logging third-party errors that use that package anyway. In that case, we just use fmt.Errorf.
D: With fmt.Errorf we don't need to use errors.New; fmt.Errorf returns the same kind of object, with no loss of information, because there isn't any stack trace in there. The alternative is we use the pkg/errors package from GitHub. It's less conventional, although I still see plenty of people using it, and it has stack traces; it just gives them to you. In that case, you use either errors.Errorf or errors.New.
D: With the standard library, you have to make sure your logged input information is valuable enough that your debugging is easy, whereas pkg/errors is a cheap way to obtain call stack context.
H: I don't want to detract from the question of how we should do stack traces, but I think the pkg/errors one on GitHub is owned by Dave Cheney; he just got the pkg username on GitHub, it's not a GitHub-specific thing. And I don't think he plans to maintain it anymore; it says it's in maintenance mode, because it's kind of been superseded since that functionality got integrated into the standard library. I don't know if that helps.
D: Yeah, no, it does help, because I did try to look at it. I didn't understand who was owning it, and I knew it wasn't a GitHub thing; I just used the word GitHub to differentiate between it and the standard library, because that's how people write it. So if he's planning on not maintaining it, that means to me we should use the standard library: happily just use fmt.Errorf and not worry about errors.New.
D: So that's very easy, very straightforward: %w to wrap when you want to, and we tackle our contextualization of logged errors and that sort of stuff separately, in a discussion about logging.
A: Cool, it seems like we're moving to the next set of items. Also, Rasheed was kind enough to point me to a document from the original discussion meeting, which already had some items that I took the liberty to move here. I believe those were created by Sam.
E: Yeah, I think, as Rash said, we kind of had a bunch of talking points that we just had lying around, and I guess I had a few more that came up recently that I just dumped onto that list.
E: So I can quickly go through this, I suppose. First, we've got the kuttl tests naming convention. Right now we pretty much just have a convention we've been following everywhere, where we name all the objects in kuttl tests.
E: We separate the parts with dashes, but then we also need to namespace them by the test as well, so they need a prefix. You can see here we have something descriptive, which might be the name of your test, then triple dashes, and then something that describes the test object.
E: I think we probably don't want to use more than 255 characters anyway, because at that point the names of the test objects are probably incomprehensible, but 63 is definitely a limit that you can bump up against pretty easily. So what I'm arguing for is to universally change the right-hand side of the triple dash to be dot-delimited, or something like that, just so that you don't keep bumping into the limit, because then you end up really pruning names and sometimes having to abbreviate things.
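For context on the 63-character limit being discussed: many Kubernetes object names must be valid RFC 1123 DNS labels, which cap at 63 characters of lowercase alphanumerics and dashes. A quick sketch of checking a generated test-object name; the `---` naming scheme is taken from the convention described above, and `validName` is a hypothetical helper, not part of kuttl or Cartographer.

```go
package main

import (
	"fmt"
	"regexp"
)

// dns1123Label matches the RFC 1123 label rules Kubernetes applies to many
// object names: lowercase alphanumerics and '-', starting and ending with an
// alphanumeric, at most 63 characters.
var dns1123Label = regexp.MustCompile(`^[a-z0-9]([-a-z0-9]{0,61}[a-z0-9])?$`)

func validName(name string) bool {
	return len(name) <= 63 && dns1123Label.MatchString(name)
}

func main() {
	// Hypothetical kuttl-style names: "<test-name>---<object-description>".
	fmt.Println(validName("supply-chain-test---source-provider"))
	fmt.Println(validName("a-very-long-test-name-that-keeps-growing---and-an-equally-long-object-name"))
}
```

Dots are not allowed in a DNS label, so a dot-delimited scheme would only apply to names validated as DNS subdomains (253-character limit), which is the looser rule some resources use.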
E: Honestly, it's very much an aesthetics question, and I think the more interesting piece below is the logging design discussion, but I'll let other folks have the floor first to see if they care much about this testing convention.
F: I mean, I think the fourth bullet, that if we use the dot we don't run into the character limit, I'm right on board with. I think the limit is a not-nice gotcha for some future devs, so yeah.
E: Yeah, I'll plan a chore to go through and change all the tests, just so that it's obvious that that's the approach, and add it to the contributing guide, right? Yeah, absolutely. So, logging design. This is something that has just become apparent; we picked up a story recently. We've had very little requirement before about what we needed to log, and so we've got some very minimal stuff.
E: What's there just happens to be there from when we wrote the code; there are no strict requirements in any story about what needs to be logged. And we've just run into a situation recently where we had to question what happened when something unexpected occurred related to caching, and so now we're going back and adding logging to the cache.
E: But this logging is pretty verbose, and so, I guess you could say, this is the first time we're really adding what we might consider debug logging. It has shone a bit of a light on the fact that we just happen to be using a logging framework by convention; it's somewhat equivalent to the errors discussion.
E: There are differing views out there on how logging should be structured, and varying frameworks and approaches for that, and of course that raises the question: if we're going to start undertaking debug logging, what is our approach? How do we want to think about it?
E: Do we want everything to go into one log? Are verbose debug logs terrible there? Do they need to go somewhere else? I think quite possibly we need to think a lot more about the stories that we've got when we're reviewing them: what is the logging use case? Who are the stakeholders, and what will they care about happening?
E: And of course one of those stakeholders is the support team, right? So I'm arguing for a lot of things here that are all kind of around logging. One of them is asking us to be more intentional about what goes into the design of the stories, to say what needs to be logged, and then two, I think, is investigating...
D: I think our stories need to specify the kinds of log messages that a user cares about. So I'm talking about info and error; that's important straight off the bat. And for ourselves, debugging as a kind of user, including users who are trying to debug whatever's going wrong: that doesn't necessarily need to be in the story, but during review, something with a lot more context needs to be supplied.
D: I'd be curious to know if there are conventions out there, because it would be preferable, in my mind, to have a debug log, even if it's truncated, actually stored somewhere so it can be obtained after the fact, rather than asking a user to reconfigure their deployment to turn debug logging on so that they can then ask for a pod log. Issues tend to be painfully hard to recreate after the fact, so we'd love that.
H: So, sorry, is the question about debug logging, and whether there's another place to put debug logs besides the pod logs? Generally, no. And in kpack, I think, although I'm less familiar with what they're doing recently, they use log levels and just allow that to be configurable, and that's usually the common approach.
H: The important thing, or an important way to think about this, is that when your controller logs, it goes into pod logs that live on whatever node the pod lives on, but then the end user may set up something like Fluent Bit or Fluentd that collects pod logs across the cluster and sends them to some log storage solution that can keep them permanently.
H: Aside from letting the component be configurable in the log level it logs at, I don't tend to worry too much. I wouldn't log every millisecond or whatever, but I would tend to lean toward verbose logging that a user can really use to identify problems.
G: No, I'd agree; that's the approach I've been familiar with too. In fact, we actually wrapped a logging library and enabled some nice environment variables to be able to switch on debug levels and different types of formatting as well, because you might want to be able to configure, say, Stackdriver formats: if you're running on GCP, users might want to have Stackdriver-style logging, and JSON, and various other things as well.
D: Oh, I don't know what's going on with my mute; there we go. We're currently using the zap logging that comes as the default out of the gate with kubebuilder and controller-runtime. I believe it's configurable; I mean, it has at least a couple of serializers.
D: So, is anyone aware of any concerns, or any corners we might be driving ourselves into, using that? Or will that be satisfactory for those kinds of use cases, as far as we know?
H: Yeah, I think zap is pretty standard right now. There's an interface library that some people use zap through, which at least kubebuilder used to use, but I don't know if it still does; it's called logr, I think, but it just changes the interface over zap.
D: Let's make sure that in our stories we have some kind of identification: before the developer is deep in the weeds of what they're working on, we identify the kinds of notices we want to hear, the sorts of things a user might expect when an error occurs, or when there is some information about what's going on under the hood that might be valuable to them. Then, for debug logging, let's try to get as much context as we can in there and be verbose enough to just not stress about it, because we know it can be behind a configurable level. Instead of stressing about how much information to provide, we provide as much as we think is necessary, which addresses a problem we've had in the past: we have not surfaced enough information.
A: Yeah, as for Daniel's suggestion to add an action items section, which makes sense: I just want to make sure I understand what the action items will be for all these logging discussions. I think I understand: more identification, and making sure we document some of these specifics in stories. But I just wanted to confirm what we will have as action items for this logging discussion.
D: No one seems upset with that approach, and it seems safer than trying to DRY those results up. If everyone's okay with it for now, the idea is: you log an error and then you return the error, and it might be logged multiple times up the stack, and that doesn't matter either at this point. It's better to be noisy than to be too quiet.
B: I was just going to say, following on the action item suggestion: we should probably save the last five minutes of this meeting to assign action items to volunteers, or have volunteers claim action items, and then, to start, just make sure that issues get created for them.
D: Oh, I will just mention, though, that there are at least a couple of frameworks for this already out there, in the big projects: Knative, Carvel.
D: This came too early on here, sorry. It's okay, but I just think it was valuable to make it shared that we're actually thinking about it. I also think it's safe to duplicate the code, so long as the tests pass and we've got a product that works; I think it's fairly safe to duplicate the code at the moment.
E: It was unfair, like it took up all the space in the other half of this meeting, but this is another interesting one, because the kuttl tests feel like what I would normally call acceptance tests. But the way that we write them, they're really obtuse; they're hard to reason about in terms of what we would really expect features to be used for.
E: They just go: oh well, I just want to see the way that something gets templated out, so I'll just template a config map or a test object. It's not always obvious to see the big picture from the kuttl tests, and I wonder how much that's a good or a bad thing; I don't know. I know we have the intent tests.
D: There was an action item that I took from the last inception, which I still have every intention to do, which is to sit down and convert a handful of kuttl tests, probably around one of the smaller sets that we have, probably delivery or pipeline, to something a little bit more BDD using Ginkgo, because that's what we're using as our framework, to make them more concise and easier to recognize what they're testing.
D: So I had an action item to try to generate an example of how we would convert them, because we also took from that the feeling that using kuttl tests as our primary form of integration testing made it harder to debug.
E: Yeah, because normally concision in tests is good, but when it happens in the kuttl tests there's no verbiage or prose around anything, right? You've just got a bunch of YAML documents.
D: Maybe that's another thing we can make sure we cement at the next meeting, if we've got time before then; otherwise, at a meeting in the future.
H: This may be a little tangential, but I know Kubernetes still uses Ginkgo and Gomega, or at least Ginkgo in some places, but they seem to be moving away from it, and it doesn't seem to be the direction that a lot of open source Kubernetes controller projects are going these days. Do we, you know...?
G: No, I think my general view of tests is just trying to make them as easy as possible, since people create lots of them: reducing the barrier to entry for writing tests along with code contributions, and having them familiar from other projects. Having as much consistency across projects does reduce that barrier, I think. Do you know what they're looking at doing? Just writing general Go tests with the Kubernetes fakes and things like that?
D
It does, just because we put use-case-oriented language in it. We have an example right now where there is one test that is use-case-oriented and another which is just context-oriented: given, you know, this state, this is what's going to happen, versus BDD language, which is, you know, the user scenario.
D
So
so
there
is
it.
It
comes
down
to
how
you
write
the
the
language
in
your
open
strings.
Really,
I
think
you
can
keep
that
pattern
of
describing
user
experiences
even
in
low-level
testing.
All
right.
I
was
looking
at
spec
with
marty
because
he
mentioned
it,
which
is
steven's
framework.
D
You
know-
and
I
was
like
I'd
like
this-
not
because
I
don't
like
ginkgo,
but
because
it
would
give
me
standard
runner
behavior,
which
means
in
the
ide
it
would
be
easier
to
you
know.
Ides
will
integrate
and
tell
me
this
error
failed.
This
error
did
not,
and
it
would
make
more
sense
because
most
ides
know
how
to
handle
anything.
That
is
conventionally
a
go
test.
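As a rough illustration of that "standard runner" property (a sketch using a minimal stand-in type, since a real example would live in a _test.go file and use *testing.T): nested t.Run-style subtests compose their names into paths that conventional tooling, such as `go test -run 'TestDelivery/when_the_source_is_ready'` or an IDE's test runner, can address directly.

```go
package main

import "fmt"

// T is a minimal stand-in for *testing.T so this sketch runs standalone.
// In a real _test.go file these would be t.Run calls on *testing.T, and
// each nested name becomes addressable by conventional Go tooling.
type T struct {
	name string
	log  *[]string
}

// Run records the composed subtest name, mimicking how testing.T builds
// "Parent/child" paths, then invokes the subtest body.
func (t *T) Run(name string, f func(*T)) {
	sub := &T{name: t.name + "/" + name, log: t.log}
	*sub.log = append(*sub.log, sub.name)
	f(sub)
}

// runSuite drives a BDD-flavored suite (test names invented for
// illustration) and returns the subtest paths that were registered.
func runSuite() []string {
	var log []string
	t := &T{name: "TestDelivery", log: &log}
	t.Run("when the source is ready", func(t *T) {
		t.Run("it stamps the pipeline object", func(t *T) {
			// assertions on the stamped object would go here
		})
	})
	return log
}

func main() {
	for _, name := range runSuite() {
		fmt.Println(name)
	}
}
```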
D
So I'd be interested in that. This is entirely orthogonal to whether we ditch kuttl for something that's an in-memory test. So, on the original point, just to draw a line under it: we definitely want to do the work to figure out what it would look like to stop using kuttl and get some in-memory tests going, which I think are more friendly, for the most part, when doing story-oriented testing.
D
But I would personally like to see some examples, because I've looked around to find examples and struggled to find testing that even makes any sense to me in the Kubernetes sphere. So I'd love to get feedback from people who have seen successful open source projects that are testing well at the integration level, and I'd happily take inspiration from that. I think we should all maybe review that together, because, in my situation, the first thing I thought of was Carvel.

D
I couldn't find any tests there, so I don't know. I'd like to see some great examples.
G
It came from projects I've been working on recently. We tend to do fairly fine-grained unit tests sitting alongside the code, and then we would have a wider set of BDD tests, but they were more…
G
We would actually physically build and deploy a cluster, provision it, and run through some sort of scenarios, for example: create an application, get it promoted. So we're making sure that that end-to-end behavior isn't affected by the change. But there's a cost to that as well, so you don't want to be doing that on every single change, bringing up lots of clusters, on different cloud providers as well.
D
Yeah, and I think that's a stylistic choice that not all of us are going to be comfortable with, because I, for one, like my regression tests to be at a level that runs fast but actually tests an integrated system, not a bunch of things; I want to feel comfortable refactoring. All right, so if we can find examples of that, it would be really nice.
H
To me, the important thing is that, in the long term, when we ideally have many more contributors from different places, it's really easy to add tests that give us all the coverage we need, you know, at the integration level, at the unit level, and whatever. So I really like the idea of looking at what other projects that have been successful have done, and what kind of coverage they have, and just keeping an open mind to that.
H
You know, based on the past, but balancing that with what's going to be the right thing for the long-term success of the project, when, you know, half the people here may be working on different things in three years or whatever, and not just what we'd like to do based on our own recent experiences, if that makes sense.
D
Yeah, we currently use the API server approach, so we run Kubernetes without being able to generate pods. Is that in controller-runtime? I forget where it is. envtest, yeah; I think it's part of controller-runtime, right? Or is it part of… anyway, wherever it is, we run etcd and an API server, so we can do everything that doesn't involve actually standing up a pod.
G
But
it's
still
is:
it's
turned
into
an
episode.
When
you
can
see
you
can
apply
your
resources,
you
can
check
in
your
search.
You
can
actually
interact
with
it
like
it's
an
api
server,
so
it's
just
a
an
interesting
viewpoint.
Really,
okay
I'll
get
to
know
I'll
dive
in
and
gain
understanding.
It's
more
question
really
yeah.
D
But please, let's see examples. If anyone's got great experience with well-tested open source products, we can take the lead from those sorts of things. What we're doing is just using our experience with other, entirely different products in entirely different scenarios. We're not coming at it as a K8s controller developer who can say "I've used this, or I've used this"; we've rolled with what we know, but if we can see great examples to follow, that'll be fine.
A
The
session
is
so
interesting
that
we
are
way
past
the
time,
so
hopefully
the
we
will.
We
won't
have
the
chance
to
discuss,
live,
vrs,
rfc
13,
like
family,
so
we'll
have
to
follow
up
the
next
meeting.
If
you
don't
mind
and
also
the
points
that
I
put
here
for
anything
urgent
so
finally
for
the
action
items,
we'll
ask
some
of
you
to
please
voluntarily
take
take
some
of
those
action
items
feel
free
to
add
your
name
to
the
list.
A
Okay,
so
well
again,
we
are
past
the
time.
I
really
appreciate
your
time
and
have
a
nice
day
thanks
david.
Thank
you.
Thank
you.