From YouTube: Cartographer Community Meeting - Dec. 8th, 2021
Description
00:00 Intro and welcome
01:07 The TL;DR - What's new in the project this week?
06:19 Follow up from previous meeting
09:22 Open Mic (Issue 430 Remove RBAC Informers)
21:07 Permissions and ServiceAccount discussion
31:23 Provide more informational error messages
44:51 Next version numbering
Community meetings happen each Wednesday at 8:00 AM PT / 11:00 AM ET
See the agenda here (https://bit.ly/2Z67z08), add any topic you may want to discuss and join us live!
A: Okay, hello, welcome everyone to the Cartographer community meeting. Today is December the 8th, and, well, let's get started. Let me share my screen with the agenda. There you go. Okay, yeah! Thank you, Emily. I think I have to do the same, you know, add my name in this list; that will help us keep up the conversation even after the meeting. And remember, all sessions are recorded and stored on the VMware Cloud Native Apps YouTube channel, under the Cartographer community meetings playlist. All right, welcome everyone.
A: We don't see exactly new faces, but we are happy to see you here again. Okay, so the TL;DR is our first section, where we try to summarize what's new in the project this week, what the team is working on. I try to contribute at least one item there.
A
One
thing
that
it's
definitely
new
and
it's
it
it
will
be
really
useful
for
the
community
out
there,
it's
having
architecture
and
troubleshooting
docs,
it's
open
for
feedback.
I've
been
reading
it
using
it
and
it's
you
know
it's
sent
from
heaven,
because
I
I,
as
a
user
I've
been
longing
to
to
have
this
overview
and
see
you
know
visually
to
see
all
the
concepts
how
they
relate
to
each
other.
A
So
it's
going
to
be
very
useful,
but
again,
if
users
out
there
have
opportunities
to
improve
and
they
have
ideas
or
suggestions
to
the
dogs,
please
please
feel
free
to
open
up
issues.
A
So
so
we
can
preach
and
and
work
on
it
and
also
there's
a
new
troubleshooting
section.
Troubleshooting
guide.
All
of
these.
You
can
check
it.
You,
you
will
be
able
to
see
it
right
now
in
the
development
version
here,
and
it
has.
You
know
it's
it's
very
informational,
because
I
I
found
this
problem
several
times.
You
know
the
missing
value
at
pat
and
not
having
a
clear.
A
You
know
not
having
clear
guidelines
on
what
to
do
next.
So
this
is
really
good
because
it
has
you
know,
missing
value
output
seems
to
be
a
kind
of
generic
or
one
of
the
most
common
unknown
states,
and
it
has
information
what
to
do
on
different
conditions.
When
this
message
can
come
up,
so
thank
you
for
for
this.
Thank
you
to
tim
for
working
on
this.
E: Sorry, I was muted. No, yeah, along with docs we're working on getting some new examples. We've always had one example, and we're looking at turning that into kind of a tiered example: here's just "get your thing to prod"; here's "get your thing to prod with testing"; here's "get your thing to prod using a GitOps flow". And then also on our short-term horizon is upgrade testing. I think we're just at the very beginning of that.
A: Okay, great, thank you. Okay, from the previous meeting we had a couple of action items; one of them is already in the works. I've been talking to Hector and Tiffany about who will be working on the demo for the CNCF that we discussed in the previous meeting.
A: Yeah, it's just that I haven't run a naming exercise before; I think it's more of a PM skill, but yeah. Certainly, I saw in the docs that "blueprint" and "owner" are still there. Just to mention that there are several other ideas, and with a little help we could structure a proper naming exercise, if there's a need to change the naming for those classes first.
E: Yeah, same for myself. I think blueprint is a great name; I'd be voting for it if we ran such an exercise. I'm not super hot on "owner", but I also haven't heard a name that I am super hot on, so I wouldn't want to take precious cycles to spin on that.
A: Okay, I agree, that's great. Cool, great. Now, I don't know if there's anything else regarding blueprint; if not, we can move to the open mic discussion. This is from Marty, I believe.
G: We talked about this in the office hours meeting, and I had to drop a bit early, but the outcome was: it was decided that we would remove the RBAC informers, because of the issue with the informers handling a lot of the same logic that the controllers have. And I left some follow-up questions in there; I see Scott answered a few a little while ago, but yeah.
G: The problem with that is, unless you really fix the roles within the first minute or two, then who knows how long you have to wait till it reconciles again, and then it's essentially useless; you may as well just end up deleting and reapplying your workload. So I think the default backoff is useless for people in that regard, right? And so, say we reconcile every five seconds: then, you know, once we have this permissions failure...
G: Okay, so then are we going to just constantly have logging, constantly reconciling with the same error? Or do we say, okay, fine, we'll limit that, like this long polling that was mentioned; it sounds like there was some sort of time frame we thought was reasonable. Is it reasonable to do every five seconds for 10 minutes? It seems a little bit arbitrary at that point. So I don't feel like there's a great solution to solve this without informers.
B: I think it was more of a heuristic: having sort-of-controller code in an expansive map-reduce felt messy to you, and I understood that, and I feel like we could remove that complexity there. We don't need to go adding it anywhere else until users have serious issues with it, or take serious issue with it. So the heuristic is:
B: Why don't we wait to see if it's a big problem for users before we do something other than use an exponential backoff? I personally don't like the exponential backoff, because it can make debugging a little hard and noisy; I'd like it to be for really, truly exceptional scenarios. So that's why I was asking: could we just use a long poll, a long poll that still lets users get something resolved, maybe before they go, "I'm done with this, I'll delete and reapply"? I was thinking 10 minutes or something like that.
G: Apply a workload and just see which roles we absolutely need to get going; that's how we made the example, and it was brutal to keep deleting and reapplying, and that's when we said, okay, maybe we should have informers. And I've seen conversations (I don't know which Slack they were in) where people said, "oh, I have this error, I'm missing a permission", and the answer we've been able to give is just, "add this role and you'll be good".
G: We can create an in-memory mapping, but the work of mapping, say, a role to a service account has nothing to do with Cartographer's inner workings, and that can stay, you know, higher up. But I'm just nervous that this long polling is a light solution that won't actually end up solving anything, and it's not worth it.
G: And I think it's not just in this case, right? It's also in mapping workloads to supply chains, and deliverables to deliveries. We have the same problem there: now that we do best-match on labels, we have to do the same best-match in the informers, to make sure we're actually informing on the right ones.
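The "best match" selection described here can be sketched as: among the blueprints whose selector labels are all present on the workload, pick the one matching on the most keys. A simplified illustration, not Cartographer's actual implementation; the label keys and chain names are made up:

```go
package main

import "fmt"

// bestMatch picks the selector that matches the workload's labels on the
// most keys; a selector with any non-matching key is excluded entirely.
func bestMatch(workload map[string]string, selectors map[string]map[string]string) (string, bool) {
	bestName, bestScore, found := "", -1, false
	for name, sel := range selectors {
		score, ok := 0, true
		for k, v := range sel {
			if workload[k] != v {
				ok = false
				break
			}
			score++
		}
		if ok && score > bestScore {
			bestName, bestScore, found = name, score, true
		}
	}
	return bestName, found
}

func main() {
	workload := map[string]string{"workload-type": "web", "team": "a"}
	selectors := map[string]map[string]string{
		"generic-chain": {"workload-type": "web"},
		"team-a-chain":  {"workload-type": "web", "team": "a"},
		"batch-chain":   {"workload-type": "batch"},
	}
	name, _ := bestMatch(workload, selectors)
	fmt.Println(name) // team-a-chain: two matching keys beats one
}
```

The point being made is that any informer would have to re-run exactly this scoring to know which workloads a changed supply chain actually selects.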
B: To be honest, I think that the map-reduce functions that make up the "I want to be informed for this, because this controller needs to be informed when any of these things change" logic: we can just wholesale pick up that logic, put it in the same namespace as the reconciler for it, and then call it from the... sorry, the word's escaping me, what do we call it, a watch? Informer, thank you. Call it from the informer, and then we would at least have localized code.
B: That makes sense; that's why it exists. We can test it in isolation and still depend on it, until such time as we need to do in-memory maps, which I personally think are riskier. But Scott does have some suggestions: there is a tool that we should maybe investigate, I think it's the Knative one, that is designed to set up in-memory mapping for informers. Is that right, Scott?
H: The general pattern is that, as you're reconciling the resource, you basically know what resources you need in order to successfully reconcile it. So as you're going through, you can basically say: this is a resource that, when it changes, I want to re-reconcile the resource that I'm currently reconciling, or at least add it back onto the queue.
B: We now have logic to say "take that one out and put this one in"; those are my concerns. Whereas map-reduce is very straightforward in my mind, and I wouldn't recommend it if there wasn't a client-side, transparent cache, which we get for free from controller-runtime. That's why I recommend it: because it's very straightforward to say "from these, map down to these results", and they're easily tested, versus an in-memory cache. Map-reduces are just, I think, naturally easier to test. Those are my thoughts.
H: Yeah, I think kpack is still using the Knative package, if I'm correct; maybe they switched away from it, might be. But I basically did that copy-paste into reconciler-runtime, which is a library that I maintain, though you could also copy it into something else. It's one type with, like, a helper function; it's not that much code.
H: Okay, because what happens is that the worst case is you're just enqueuing additional resources to be reconciled. So running an additional reconcile against a resource is like a no-op if there's nothing to do.
H: The one that's implemented basically just has a time-based expiration of the keys.
A: No, sorry, somehow sharing was paused, I don't know. No, but I'm still here, no problem. Yeah.
A: Okay, great, great discussion. Anything else regarding this specific issue? I think it's an ongoing conversation.
E: I wrote this sentence; it is true, it's how Cartographer is set up right now, and it seems to run in direct opposition to one of the goals, which is that developers shouldn't have to know what's going on with the workloads. The way permissions work right now, the developer has to know exactly what the supply chain is going to stamp out, in order to create RBAC rules for the controller
E
To
do
that,
stamping
and
I
just
yeah-
I
didn't
didn't
like
writing
that
sentence,
and
so
I
wanted
to
bring
that
up
at
a
meeting.
G: I just put a link in the chat. Supply chains can now create and reference service accounts at the supply chain level, and so, if the developer did not fill out the service account name field, it would default to what's set in the supply chain. And so it could be on the operator: if we want to write the docs in such a way that the developer doesn't care, we could put the onus on the operator.
E: It was: whose responsibility is it to reference that service account? Is it the developer's, or is it the operator's?
I: We allow that to be specified in both places, so you could just do it on either side. Okay, that makes sense. There's another question: if you're setting up RBAC rules, you probably don't want to set star, and so then, for someone to set RBAC rules for that service account, they may need to explicitly allow the different things in the supply chain to be created.
E: With those cluster roles, I guess my question is: are we saying "then you, as the developer"? Which is what I think when I hear "in the namespace", that it's the developer's namespace. Because then it goes... I totally hear you. I think it makes total sense to have the templates say "here are the permissions I need", and then to roll those up to the supply chain.
E: The supply chain says, "hey, I included this template, and so its included roles are going to be bound to the service account that I'm using". And then, when the developer's workload comes along and says, "hey, I've got this code", Cartographer's like, "yeah, we'll stamp out your code", and you shouldn't need to know what permissions some template that you may never see requires.
I: I think there are a few things as part of that, like whether the service account is in the namespace. Because in both the case where the service account is specified in the supply chain and where it's specified in the workload, the service account itself is always in the namespace. In the operator case, the service account is just a way to say "the default service account name in the namespace is this"; it's not setting an operator-specified service account.
A: But if you don't specify the namespace, then it just picks the one in the namespace of the workload, is that right? Yeah. I thought that was kind of the configuration we were going to advertise, but maybe that's a different topic, because if you use a shared service account across all workloads, then it's, like, a little bit of a privilege...
F: I did wonder: do we know what privileges we would expect developers to have? Do we call out whether they need to be able to create roles? What's the minimum set of privileges that we think developers should have, because potentially they won't be able to create it otherwise.
I: My mental model for this is: the operator provisions namespaces for developers. Those namespaces have a role binding to an aggregated cluster role, and that gives the service account (which is also created in the workload namespace) access to create a specific set of resources, not all resources within the namespace. And then the developer just creates the workload in the namespace, so it's tightly restricted to the namespace.
B: Right, I just wanted to understand what's there. And the other one is: in the workload, you can specify a service account within the same namespace?

I: I think all three of those work with the current implementation. If you don't specify a namespace, a service account that's specified in the supply chain means: use the service account of that name in the namespace of the workload when you're creating the resources of that workload.
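The resolution order just described can be condensed into a small sketch: the workload's own service account name wins, then the supply chain's default, then the namespace's "default" account, and in every case the account is looked up in the workload's namespace. A simplified illustration of the semantics as explained here, not Cartographer's actual code; the names in the example are made up:

```go
package main

import "fmt"

// resolveServiceAccount applies the precedence: workload field, then supply
// chain default, then "default"; the namespace is always the workload's.
func resolveServiceAccount(workloadSA, supplyChainSA, workloadNS string) (name, namespace string) {
	switch {
	case workloadSA != "":
		name = workloadSA // developer set it on the workload
	case supplyChainSA != "":
		name = supplyChainSA // operator set a default on the supply chain
	default:
		name = "default" // fall back to the namespace's default account
	}
	return name, workloadNS
}

func main() {
	fmt.Println(resolveServiceAccount("", "cartographer-sa", "dev-team-a"))
	// the supply chain names the account; the workload's namespace hosts it
}
```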
I: I could see that being useful on some platforms, right, especially if you're a smaller cluster with fewer developers or whatever. But I think it is inherently a less secure configuration, because it allows you to get configuration into something that's going to be set by a service account that potentially has access to things outside of your namespace on the cluster. And so I don't know if I would recommend that configuration first, although I think we should document all three workflows and describe the implications.
B: This is where I was going with this, actually, because we have a story up for it. I think we're going to pull it today, or soon, to write something similar to our architecture document: something diagrammatic, something useful, something with warnings about what's best to use. And so we might ask: who wants to help us make sure that we get a good review on that? I'm going to ask you, Stephen. What about you, James, would you like to? Yeah, cool. So yeah, whoever does pick up that story...
B: I suspect it might be me, but whoever does pick up that story, it'd be great to include James and Stephen on the review request for it, and then we can make sure that it's accurate and best in class, or whatever we want to call it: you know, putting the least-privilege security model forward as much as we can.
B: When the PR is created, we'd just like to pull in some people. David?

I: That's definitely... I'm only here two more days this year, and then I'm back in the middle of January. So if I don't respond after that point...
F: Well, I just had one very, very quick thing, sorry, it almost went by, but maybe I'll follow up and add a comment. I noticed on the troubleshooting, which is looking really good by the way, I think one of the things I've felt is that it's hard to know, sometimes, where to go look for the next debug message.
B: I feel like that's not in the troubleshooting; it's maybe in "if you happen to be using kpack as one of your templates, here's troubleshooting for that stuff", and we may create categories for that. But the broader troubleshooting (correct me if I'm wrong, you all can say this is probably a bad approach) is, at the moment, written assuming that it's just the part where you created your supply chain, whatever it might be.
F: Yeah, well, what I was pondering there was: is there a way to catch, whenever we detect an error and we're about to update the message in the status of the workload, almost, if we know the GVK, to say, "okay, run kubectl describe on this resource in this namespace", to help give the user the command to run to get the next bit of information?
F: Yeah, I mean, maybe there's something people could think of for the future as well: just little tips or something, just to help give a bit more information, pointing people to where to go and find information.
F: Personally, I quite like that. I've done some things in the past with health checks and things like that; it helps. And maybe in the docs, then, you can have, you know, "go to the community if this is beyond all of this". It's just giving people a bit more information on how to go and find information. Maybe we can riff on it a little bit more, but it's just that initial experience of "okay, something's not quite right: where do I go next?"
B: I think the head will always be safe enough; you know, we could always go to development. Actually, no, there's...
E: And I thought of this because of what you were asking about, James, how to help users troubleshoot even better. In some of our examples, and again in some places in our code base, we do talk about kubectl tree and how that can help you see the dependency graph.
E: I don't know if we want to put that into the troubleshooting guide as well; I'm not totally sure.
B: I don't want the troubleshooting section to be one where you get there and start to use commands that you now have to make sure you've installed, all right? I wanted it to be as minimal as possible, and so that's my guiding rule for this. And it could be that that shouldn't be the rule; we could say, "look, at minimum, if you want to debug this thing, get tree installed", because it really...
B: But that was just so you know where my head was at with the first cut of this. It was kind of like: hey, can I give people enough info to get going as though they're not using anything special? Maybe they're running on, you know, a business-provided laptop, with whatever the rules are wherever the person lands. Surely they've got kubectl; they might not have anything else, right?
B: I think maybe there needs to be a section on understanding the target objects, which is why the way we descend the tree in the first step of each of these bits of debug is: well, here's the output that tells you what the thing is that's being owned, that we're waiting on, right? You don't need to look at the tree to see what the general status of things is. But maybe there should be a section on that, and there is actually a kubectl command...
E: That approach that you were talking about makes total sense to me: yeah, let's allow them to troubleshoot with any tool that they have. I personally would push a little more forcefully on "hey, here's a great tool", not just in a note, but like, "hey, the developers of Cartographer use kubectl tree all the time".
E: Happy to. And then, on a similar note, there's this code that was written by folks that are on the team, q-logs, and some members of the team use it.
E: I find it very useful. It's essentially a tool that will allow you to say, "here's the workload: what's going on in the logs of these objects that we've created?", and it just tells you. So, one: is this the place to discuss that tool? And two: if so, what do we think about making that tool available to others?
E: It doesn't log; it reads logs. In the same way that tree lets you see "hey, there's this dependency graph, here's all the stuff in that graph", q-logs is like: I just specify the top-level object, and then it's "hey, here are the logs coming off of all these different children". Yeah.
H: The two tools that I use most commonly for that are either kail (k-a-i-l) or stern (s-t-e-r-n). Both of those will allow you to specify a label selector and basically just grab a bunch of pods, or you can start getting more advanced: kail will allow you to specify the name of the deployment, or the name of something else, and it'll basically just go find all the appropriate children.
G: Yeah, so we are still on 0.0.7, whatever; we're up to like eight now, so we have RCs out for 0.0.8. When do we want to bump to, like, 0.1, or even 1.0? It would be nice to start following semver for real at some point, preferably soon; but if we don't feel like the API is stable enough yet, maybe we're not ready for 1.0.
E: What do y'all think? I mean, there was a meeting where we were asked to consider the names "delivery" and "deliverable". I would like to get buy-in on whether we're sticking with this or changing it before declaring 1.0. But I also don't know if it matters; are we keeping two versions? There's the semver version, and then there's the v1alpha1.
B: I'll just say that most Kubernetes projects, much to my... I think it's rotten, but most of them never hit 1.0, never hit v1, and it's like: so when can I start trusting the semver? So I'd love to see it, and I think it's a great thing to bring up, Marty. I also just think the thing that we need to say is: one of our goals is to get to a 1.0, but to do so while mitigating the risks of getting there too soon.
F: Maybe it's best to hold on a little bit and see what comes out of the coming weeks. I guess having a 1.0 does suggest to the outside world that people can confidently build on top of this. But I guess the other thing I'd say is, I think there are two parts to it: there's the versioning of the component, of Cartographer itself, and there's also the APIs in there as well, so the versioning of those APIs.
F
But
you
said
there
are
other
like
other
projects
that
probably
are
still
on
alphas
or
betas
for
their
for
their
apis.
They
still
have.
They
follow
a
different
version
approach
to
the
project
itself,
so
I
also
think
there's
you
can
find
out
some
more
there's
some,
I
think,
there's
some
versioning
suggestions
as
well
from
from
a
vmware
point
of
view,
but
that
might
help
out
with
someone
that
gives
some
help.
Some
guidance
on
other
projects
in
the
open
source,
tanzan
work
that
their
approach
they're
taking.