From YouTube: Tanka Community Call 2020-06-02
A: So, a bit of announcement time: Tanka now has its own first logo. It's the little ship thingy you can see on the updated tanka.dev website. Big shout-out to Song, our internal designer, who did it for us; I think it really looks nice. Tanka now also has a governance, which RichiH will talk about.
C: That's very nice. You mentioned the new logo, but I was just on the website a couple of minutes ago and I realized the website is also new.
B: No, no. I talked to Gina; she will actually sort that out. The website, we have this web shop thingy, and stickers and everything, so we'll...
B
What's
the
beauty
of
the
government
of
the
community
called
as
long
as
people
are
having
fun
and
as
long
as
it's
somewhat
useful?
That's
that's
all
on
topic.
So
governance
tanker
has
a
governance.
Now
it's
modeled
after
the
grafana
governance,
which
itself
is
modeled
after
the
previous
governance,
so
some
of
you
might
be
familiar
with
it
by
and
large
the
how
it
works
is
once
you're
you're,
actually
team.
Member.
B: ...there is rough consensus, modeled on the IETF rough consensus model, but without the chair, of course. The assumption is that everyone will be working together quite nicely, but you also have a system of majority voting in case no consensus can be reached. And that's about it for the...
D: I was just going to say a little bit more about the governance. From my perspective, in most cases things work: people submit stuff and it just works. Governance just covers those cases where it might not work quite as smoothly as you might hope. And how do we know, when someone says "hey, I want to help"...
D: ...how do we know when the point is to say "come on in, become a full team member", and what the process is through which that happens? That's what governance really makes clear. Once you've got it all agreed, there's no argument about it, because we all know and have agreed on a common way of working; it just makes working as a community much more straightforward.
B: And entering it from the outside: there is also the obvious stuff, like the code of conduct and such, to show that we are not horrible, but also to give more or less a guideline of "hey, I want to actually join that bunch of people, because I care about the software they are writing."
A: Right, awesome. So, now, upcoming features. The first thing we can kind of showcase, or at least announce, is that we've been working on a replacement for the long-deprecated ksonnet Kubernetes library. A technology preview of the new thing is located at the link I just posted. Please keep in mind it's definitely a technology preview: I don't think it has been used by anybody in production yet, or even used by anybody at all. We don't even use it internally yet, but we plan on converting our whole code base to it as soon as possible.
A: The new library is again a Go-based generator that takes the Swagger JSON from the Kubernetes cluster and transforms everything it can find in there into Jsonnet boilerplate code, so you don't need to write that yourself anymore. Also, because it uses a slightly different design, it is much more performant.
C: Something worth pointing out, I think, looking at the page you just posted, is the different versioning of the generated Kubernetes libraries, because that was something we discussed during FOSDEM, right? Like, what does ksonnet.beta.3 or ksonnet.beta.4 even mean? So, essentially, I think the idea going forward is that we want to have an up-to-date generated version per Kubernetes version, so that you can literally just pick "I want to use the Jsonnet Kubernetes library for Kubernetes 1.15", and it will kind of stay the same. We might even (correct me if I'm wrong) update it under the hood, but the API from the outside should stay the same; so with an improved generator we might even be able to improve the performance under the hood.
D: So, for example, the chaining container.new().withPorts().withSomethingElse() is the thing you don't get. Instead, you say container.new + container.withPorts + and so on. It's a very small shift in how you write your code, but you will have to do some adjustments.
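As a sketch of that shift (the import path and helper names here are assumptions, not the library's confirmed API):

```jsonnet
// Assumed import path; the real one depends on the library version.
local container = (import 'k.libsonnet').core.v1.container;

{
  // Old ksonnet-lib style: chained method calls on the constructor.
  old: container.new('app', 'nginx:1.19')
       .withPorts([{ containerPort: 80 }]),

  // New library style: plain mixin objects combined with `+`.
  new: container.new('app', 'nginx:1.19')
       + container.withPorts([{ containerPort: 80 }]),
}
```

The `+` form composes plain objects instead of relying on generated method chains, which is what makes the patching described below possible.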
A: At the moment it's mostly purely generated, but already in the old ksonnet library there were custom-written constructors: all of the new() functions were actually made by hand. The new library has a pretty sophisticated patching system, where the final API outcome is based on a combination of generated resources and custom-written library parts. So we can also think about moving some of the improvements we have made in our ksonnet-util extension into the main library, to provide the most simple API for people to use.
C: Yeah, I think that would be great because, looking back at ksonnet, in many ways it often didn't even reduce the amount of code you had to write, right? I just recently went more in the direction of writing pure Jsonnet objects for the Kubernetes objects, and it's crazy how much less code it is. So if we can really try to improve the experience there, that would be helpful: make the 80% case really easy, with the helper functions, but then still be able to merge things on top, which is the main benefit of using Jsonnet.
A: Yes, exactly. So I think we should aim for a good combination of simple-to-write code and a bit of these more intelligent or magic parts. For example, what I really like in the ksonnet-util library is the serviceFor helper: you can just pass in a deployment, and it automatically extracts everything the service requires and creates the service for you, without you having to write it by hand. If we can find a good set of similar helpers, that might be really nice.
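A minimal sketch of that pattern; the import path and helper name are recalled from memory and may differ from the actual ksonnet-util library:

```jsonnet
// Assumed import path; check grafana/jsonnet-libs for the real one.
local util = import 'ksonnet-util/util.libsonnet';

{
  deployment: { /* an apps/v1 Deployment with named container ports */ },

  // serviceFor inspects the Deployment's labels and container ports and
  // emits a matching Service, so no Service spec is written by hand.
  service: util.serviceFor(self.deployment),
}
```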
C: Just my two cents, looking at CUE: what is it, Configure, Unify, Execute?
C: So CUE kind of takes a different approach, right, where you more or less write high-level objects and then say "I want to expose these ports", and then it actually takes those higher-level objects and puts the ports into a Kubernetes Service, and the same ports into a Deployment.
C: Those are all discussions we should probably have going forward, but the reason why I'm kind of opposed to having such a magic helper function is that it really hides the complexity from people, but in a bad way. We can still have something where we...
C: ...specifically say we want to expose these things; but if we just pass an object of type Deployment into some helper function, no one really knows what's going on, I think, and that's kind of a bad move, I would say. But yeah, let's have those discussions going forward.
A: Okay. So, I don't know if many people realized it yet, but we did create a new organization on GitHub called jsonnet-libs, which is focused around building libraries for Jsonnet, and also around enhancing the Jsonnet library ecosystem, also in terms of documentation generation and everything around these things.
C: Yeah, I mean, it's really cool. The one concern I kind of have right now is that we already had the kubernetes-jsonnet, or kube-jsonnet, org, or something like that; the concern being that we should clean things up before creating yet another org.
C: Yeah, that would be really cool. I don't know if you saw it: Pavel, from the monitoring team at Red Hat (he works with us), reached out; he kind of created a monitoring mixins website. Essentially it shows you the alerts, essentially, for all the mixins that are there.
C: He generates a default set of rules and alerts (I don't know about dashboards). And I think, because we have the advantage of both the capability to generate Kubernetes objects with Jsonnet, and additionally to generate Prometheus-ecosystem-related things with Jsonnet, that's kind of a bonus point we have here over Helm. So I think we should try to work together to basically have kind of a package system where we show...
D: And my bit of this is total dreaming, but my dream there is just a place for anyone who has Jsonnet-related code, whatever it's for; it could be for deploying to my car, for all that matters. No, but it's Jsonnet: do I need to distinguish between Jsonnet and Tanka and kubecfg and Kubernetes and monitoring mixins, or all these other Grafana-based things, or not? Actually, we could just have one Jsonnet thing, with tags.
D: So I can go into the bit that I get to speak about. In that regard, if we're talking about possibly having this space, if we imagine this wonderful Arch-, apt- or npm-like thing where we can go and find libraries: in a way, the easy thing is writing the tool.
D: The hard thing is getting together the list of libraries. So what would be really good would be for us to start talking about "hey, I've got this library", "oh, I've got this library", and just start stimulating the conversation around what resources are already out there, created by any of the Jsonnet communities.
D: So it may be that kubecfg, for example, has some library that does something incredibly useful and that works for us. A bit like Cortex and Thanos, who first were in competition and now have committers in common and share code bases and things: that's what you can do in open source, and it would be wonderful if we could start that conversation about how we share what Jsonnet code is out there.
C: Essentially we have a couple of places (okay, never mind, that repo is empty), but we have a couple of places where we kind of just need to go in and collect them, and talk to the people who have already started collecting a bunch of things, like my colleague Pavel, who created the monitoring mixins website.
C: So I think, at this point it's mostly... yeah, exactly, there's awesome-libsonnet; I think I once created awesome-jsonnet. So there are a handful of projects that try to do this right now, some more up to date than others. But the real hard thing is talking to people, trying to come together, and trying to figure out how to really have this central location.
C: Yep. I think, from my perspective, we should try to consolidate the lists and repos we have: awesome-jsonnet, awesome-ksonnet, also awesome-something, then the monitoring mixins site, then the things you're working on. Consolidate all that, and then basically, from my perspective, let's have a designated GitHub repo and really just have people open issues and say "hey, I worked on this, take a look."
C: Yeah, sure, let's do it one way or another. And I think we should talk to Pavel and then try to already get the monitoring mixins into that as well, so we can kind of tell the big story.
C: I pasted the website in the...
F: Maybe one note; it's not really a question. I can also maybe turn on my camera, then you can see me. Yeah. One thing:
F: I also checked out the mixins website Matthias posted, and I think it would really benefit if we not only create a list of those but also, yeah, some guidance on how to use them, because often those collections of libraries are exactly that, a collection, just a list. Some more advanced examples of how to use several of those libraries in one project in Tanka, for example, would be really helpful to get started if you are new to the ecosystem.
A: Right, I think that's a bit similar to something I discussed with Malcolm internally: a website where you can search for all of the libraries you want to use with Kubernetes, and which then has something similar to an install button that just shows you a Jsonnet snippet you can integrate into your code base, and that gives you exactly that library.
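Hypothetically, such an install button could print a jsonnet-bundler command plus an import line; the library name below is invented for illustration:

```jsonnet
// Step 1 (shell): jb install github.com/example/foo-libsonnet
// Step 2: paste this into your environment's main.jsonnet:
local foo = import 'github.com/example/foo-libsonnet/main.libsonnet';

{
  foo: foo.new(name='demo'),
}
```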
D: Mixins don't do anything on their own; they just shove something into your Jsonnet output that then is ignored, because, you know, they are designed to be consumed by another library.
D: So you have a library, and you mix those mixins into your library. The one we use is prometheus-ksonnet: unfortunately named, but a brilliant piece of code. You use the prometheus-ksonnet library, you then add your mixin on top of that, and it renders the Grafana dashboards, it creates config maps to store the dashboards in, it uploads those to Grafana for you, it handles all of your Prometheus configuration, and it installs Prometheus.
D: So all of the stuff inside the mixin ends up deployed into your Kubernetes cluster, because that library did the work.
D: The Grizzly tool is just for pushing the dashboards to Grafana, and is intended just to make that initial stage easier: having to deploy to Kubernetes for every change you want to make to a dashboard is just a really slow thing, because it has to do a diff across your entire cluster namespace. So Grizzly just pushes straight to a Grafana instance, for development purposes.
C: One other thing, on a similar note, that I've been discussing with Pavel for the monitoring mixins project is actually something I did for the SLO library I wrote, the slo-libsonnet project: essentially, you have a web UI where you can put in selectors for the job, or for the Prometheus you're running, just as key-value pairs, and then it takes all of that and essentially runs the Jsonnet generation on the server side.
C: On the back end, that is, and it just gives you back the YAML you need to copy, paste and deploy. So essentially what we've been discussing, as a stop-gap towards something more advanced in the future, is that we could already do something similar for the mixins, for example: give us a name, give us the namespace you want to deploy these things to, give us some name for the instance of, I don't know, the Jaeger you want to deploy, or something like that. And then, essentially, with the click of a button and the two seconds of generating Jsonnet on the server side, we print YAML to the website, so that the user essentially just has to copy-paste things. They can copy-paste it into a GitOps kind of workflow going forward: they can choose to just deploy it to the cluster as a one-off test and be done with it, or put it into GitOps and take it from there. And I think something like that would certainly be possible not only for the mixins but even for ksonnet-style, Kubernetes-related deployments.
C: So essentially we could generate the entire kube-prometheus project, for example, which comes with Kubernetes objects and all the rules and alerts and dashboards. It depends on who is hosting that, though; it might be quite resource-intense to run a bunch of generations all the time.
C: It's more like you register your Jsonnet library with the project, with the website, and you say: you can configure a namespace...
C: ...you can configure these various things; and then from that we build a configuration object on the server side and put it into the configuration that needs to be run. Essentially that takes away the burden of installing Jsonnet, running jsonnet-bundler and all of these things, at least for people who just want a one-off thing. For more advanced users, obviously, something like Tanka is the way to go.
A: I think that aligns quite well with something similar we do on the grafana-agent repository, where we concatenate the entire Tanka output and put some envsubst variables in there, so that people can just run a single command on their machine. If we could turn that into a slightly more sophisticated service, where we tell the server which configuration variables are available and where to put them into the final Jsonnet, and then just evaluate that, we could basically provide the Kubernetes YAML service for all applications which have Jsonnet or Tanka manifests.
F: Matthias sent me the link and asked if I wanted to join, and I thought: yeah, why not? I can check in on what's going on with Tanka. I don't use it anywhere in production; I use it for some private experiments, where I currently have most stuff deployed with Helm, and it's super annoying to manage the monitoring stack with it. So I started to look more into Tanka and Jsonnet in general; but that's my private free time. At work we use Jsonnet, but not with Tanka so far.

F: We don't use it a lot there, then.
F: Yes, it's using ksonnet; it was originally deployed when ksonnet was still maintained, and it is used for the monitoring stack as well. I'm not sure if it uses kube-prometheus or not.
C: That's what you're using; we discussed it a couple of months ago, so you're basically using kube-prometheus. I talked to Tom about this during FOSDEM: the problem kind of is that kube-prometheus itself is more of a library, and Tanka's use case and workflow doesn't really fit kube-prometheus.
C: So I think the way going forward, what would be helpful for people, would be to have an example, outside of kube-prometheus, of how to use kube-prometheus if they want to deploy it with Tanka, because it's kind of just another library that you consume with Tanka, yeah.
A: Exactly: kube-prometheus itself is just a library, like prometheus-ksonnet is, or all of the other things. It's one of the pieces that can be consumed by Tanka, but it itself should not really have any connection to Tanka. One thing I've noticed with kube-prometheus, where it does not play too well with Tanka's assumptions, is that it outputs these Kubernetes List objects instead of returning the plain resources; this kind of sets Tanka off at the moment.
A: We could work around that, but I kind of would like to hear the idea behind wrapping all of these into Lists, instead of just outputting YAML streams.
C: I'm not a hundred percent sure; I think you're talking about the example Jsonnet file that we have in our project, right? I think so.
C: Yeah, that is just an example. You can basically copy and modify the example Jsonnet to your needs, and if you need to change things in there, it's totally fine. Where it would become a problem is if you really needed to change anything inside the jsonnet/kube-prometheus subfolder; then we should really have a conversation about that. But those should basically be: per component we have an object, and then within those objects we have...
C
We
have
like
another
object
per
kubernetes
object
that
we
that
we
create.
So,
for
example,
we
have
like
the
prometheus
adapter
is
like
a
top-level
project
object
and
then
inside
of
that
one
we
have
a
deployment,
object,
a
service
object
and
so
and
so
forth.
So
yeah
like
it's
just
an
example:
jsonnet,
so
don't
take
it
too
seriously,
and
I
think
we
can.
We
can
work
with
something
that
should
work
for
tank
as
well.
A
I'll
take
a
look
there
and
let
you
know
what
would
be
required
sure.
Actually
I
just
noticed
I
missed
something
in
the
agenda
I
wanted
to
talk
about,
which
is
this
project.
A: And I do admit it's probably not the easiest syntax, and a lot could be done to improve it, but it at least proves to be working, and it gives functional libraries, like the k8s-alpha one, a generator which can output Markdown documentation, which does work well. But I would like everybody to consider how we could turn that into something more general for all Jsonnet libraries.
B: Are you planning to also make it a part of Tanka, and then potentially enable saving of runbooks and such, and also maybe even jumping over to mixins, so you have one consistent thing of dashboard and runbook and alert and configuration?
A: Also, the project is kind of twofold; as already explained in its readme, it is a data model and a Markdown generator. What we extract from the Jsonnet is the packages and the function definitions.
C: I really like the idea, but I think it needs a bit more thought; I'm not even sure if we should move it to the org just yet, just my two cents. So what I would propose going forward is to get other folks into the conversation, because so far I've only heard about it here just now: people who work on jsonnet-bundler, on the different kinds of libraries and stuff. And then, yeah, maybe even write up a design doc around this, because what we've so far always been discussing, at least for runbooks (and those might be a special case, though), is to basically have a hidden object next to the alert definitions.
C: So we wouldn't even put the runbooks into comments. But I do still see the value of generating such a doc, that's for sure, so I'm definitely happy to discuss and to try to come up with something going forward.
A: Also, speaking of comments: the reason why it's not using comments here is that I targeted the Kubernetes library, where we override the final documentation outcome in these custom handwritten patch files. And we can't even extract comments as such: you would need a parser, and that just does not really work too well for this. Also, we would never be able to do the sophisticated merging with comments; so we kind of hijacked the Jsonnet engine itself to merge the documentation for us as well.
B: I think there is inherent value in having configuration and documentation live together, because this means, if you do updates, you can't really... I mean, you can still forget it, or you can ignore it, but it's a lot harder to forget, and you can't conceivably argue that you didn't ignore it on purpose; it's right there.
B: So you can't forget, and I think this has value in and of itself, if it's done in a way where it's modularized, but where, if you do use it, it's just part of the complete package and behaves natively, if that makes sense. You don't force people to use it, but if it's being used, and used as a part of the thing, then whatever is the right thing should just happen. Do-what-I-mean, kind of.
D: I was just going to say: once you've got the documentation traveling alongside the code in some format, and you put those into your repository, into the awesome-jsonnet type thing we've talked about, all of a sudden you've got your free documentation site as well, for any library that goes in there. You've got godoc, which would be a blow-away, really, compared to what the situation has been.
D: I mean, my only question, looking at the doc, is: is that the most concise and clean way of doing it? It's a real shame you can't have the hash without the quotes.
D: For example, you know, it's just things like that that make it seem a bit... When I first looked at it, it did take me a little bit of a while to work out what you were actually saying. People are used to comment-based...
D: ...documentation. So there is a bit of cognitive overhead here, for people to get their heads around what they're doing and why, and that's the problem I have with it. Not that... I think the idea is great; the power of what you're suggesting is great. But having to quote your element names just kind of goes against something; it just adds an extra level of cognitive processing that I would really, really like to avoid.
A: Also, you basically need to write things twice: you do have to define the function, and then you need to define exactly the same information again, just for docsonnet. We might be able to improve the situation with sugaring: if you read the readme and scroll all the way down, in the second-to-last paragraph I had an idea of what a sugared syntax for that might look like, which probably is better, but I don't know if it's good already.
C: I mean, this is almost more like a Jsonnet language feature, so... we do know Dave Cunningham (I think that's his name), so we should probably just set up a call with him and discuss this, because this is not something we need to do ourselves; and maybe he has a lot more thoughts on this to begin with, right? We're coming at it more from the consumer perspective of Jsonnet, but it would be really great to talk to him and see what the language people think about this.
D: I would be curious to hear: who is maintaining it? Is he a Googler? Is he doing that in his own spare time? Is it actually in active use there?
C: Yeah, I think it's hard to tell, actually, but it is constantly being worked on, as far as I can tell. So yes, we should just talk to him anyway and see what he's up to; I haven't talked to him in about a year, so it might be worth doing and checking in anyway.
C: Okay, yeah. I mean, we can just talk to Dave, because I've talked to Dave previously, and then maybe he can point us to Stanisław, and we can just take it from there, with whoever Dave says is the right person to talk to; I just assume he knows who they are. So, yep.
C: Good. I will definitely try to get Pavel, and maybe Frederic, and a bunch of other people into this call next time, because I think a lot of the things we discussed here are very, very close to what the monitoring team uses on a daily basis for our production systems too; everything is Jsonnet-based in terms of configuration, and so on and so forth. So hopefully I can convince a few more people next time; it was all kind of on short notice this time, for me.