From YouTube: Tanka Community Call 2020-11-03
Description
No description was provided for this meeting.
B
Yeah, talking about inline environments, where I made two PRs: the original one, with a single inline environment. So you can edit the spec on the JSON level, which at least two issues were asking for.
A
Definitely, what you've written below here: yeah, tk env is basically broken at this point, because it would need to evaluate every single environment just to understand it. Yeah, there would be an option around it: if we say that this stanza needs to be at the top level, then we could just partially evaluate these files and try to check whether a file has a kind: Environment key, and then just say: yeah, this is one.
C
Yeah, so I've been using it to deploy to our stage and production systems for, I don't know, a couple of months now. [Wow, cool.] And I've actually... so I ported a lot of, like, third-party Helm charts, you know, logging and whatnot, over to Jsonnet and was deploying with Tanka. I was using it only for our production configs, and now I've actually moved all of our end-to-end test environments and all of our production and stage deployments over to Jsonnet and Tanka.
C
So yeah, I'm fairly familiar with it, I would say. It's probably been... I think, yeah, since August, I guess, that I've been using it in production. So yeah, the inline...
C
...is definitely a great topic, because, yeah, especially for things like end-to-end tests I've had to do some kind of ugly hacks in order to, you know, deploy to arbitrary namespaces that I just created. So yeah, being able to just create a new environment on the fly and deploy to it would be fantastic.
C
Cool, yeah. So for our end-to-end test environment I create a new subdomain, a new namespace, you know, that is basically a randomly generated namespace. And in order to do that in Tanka right now, I basically have to create a new temporary directory, copy the stuff into it, make some tweaks, and then use that as the component directory to deploy from. It works, but it's just a little hacky.
B
Yeah, just to get you up to speed: we're discussing some implementation details, like how we are going to make that available. Go ahead, stone.
B
What about naming this alpha for now, or better...?
B
Then you can say... then that won't have a dk import.
B
It implies that what you have in an environment, in the main.jsonnet with a spec.json, there you tell exactly: this is the data, and this is your environment.
B
And then you have multiple things: not only an environment, but maybe also an object that executes Terraform. Then you would first execute the Terraform thing to create your cluster, and then use the environment to apply stuff to the cluster. Just thinking out loud here, right; I have no strict plans for that.
C
So what is... without this data, I guess: what is, Tom, your intended mechanism for including the main Jsonnet, you know, the top-level Jsonnet object?
C
I'm just curious to know, because if it's an inline environment, that implies that I should be able to say: you know, this file represents my environment, or this object represents my environment, and this file represents my main, when I just run Tanka, right? So that means I should be able to point to two different files anywhere on my file system and just execute Tanka against those, right?
A
Yeah, so the idea at hand for this pull request was: currently, the top-level value of the Jsonnet evaluation is considered by Tanka as some sort of collection, which it scans through to find any kind of Kubernetes object, right. And this sort of changes it a bit: at the top level, we would now strictly expect an object of this form. This is what was before spec.json, which would now have to be emitted by Jsonnet.
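As a rough sketch, the inline layout being described might look like this (values and the exact field layout are illustrative, based on the Environment schema that spec.json already uses):

```jsonnet
// Sketch of an inline environment: the top-level value is the
// Environment object itself (previously spec.json), and the Kubernetes
// manifests move under its `data` field.
{
  apiVersion: 'tanka.dev/v1alpha1',
  kind: 'Environment',
  metadata: { name: 'environments/demo' },
  spec: {
    apiServer: 'https://127.0.0.1:6443',  // illustrative values
    namespace: 'demo',
  },
  data: {
    // what previously lived at the top level of main.jsonnet
    some_deployment: {
      apiVersion: 'apps/v1',
      kind: 'Deployment',
      metadata: { name: 'demo' },
    },
  },
}
```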
C
Right. So again, if we leave out the data field there, what would be the alternative?
A
Well, if we left out the data field, then we could maybe treat it exactly the same way as it is at the moment: we scrape the big tree for whatever looks like a Kubernetes object, and because this also looks like a Kubernetes object, we would also find it while scanning, and would then just filter it out and read the configuration from there. So it would be on the same level, instead of enclosing the data.
C
I see, I see. Okay, so in some ways, I guess, this example here is kind of a mechanism for enforcing a hierarchy, where this has to sit on top of the object hierarchy, whereas in the alternative approach it can be found anywhere in the generated data.
C
Yeah, I guess the only advantage I could think of offhand is just in long-term maintenance: what would happen if, you know, some Jsonnet file got included and all of a sudden I ended up with two environment objects in this mess of objects, right?
C
So someone's lib includes another lib, which includes another lib, which includes some vendored thing, and then we end up with two environment objects in the resulting Jsonnet expansion.
C
Yeah, so I think even if there wasn't this data object, right (because that's really what this does: it implies that this becomes the entry point of the Tanka program that gets executed, right), so even if that data object wasn't required, I think we would probably end up adopting a very similar convention, just so it was very clear what the entry points were, right.
A
So I think what bothers me probably a bit with the current approach outlined by the PR is that it strictly requires that boilerplate. So there is no way around having this schema set up, while when you had the environment...
B
The thing I did in multiple environments (the second PR on this) actually allows for that: we can put the environment nested, so it doesn't have to be at the top level. We just filter it out. We only want one; if we find more than one, we can fail there, which is cleaner than what we're doing now.
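A hedged sketch of the nested variant being described, assuming Tanka scans the evaluated tree and requires exactly one Environment (all names illustrative):

```jsonnet
// The Environment does not have to be the top-level value; it is found
// while scanning, filtered out, and used as the deployment config.
// Finding more than one `kind: Environment` would be an error.
{
  monitoring: {
    environment: {
      apiVersion: 'tanka.dev/v1alpha1',
      kind: 'Environment',
      metadata: { name: 'environments/monitoring' },
      spec: { namespace: 'monitoring' },
    },
    // ordinary Kubernetes objects live alongside it
    config_map: {
      apiVersion: 'v1',
      kind: 'ConfigMap',
      metadata: { name: 'demo' },
    },
  },
}
```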
A
I think it's not... my concern is not about it not being at the top level, but rather that, at the moment, if you just took a single deployment and put it at the top level...
A
That
would
be
still
valid
and
will
still
execute
just
fine,
which
I
personally
found
quite
handy
when
trying
to
just
like
when
trying
to
trying
things
out
just
quickly,
while
with
this
I
at
least
always
would
need
to
include
that
which
I
think
might
block
some
use
cases
or
makes
me
type
that
more
often
and
if
it
was
just
another
element
in
the
top
level.
Nobody
like
what's
the
current
way
and
you.
C
But if there are multiple environment objects found in the Jsonnet rendering, then the data object becomes required, because now you have to specify what becomes the entry point for this particular deployment. So you could actually generate two deployments from one Tanka run, with two environments and two data blobs pointing at two different mains.
C
I'm kind of leaning towards that as well. My take is (and again, I'm just one opinion here) that one of the biggest, or maybe not the biggest, but one of the biggest hurdles for me in learning the Tanka approach was... so I have...
C
I have, you know, n clusters and m environments within each cluster, and my approach for implementing this, I guess, was to have the main, right...
C
So my intuition was to make what was main.jsonnet almost identical for every single cluster, at which point I'm just kind of copy-pasting the same component across a bunch of different environments, and then I would only differentiate by, like, a config that I would pass in, like top-level arguments. And in order to adopt the existing Tanka paradigm...
C
...I basically had to move all of that common code into lib and put my context into each main.jsonnet, and it was just, I don't know, a little bit... So I'm thinking: if I can move to a system where I don't have to have one directory for every combination of cluster and environment...
C
The way I would want to do that, I guess, is basically just as I described: I generate this environment object as part of this job. I'm maybe not explaining this too well, but what I want to try to avoid is having n times m environment objects in code when they're all, you know, 90% similar, and the only thing that changes is the API server and the namespace, right.
A
So yeah, I think this is super interesting, because this is a core problem of the spec.json approach. But how would you like to do it differently? Like, you could certainly create a main.jsonnet that generates all of that for you, the environment object, but how would you then take it, and how would you specify which environment you actually want to apply?
C
Yeah, I would use either extVars or a top-level-argument function, right. So with the function, I'd have one main.jsonnet that takes, you know, the variables of API server and namespace and cluster name and everything, and then I'd have, you know, n times m invocations of Tanka that change the variables each time, because obviously I don't want to deploy all of them every time; every time I deploy, I'm only interested in deploying to a single cluster, right. So yeah.
C
So I think, using this approach, I would just parameterize the entire deployment with three variables, and that would create this new environment on the fly.
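The parameterization described here could be sketched as a hypothetical main.jsonnet written as a top-level-argument function (all field names and values are illustrative, not from the PR):

```jsonnet
// One main.jsonnet shared by all clusters; the per-deployment
// differences arrive as top-level arguments, so no n*m directories.
function(apiServer, namespace, cluster) {
  apiVersion: 'tanka.dev/v1alpha1',
  kind: 'Environment',
  metadata: { name: 'environments/%s-%s' % [cluster, namespace] },
  spec: {
    apiServer: apiServer,
    namespace: namespace,
  },
  data: {
    // shared components, identical across clusters, go here
  },
}
```

This would then be invoked once per cluster/namespace combination, e.g. with something like `tk apply --tla-str namespace=demo ...`, assuming Tanka's top-level-argument flags.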
C
Yeah, so basically some other automation system would... yeah, that would be it. Okay.
A
But I might imagine somebody being a bit less careful there and then, for example, wanting to have a single option set on a single environment, or a single server or cluster, and then they would maybe use an if switch in Jsonnet to react to whatever the extVar value was set to, at which point we are at something I would very much like to discuss, because it becomes super unobvious what happens.
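For illustration, the pattern being warned about might look like this (a hypothetical branch on an extVar, making the deployed output depend on an external value that is invisible in the file itself):

```jsonnet
// What gets deployed now depends on how the evaluation was invoked,
// not on anything visible in the Jsonnet tree alone.
local target = std.extVar('cluster');  // illustrative variable name
{
  config_map: {
    apiVersion: 'v1',
    kind: 'ConfigMap',
    metadata: { name: 'settings' },
    data: if target == 'prod'
      then { replicas: '5' }
      else { replicas: '1' },
  },
}
```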
C
Right, no, that's understandable. I guess my take was just, you know, following the DRY principle: I wanted to modularize as much as possible. Also, in a lot of cases, literally the only difference between a lot of our deployments is the namespace, and so, yeah, certainly the structure it has now, when the only desire is to change the namespace of the target deployment, is definitely far too heavyweight.
C
So I guess, I don't know, thinking through that rather complicated use case, I guess it doesn't really change whether there's a required data blob or not. I don't think it would change how I would structure everything.
B
Yeah, it's something where we don't want to move too fast, because these are basic, like fundamental, things in how Tanka works.