From YouTube: Tanka Community Call 2020-12-01
A
Okay, so welcome to the December Tanka community call. I was thinking, just to start, it might be good if we could each say a sentence about what brings us here. Then we've just got a couple of topics to present, and then we can use the rest of the time for general Q&A.
A
So I can start: I'm Malcolm, part of the newly renamed platform squad at Grafana Labs. We make heavy use of Tanka, obviously, and I've had some dabblings with the Tanka code base itself; probably some of the simpler features are my responsibility.
B
Yeah, sure. I'm an SRE at Grafana Labs, joined a year ago. I am very happy with the Tanka tooling as it was there in the first place, and I started adding more features on top to make my life easier.
C
Hi, I'm David, coming from Ubisoft. You must know there's a Ubisoft expat working for you; right now I think he's working on Loki, I don't remember his name or the department. Anyway, I'm looking to help automate our deployments, and we've developed some small tooling around Tanka to prepare manifests and to ease the onboarding of people that are not very used to Kubernetes.
C
So
we
just
have
a
small
top
level
manifest
and
with
makefiles
and
tanka
we
create
environments.
That's
why
I'm
really
interested
with
the
inline
environments
feature.
So
we
create
the
sites
for
exporting
and
that's
it.
It's
a
it's
a
tooling
for
eighty
percent
of
the
people
coming
to
communities.
Would
they
just
want
to
have
their
service
set
up
and
not
having
too
much
to
fuss
around
with
the
communities.
D
I will, yeah. I'm Tim, an SRE at Yelp. For the past few years we've been running Prometheus on EC2 instances, and we're starting off a migration to Kubernetes using Prometheus and EKS.
D
We've also got an in-house platform-as-a-service that runs on Kubernetes as well, and we're starting to use Tanka to model our Prometheus shards. Similar to what David was saying, we're using a series of Makefiles and Tanka to allow teams to go and create themselves a Prometheus shard.
E
Yeah, maybe a couple of words about myself: I'm an SRE at an open source assessment platform, and we're currently transitioning from a plain Jsonnet stack to Tanka in order to ease our lives a bit.
F
Yeah, I just realized I'm double-booked between this and the TOC call for SIG Observability, so I'm going to have to drop out, unfortunately, because we're presenting some stuff. But my name is Matt; I'm from a company called EverQuote, and we're moving over to using Jsonnet and Tanka for most of our configuration. Sorry about that. I've created a GitHub repo of Tanka tools where we intend, over the next couple of weeks, to open source most of our configuration.
F
Just on Friday I finally got clearance from our general counsel to proceed with actually putting what we're doing out in the open. In short, we're running all of this on EKS in a secure way, with service accounts and role bindings and all of the annotation work you have to do on EKS when you can't use the default service account. So most of my changes to those various projects, which will come as PRs in the next week or two, stem largely from that.
A
So let's head over to Euron. Do you want to give your presentation?
B
There was a lot of discussion, because it's actually a major feature; you could do it differently, and it might have some drawbacks. There was discussion about convention, and whether it's convention or something Tanka should do for you, things like that.
B
Yeah, sorry about that, I got some noise from the blinds going down. I can show you a bit of how to convert a regular setup, with the main.jsonnet and spec.json, into one with an inline environment. I think showing an example is the best way to explain how it works.
B
If it will work with me... yeah, that's an internal one from a dev instance. There shouldn't be any secrets in here, but if so, look away.
B
This is the spec.json that goes with it: just the apiServer that you're used to, plus some labels that we add, as we also want Flux to deploy itself, which state it's in, and which team owns it.
B
This renders as a regular one: it ends up with a flux-cd block, with a lot of manifests for everything, as you would expect. Yeah, that's a really simple one. Let me show you how I would convert this.
B
It's really fast and dirty, but here we have pentagon. Pentagon comes from the downstream pentagon library that we already configured, and flux-cd, which is the downstream flux-cd library.
B
Yeah, that's the basics that you need, and it's done already. We can delete what's in here; we can delete the spec.json file. Let's do that.
B
That's that, which is the really simple part: just move it and add it. Are there any questions on this one? I've got a follow-up to make it more advanced.
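The conversion described above can be sketched like this; paths, names, and values are illustrative, not the ones from the demo:

```jsonnet
// Inline environment sketch: the contents of spec.json move into a
// tanka.dev Environment object inside main.jsonnet, and what used to be
// the whole top-level output moves under data:.
local pentagon = import 'pentagon/pentagon.libsonnet';  // assumed path
local fluxcd = import 'flux-cd/flux-cd.libsonnet';      // assumed path

{
  apiVersion: 'tanka.dev/v1alpha1',
  kind: 'Environment',
  metadata: { name: 'environments/dev' },
  spec: {
    apiServer: 'https://127.0.0.1:6443',
    namespace: 'flux-system',
  },
  data: pentagon + fluxcd,  // formerly the entire main.jsonnet output
}
```

With this in place the spec.json file can be deleted, as shown in the demo.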
A
I guess the obvious question is: yes, you moved it from one place to another, but why do it?
B
Why do it? Let me give an example, one that comes from the issues: you want to make this an input variable, because you run a local Kubernetes cluster to test things somewhere. I heard it, I think, from GitLab: there were some people running a local cluster to run tests, and they provision the API server with a different IP each time. So let's say you want to change that.
B
We have a few different systems; the naming here is cluster name dot namespace, which is a convention we use, and inside that we have a main.jsonnet, which would render to an environment. Actually, I made it really simple, and it looks like this: this is all that is left of an environment, all the rest...
B
I program with Jsonnet, so we have amso.json, which is a huge list of all the information I need about the clusters we have, and which you can use all over the place for every project, because you can render it with Terraform, for example, the tool that sets up your Kubernetes cluster. You can load that in from a file, and we loop over it.
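A minimal sketch of that loop, assuming a hypothetical clusters.json in which each entry carries name, namespace, and apiServer fields:

```jsonnet
// One inline environment per cluster, generated from machine-written data
// (e.g. rendered by Terraform alongside cluster provisioning).
local clusters = import 'clusters.json';  // hypothetical file

{
  // key follows the "clusterName.namespace" naming convention
  [c.name + '.' + c.namespace]: {
    apiVersion: 'tanka.dev/v1alpha1',
    kind: 'Environment',
    metadata: { name: c.name },
    spec: { apiServer: c.apiServer, namespace: c.namespace },
    data: (import 'app/main.libsonnet') { _config+: { cluster: c.name } },
  }
  for c in clusters
}
```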
B
There is also a function in here, but you can put that somewhere else in a library. So you have all environments for your project with the same configuration for environments, which is just here. It's the same thing as we saw before, only made a bit more specific for what it is, but general enough to be used by different things.
C
So, what's the data key for? I see that you're referencing the pentagon, the top-level pentagon object, but...
B
The data level is actually what would previously be in your main.jsonnet. Okay, so it moved one layer deeper; it's part of an object. The reason for that is not necessarily a Jsonnet one, but a Tanka one: in the development of Tanka it would be interesting to be able to pass along a complete object, including the spec information and the objects related to that spec, to another component within Tanka. So it's more of a...
A
So what you're saying is: in the same way as you might have a Deployment in Kubernetes, and that Deployment has containers within it, one thing that describes multiple things, what we have here is a single thing with a kind of Environment, and it contains everything about that environment, i.e. all of the resource definitions within it.
A
So that's why Euron moved it from the outside into the data element, and that does actually allow a number of interesting things that we'll mention in the next topic.
B
One of the questions that will probably come up, and as far as I know there's no answer yet, is: when will this be released? Well, the next version has it, but it's not planned yet when that comes out; for now it's on master. All right.
C
The Makefile system that we've set up in-house caters for this use case, only for injecting the server label, I think. We set up different directories for different environments, and we use a base template of a spec.json which we then fill in with Jsonnet. So I'm not yet seeing how we could do it, because we're generating for the user; it's nice to see the different environments in the folders already, but the templating could be replaced.
C
I think, with this feature... I'll try to play with it in the following weeks.
A
If there are no other questions we can keep going. Yep, okay. So, yes, I do in fact still have it. In the doc I'm going to paste a definition that we came up with when talking internally about what Tanka is; I can paste it into the chat here as well.
A
This is not so much a statement of what Tanka is right now, but more an attempt to capture our imaginings of where it might go. So: Tanka is a tool for managing Kubernetes workloads that also plays well with similar Jsonnet-based configuration tools. This means it structures its Golang code to maximize reuse by other tools, whilst entirely focusing on its own use case. It avoids opinions that block other tools from consuming the same Jsonnet code base.
A
So you'll start to get a feel for where we're going when you reflect back on the inline environments. At the moment, if you have anything in your Tanka Jsonnet output that is not a Kubernetes object, Tanka will complain, so whatever Jsonnet code you're using is going to be a Tanka code base. What we're moving towards here is, for example: we have a useful tool called Grizzly that consumes Jsonnet, but instead of talking to a Kubernetes endpoint, it talks to a Grafana endpoint. Actually it does Grafana, Grafana Synthetic Monitoring, and the Prometheus (or Cortex) ruler for uploading Prometheus rules, so it can talk to various endpoints.
A
So what we can now do is have one code base that can be consumed by multiple consumers, for want of a better term. In a way, that's where Tanka avoids opinions that block other tools from consuming the same Jsonnet code base. If everything is in an environment, and that environment is labeled with a Tanka apiVersion, then when Tanka comes across something that's not an environment, something with an apiVersion it doesn't recognize, it can just say: I don't know that, I'll ignore it. That way we can put another thing in there, maybe a Grizzly environment with a Grizzly apiVersion, and Grizzly can then pull in just the bits it's interested in and do whatever it wants with them. So that's the second part. The first part is: well, Tanka's got all of this Golang code to do all this stuff, to interact with Jsonnet and so on.
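A hypothetical illustration of that idea; the Grizzly apiVersion and kind here are invented for the sketch, not a real API:

```jsonnet
// One Jsonnet tree holding objects for two different consumers. Tanka
// would act on the tanka.dev Environment and ignore apiVersions it does
// not recognize; a tool like Grizzly could pick up the rest.
{
  workloads: {
    apiVersion: 'tanka.dev/v1alpha1',
    kind: 'Environment',
    metadata: { name: 'environments/dev' },
    spec: { apiServer: 'https://127.0.0.1:6443', namespace: 'monitoring' },
    data: { /* Kubernetes objects */ },
  },
  dashboards: {
    apiVersion: 'grizzly.example/v1alpha1',  // made-up apiVersion
    kind: 'Dashboard',
    spec: { /* Grafana dashboard JSON */ },
  },
}
```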
A
Why would a tool like Grizzly want to redo all of that? So Tanka could itself be increasingly structured, in its Golang code, such that other tools can reuse it; say you want to push something, I don't know, to Slack.
A
Prometheus itself does not support this. Basically it has its rules written into a file, so you need to find a way to get your rules in. If you generate your Prometheus rules from within Jsonnet, you're probably going to be writing them into a ConfigMap or a CRD or something like that.
A
However, if you use something like the open source Cortex ruler (if you're using Cortex, which has its own ruler), or you're using Grafana Labs' hosted Prometheus, or Grafana Metrics Enterprise, they have that functionality, which basically provides you with an API that you can use to push the rules up, and that's what Grizzly will write to.
B
I'm showing one more time what we had before. This doesn't work, it's just an example, but we could have something like this: a block here where we have this kind, a Grafana thing, for example. Let's say Tanka doesn't understand that, but Grizzly does. We could execute Grizzly on the same code base that Tanka uses, and then alongside the same flux-cd that we have below, that would provision Grafana dashboards.
B
That was why the data block is separate here.
A
Yeah, and what this allows is using other systems that run alongside Tanka. Potentially, you know, what if there was something that just used Ansible to do SSH-based stuff, or whatever else? This Jsonnet way of working doesn't need to be tied to Kubernetes per se.
B
We all have either YAML or Jsonnet doing things, because they are so interchangeable and easy to work with. So you can have one output moving into another: the same way that we use Helm now (and that's one of the next topics), we can also use Kustomize in the same way, because it translates to YAML or JSON, and you can import it, use it again and again, and port it back. You can modify something with Jsonnet, create the YAMLs, and give them back to Kustomize, for example, if someone builds a tool for that.
B
That's a sneak peek at the next topic, yeah. It was actually quite easy, because the same workflow applies: Kustomize renders YAML, and YAML can be imported and provided just like we already do, and that's the funny part.
B
But we want to deploy it in the GitOps fashion, right, because we're doing GitOps. So we were like: okay, but how does the CLI tool deploy that? It actually uses Kustomize under the hood. So why not use Kustomize directly inside Tanka, render the YAMLs, and let Flux v1 deploy Flux v2 for us? And we can do it component by component, because you have the source-controller, and you can deploy the source-controller first and nothing else.
B
Yeah, the nasty thing I find about Kustomize is that it's built on the premise that you have an internet connection. You can't just vendor everything in and render it locally; it needs to pull, and you can go as deep as Kustomize can go. There's practically no end, because the next Kustomize can import another Kustomize remotely, again and again. I think that's the nasty part, and I don't know if and how we are going to solve it.
A
Yes. So what you're saying is: at the moment, when you use the Helm feature, you vendor the whole Helm chart into your local repo, which means it's all, as it were, statically defined. With Kustomize it will be pulled in at runtime, which means you risk getting a different thing from two separate runs, which is something we'd rather avoid.
C
Yes,
what
we
do
is
we
try
to
to
have
a
strong
reproducibility
and
that
would
to
me
just
trigger
a
thought
of
maybe
not
going
to
that
route.
C
We want it to be locally executable, so you can evaluate the manifests, and the CI actor and the programmer are the same actor: the CI does the same thing the programmer would do locally. So it's really more reproducible. We even use Jsonnet to generate the GitLab CI, so we can use all the same configuration from Tanka in the same code base.
B
No,
no,
really
it's
it's
just
what
it
is
it
just
executes.
We
stand
on
the
shoulders
of
giants.
We
don't
do
much
magic
here.
It
just
executes
the
cli,
nothing
more
yeah.
A
It
can
be
consumed
by
other
tools
that
work
with
jsonic
code
bases,
but
it
can
also
consume
itself
other
tools,
such
as
helm
and
and
customizing.
You
know,
are
there
other
opportunities
like
that,
I'm
kind
of
quite
curious
what
those
might
be.
B
I made sure that the output of tk eval will always render, even if there is no Tanka environment there. I think in previous versions, when you ran tk eval and it didn't have a Tanka environment, like a spec.json or something, it wouldn't work; but now, at least in newer versions, it will. So you can run tk eval instead of running jsonnet.
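For example (the environment path is illustrative):

```sh
# Renders the Jsonnet output as JSON; per the change described above, in
# newer Tanka versions this works even without a spec.json / Tanka
# environment in the directory.
tk eval environments/default

# Roughly what you would otherwise do with the plain jsonnet CLI:
jsonnet -J lib -J vendor environments/default/main.jsonnet
```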
A
Good. I think we've come to the end of the content we've prepared, so at this point we open the floor to anyone who has any questions or anything you might want to ask, discuss, etc.
C
I have a question, if I can go first. As we're providing Jsonnet tooling to users, I find that sometimes the users are in the dark with the template framework that we're providing them.
C
So let's say we tell them: okay, all these objects will be created, and you can override them. I find in Jsonnet that it's hard to see what's coming from inside, what can be overridden. I've tried to use std.trace, and sometimes it doesn't work because it recurses on itself; you can't just call std.trace on the object itself. Do you have any debugging tricks for that?
C
Yes, I use it. Well, I've wrapped the render function, so we render the environment and it renders the JSON, but you don't have all the Jsonnet magic inside. So you don't see all the overrides, or the raw JSON that gets manifested, so you don't know what the keys could be. It's a tool I use to find out what could be overridden, but I would like to see what's in memory before manifestation; that would be nice.
A
For Kubernetes objects, tk eval -e basically does... let me type it here. If I do tk eval environments/foo -e something, what it evaluates is (import 'environments/foo/main.jsonnet').something. So even if main.jsonnet has an element called something that is hidden, not visible, and therefore does not end up in the output, by executing an import statement like that you can start to extract and explore the hidden elements within the tree.
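The command shape being typed out above, with foo and something as placeholders:

```sh
# Evaluates (import 'environments/foo/main.jsonnet').something, which also
# reaches fields that are hidden (declared with ::) and therefore absent
# from the normal rendered output.
tk eval environments/foo -e something
```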
B
At first it actually evaluated everything, hidden or not; okay, it still renders everything. What you maybe find harder is just to...
B
Let's
say
when
looking
at
chase.
Not,
I
always
imagine
the
tree
of
what
comes
out
of
it.
So
I
start
looking
in
the
tree
and
don't
look
at
it
as
jason
that
I
look
at
it
as
jason,
which
allows
me
to
find
things
fairly,
quick,
but
yeah,
certainly
looking
at
that,
as
if
it
is
chase
not
as
a
programmer
becomes
increasingly
harder
to
to
find
back.
What
caused?
What
one
thing
that
I
I
noticed,
someone
mentioning
in
our
chat,
one
of
my
teammates
danny
used
jq
and
the
part
to
inspect
it.
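One way that jq trick might look; the exact invocation Danny used wasn't shown, so this is a guess:

```sh
# List every path in the rendered tree whose leaf key is "image", so you
# can see where a value ends up without reading the code as Jsonnet.
tk eval environments/default | jq -c 'paths | select(.[-1] == "image")'
```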
C
Yeah. So do you use IDE integrations? I know the Heptio one is out of sync; it's stopped working, it's not easy. Do you know anybody that's working on bringing that back to life?
A
I don't know. Matt, who briefly joined and then left, was suggesting he might have a look, but still, someone who does that would make themselves very...
E
Popular. I'm actually working on something like that, if you're interested. I do not have anything public yet, but basically it's based on [unclear]; for now it's an autocomplete-only plugin, but it works, kind of. I still have some bugs to fix, but when it's done I will probably publish it.
A
I think one of the things I will just say today, and I don't know if you're aware of this particular perspective, concerns what a lot of Jsonnet code does.
A
You
know
an
a
a
way
that
jsonic
was
used
and
has
been
used
for
for
a
few
years
is,
you
can
put,
you
can
add
all
of
your
libraries
to
the
global
namespace
and
all
of
your
libraries.
Can
then
cherry
pick
what
they,
what
bits
of
the
entire
j
sonic
tree
they
adapt.
A
So, for example, you install a Grafana instance, and the Grafana instance says: oh, I want to be monitored by Prometheus; I will go and add some scrape configs to Prometheus in order to get myself monitored. That assumes you've also imported a Prometheus, and that you happen to know the exact path and exactly how to tweak it. It is extremely powerful, but you also end up with extremely tightly knit code.
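A tiny sketch of that global-namespace style (library and field names invented): the grafana mixin reaches over and patches a prometheus object it merely assumes is mixed into the same top level.

```jsonnet
// grafana.libsonnet: a mixin that assumes a sibling `prometheus` exists
// at the top level and patches its scrape configs. Powerful, but the two
// libraries are now tightly knit through the shared global object.
{
  grafana: { /* Deployment, Service, ... */ },

  // Reaches into a sibling this library did not define:
  prometheus+: {
    config+: {
      scrape_configs+: [{
        job_name: 'grafana',
        static_configs: [{ targets: ['grafana.monitoring.svc:3000'] }],
      }],
    },
  },
}
```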
A
Often it's that kind of thing, I think: using globals. It's something that in most programming languages, probably apart from BASIC, would be strongly discouraged. So that's something we've learned about how we write our Jsonnet, to actually think about.
C
Yeah. So in our case, we say: okay, you have the main.jsonnet in Tanka; override what you need to override. We have only one site, so that promotes this one-file organization. So I guess we now need to propose to the user entry points to different customization points, so they can provide localized overrides. Yeah.
A
So main.jsonnet should say what I want. I mean, you could say: I want a MySQL, and I want an nginx with PHP with WordPress installed, give me these three things. But then actually I could make a library that creates those three things, and then just say: give me a WordPress. So all I say is wordpress.new(name of the site), and then I hand it a config which says: what's the domain name, what's the size of the disk it's going to use.
A
Then it returns you a blob of JSON, and then maybe you can say plus wordpress.something, and have other functions or other elements that patch and adapt that behavior to add new functionality as you go. Then your main.jsonnet becomes much more descriptive of intent and much less about implementation detail, encapsulating the chunks of functionality as we would in any other programming language, where we would write functions and encapsulating classes and all that stuff.
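A sketch of that intent-level API; wordpress.new, withBackups, and the config fields are all hypothetical:

```jsonnet
local wordpress = {
  // Everything needed for a WordPress site, behind one constructor.
  new(name, config):: {
    mysql: { /* StatefulSet + Service */ },
    web: { /* nginx + PHP + WordPress Deployment */ },
    ingress: { host: config.domain },
    storage: { size: config.diskSize },
  },
  // A patch that adapts behavior after the fact.
  withBackups():: { backup: { /* CronJob */ } },
};

// main.jsonnet stays descriptive of intent, not of implementation:
wordpress.new('blog', { domain: 'blog.example.com', diskSize: '10Gi' })
+ wordpress.withBackups()
```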
D
Yeah, okay. So I mentioned that we're using EKS and we're using Tanka to apply to that, fine. We've also got our in-house Kubernetes cluster, and one of the pieces of work being done at the moment is to change its authentication so that we use Okta, which is an SSO provider. That means you have to run kubectl with the --token option now, and Tanka just runs kubectl directly.
D
So there's a --token option when you run kubectl. Okay, you'd be running kubectl diff --token and then passing the token that you got back from Okta. So it might depend on whether that is supported as an environment variable, passing it as an environment variable instead, which is something I can look at; but otherwise it might have to be a change to the way Tanka calls kubectl.
A
Yeah, I mean, I think your previous comment is also absolutely valid. If that can be justified as a way of interacting with Kubernetes via kubectl, then: if you're versed enough in Golang, a PR; if you're not, an issue.
A
Just to say, remember it says: when using kubectl, use your ID token with the --token flag, or add it directly to your kubeconfig.
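The two variants that quote describes, sketched with kubectl; the token variable and user name are placeholders:

```sh
# Per-invocation: pass the Okta ID token explicitly.
kubectl diff --token="$ID_TOKEN" -f manifests.yaml

# Or persist it in your kubeconfig, so tools that shell out to kubectl
# (like Tanka) pick it up without extra flags:
kubectl config set-credentials okta-user --token="$ID_TOKEN"
```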
A
Yeah: aws-iam-authenticator, or something like that.
B
Yeah, I can fit one more demo in before time runs out. We have this thing where everything we import is added to the global scope.
B
With everything that's here, except by looking at the naming, you actually don't know who consumes what, so you have to take a wild guess based on naming. It seems obvious that the autoroute config is used by the autoroute library, and the pentagon mappings by the pentagon library, and that the gcr name possibly comes from the gcr library; but then how, or what, and why is it all meshed up together?
B
It becomes really hard; you totally don't know who's using what. So what I usually do, to write it differently, is encapsulate that in a root something, and start pulling things out. I know that pentagon and gcr belong together, because we get the secret from there, and it then writes it from Vault into Kubernetes, and what it needs is at least the pentagon mappings.
B
So we end up with the pentagon mappings here. But pentagon also needs config for the namespace and the cluster name, and autoroute needs that too, so they both take the same config here. So I'm going to pull this out and add it here.
B
That's gone, so we take this config...
B
Underscore config: that's a lot cleaner already. Now the last thing here is that the gcr secret name comes from the pentagon block, so I add the pentagon here, and that's it. Now we know, for each piece of config that was previously there, who consumes it: that config is consumed by both, the other bits are specific to autoroute, and the Vault one is specific to pentagon. And this renders the exact same YAMLs in the end, no more global scope. Anyway, just showing.
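The shape of that refactor might be sketched like this; file names are guessed from the demo, and pentagon.secretName is a hypothetical field:

```jsonnet
// Instead of every import landing in one global scope, the shared config
// is pulled out once and handed explicitly to each consumer.
local config = { cluster: 'dev-01', namespace: 'flux-system' };

local pentagon = (import 'pentagon.libsonnet') { _config: config };
local autoroute = (import 'autoroute.libsonnet') { _config: config };

// gcr's dependency on the secret pentagon manages is now visible at the
// call site, not buried in a global lookup:
local gcr = (import 'gcr.libsonnet') { secretName: pentagon.secretName };

// Same rendered YAMLs as before, but every dependency is explicit.
pentagon + autoroute + gcr
```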
C
Yes, yes it is. I believe it all boils down to the experience you get in the language. The first Jsonnet I was writing was more spaghetti, and I was trying to improve; but now the problem is more how to provide an API in Jsonnet without bleeding too much plain Jsonnet into the user's hands, and that's the hard part. Because we're really generic tooling: if you have a Dockerfile, we don't ask any questions, we just build the image.
C
We make a deployment for you with all the Tanka tooling, and that's my part, I maintain that. So it's really easy for the users: you just say, okay, spin up the CI; I don't want to talk to Kubernetes, but I want to talk to my service. So it works, but some users need some small modification. This interface part between the user and the Jsonnet is what's harder. I would say: just do Jsonnet; but they don't want to have to learn too much Jsonnet. So I...
A
You know, there's the beginner level, where all you do is call APIs: this.new, plus this.withFeature(feature), and so on; you just use it as a kind of domain-specific language to call things that you copy and paste from places where other people did it. Then there are other people who actually write libraries that do things more cleverly. There's a demo, I think from the previous Tanka community call, of, you know, library consumer, library creator, and mixin-library creator: the three sorts of levels of Jsonnet skill.
C
I think Tanka, and the overall Grafana ecosystem for Jsonnet, is a really good inspiration for organizing code. I've read a lot of it, so yeah, good.
A
We've now hit our one-hour limit, so, unless anyone's got anything else to add or ask...
B
Thank you all for joining. You can always come and bother us on Slack; the Tanka channel is somewhere we hang out. Yep.