From YouTube: Webinar: Introducing Alterant - A Transparent Way to Modify Kubernetes Configuration Files on the Fly
Description
A
Alright, it looks like our numbers have stabilized, so let's go ahead and get started. Welcome, everyone, to today's CNCF webinar, Introducing Alterant: A Transparent Way to Modify Kubernetes Configuration Files on the Fly. I'm Kaitlin Barnard, marketing manager at CNCF, and I'm going to be helping to moderate today's webinar. Once again, I'd like to thank everyone who's joining us today, as well as our presenter, Khash Sajadi, CEO at Cloud 66. Just a few housekeeping items before we get started: during the webinar you're not able to speak as an attendee, so if you have any questions during the webinar, please use the Q&A box at the bottom of your screen, and we'll get to as many of those as we can at the end. If you have any technical difficulties or questions about the webinar, there's also a chat that I'll be monitoring as well. The session is being recorded and will be sent out afterwards, along with a link to the presentation. With that, I'll hand over to Khash to kick off today's presentation.
B
Thank you very much. Hi everyone, thank you very much for taking the time to attend this quick webinar we have around Alterant, one of our open source projects, which we have sponsored and developed to help us migrate and move to Kubernetes in a safe and secure way. I'm very excited that we're going to share this with you. This is the first time we have publicized anything about Alterant; it's very much a new project that we've been using in production ourselves, but we haven't really talked about it.
B
We looked in the market to solve those problems, which led us to the development and sponsorship of Alterant, and after that I'll show you another couple of our open source projects that we've talked about more before and that are more popular (obviously, they've been around for longer), and then I'll be more than happy to take any questions you might have.
So, first of all, who am I? Just a quick introduction: Khash Sajadi, one of the co-founders of Cloud 66. At Cloud 66, my role is to take care of mostly business and product strategy. We started Cloud 66 back in 2012, with an aim to help developers build infrastructure in a way that is forward-looking and compatible with the cloud. So that's when we started; in 2014 we rolled out our products around containers, and 2015 was the Kubernetes year for us, when we started using Kubernetes and moved our SaaS product, which we were running on AWS EC2 (you know, old-style, quote-unquote, infrastructure), onto Kubernetes. And the project that I'm talking about today,
B
Alterant, is a result of the issues that we saw around this migration, primarily around the deployment of our own products onto Kubernetes. So what we had was a standalone (very standard, I should say) stack that was running on AWS. What we wanted was to streamline the process of deployment of this application, so we could deploy more than one environment as production. We wanted to be able to have the entire stack, end to end, for every git branch that we have.
B
We also wanted to have one environment per developer: anyone in the company, any developer, could have their own instance if they wanted to. So: have these ephemeral environments of the entire application, with databases, storage, everything, all the components, just come up, be usable, and then be folded away. You can imagine that's a really good way of showcasing new features, testing functionality, and running automated tests and CI/CD workloads. It is an ideal situation for a SaaS business to be in, so we can move faster and iterate faster.
B
That was the idea, the goal that we had, but we also had some requirements. First of all, we wanted to have as few clusters as possible. We knew that Kubernetes itself could really help us achieve that goal, by having each instance of this stack in, for example, a namespace; but we didn't want to have multiple clusters, manage lots of clusters, upgrade lots of clusters, and pay a lot of money for a lot of servers that would essentially be a redundant resource for us.
B
So what we wanted was the most cost-effective and fast way to have these ephemeral environments of everything we have in the system. That was the first requirement. The second thing we wanted was to remove what I call magic, and what I mean by that is: we wanted to take away anything that is not understandable, obvious and visible just by looking at it.
B
We didn't want our developers to commit code, roll something out, deploy something, and then some magic happens and things just work, but they don't know how it works. The reason we didn't want that magic is that when something doesn't work in a magical scenario, finding the issue is very difficult. When things work it's great, but when they don't it's a nightmare. So we wanted to remove that magic and make things simple and understandable, which would also help us onboard new members to the team.
B
So when a new hire, an engineer, joins the team, they can understand the system, as long as they have a fairly standard understanding of upstream Kubernetes and what the Kubernetes paradigms are, and they can get started very quickly. And the third objective that we had was what I call being able to use Stack Overflow. Now, we all know Stack Overflow.
B
What I mean by that, essentially, is that we wanted to have the vanilla, upstream, unaltered version of Kubernetes running on our infrastructure. We didn't want to modify it so that it does what we want, but then lock our developers out of using standard Kubernetes. A lot of the cases that we see in the market are based around that, whether it's because they want to control access to the cluster, or they want to make usage of the cluster safe and secure for everybody else.
B
They kind of lock it down and wrap it inside an API of their own, or add some magic that will make that cluster or infrastructure comply with what they want, but that cuts developers off from the big community of developers around them who could help them achieve their goals faster. So, with those requirements, we set out to find out how we could get there. Now, as you can imagine, and as you've used Kubernetes before, I'm sure,
B
you can see that if you have a fair number of services within your application, let's say 10 or 15 different microservices (whether they are necessarily microservices in the true sense of the word, or just different services that talk to each other), your Kubernetes manifest files can get quite big. An example of that: if you have 15 services, you probably have 15 deployments and 15 services next to them.
B
So that's quite a good amount of configuration YAML that you have. As well as that, you have namespaces, persistent volume declarations and claims; you might have ingress controllers. You have this bunch of different things and, depending on how you slice and dice it (how you manage your files, how you split them up, the naming conventions and all the best practices), you end up with quite a lot of files and quite a lot of configuration. And while that's not necessarily an issue, modifying them can be. An example of that:
B
If I'm running Kubernetes on Google, just as an example (which is something that we try on GKE), and you want to use Google's database as a service, Cloud SQL, then Google's recommended way of connecting to that database is to use a sidecar within your pod. Now, for those of you who are not familiar with sidecars, just quickly: as you know, in Kubernetes a pod can contain multiple individual containers, and a sidecar is one of them.
B
A sidecar is a container that you insert into a pod that then does something for the sibling containers. For example, it could be a log collector that collects the logs generated by the application and sends them to syslog or some other external log collection agent; or it could be an encryption-at-rest system that all the writes go through; or, in this case, the case of Cloud SQL and GKE, it is a proxy.
B
It is a proxy that sits between your application and the database and facilitates communication through connection pooling and authentication, the way Cloud SQL likes it. What it means is that if you have a single Cloud SQL instance on GKE, for example, and you have 15 services, you have to insert this configuration into every single one of those pod definitions, every time.
B
The second problem that we had was, as I said, that we wanted to have as small an infrastructure footprint as possible. We wanted to reduce it to just probably one or two clusters, and what we ended up with was beefier Kubernetes clusters. We had snowflake services, services that were only doing specific things and that had to be configured to do those things.
B
So then, as a result of that, we went out, technically, shopping, looking for solutions that we could use to solve those problems. The first thing we wanted to do was to fix this whole thing by automating the generation of the configuration files. Obviously, the first solution is just manual, which is what we started to do: make the changes manually. You go around every configuration file, every time something changes.
B
We had to apply it to all these configuration files, commit it into git, and keep our fingers crossed that we didn't make human errors and break things. That's not ideal, but it's the cheapest way to start with; it gets expensive very quickly when you make a mistake. The second solution, which is way more advanced than anything we would have been able to do at the beginning, was to do something using admission controllers. Again, if you're not familiar with admission controllers:
B
Think of them as a hook that sits on the Kubernetes cluster itself, intercepts the API communication between the client and the cluster, and makes changes to those API requests. An example application could be access control or, for example, the injection of a secondary container within a pod to facilitate communication with Cloud SQL, as we talked about in the GKE case. Now, admission controllers are great in terms of automation, but, technically, they add magic.
B
So what we did was decide to write a small tool, which then grew into something slightly bigger, and we open-sourced it and called it Alterant: an open source tool to modify Kubernetes configuration files using scripts. It's a fairly simple concept. You give it a Kubernetes configuration file and you write a script, written in JavaScript, and it runs that JavaScript in a safe and secure way to modify the file.
B
So it's very simple, which is what we wanted; we wanted something simple. All the details and all the intricacies of this tool are around making sure those scripts are written in a way that is simple and intuitive to understand, and also that these scripts can run in a safe and secure way on our build servers or our CI/CD servers, so that we don't end up with potential security risks if we let any external script come in and run.
B
So, with all that context, we're going to jump right into a demo; I'm going to show you three demos. The first one is a very simple one: the basic thing that I want to do is just automatically add an annotation to all the services in my configuration files. That's a fairly simple one, so I'm going to share my screen with you.
B
Let's say this is one of our best practices; let's say this is what I want to do, just to make sure that all services are stamped with the date that they were deployed. Now, this is obviously a simple example, but deliberately simple, so we can focus on the script instead of the purpose.
B
If the kind is Service, then for that item, under metadata.annotations, add an entry (a Cloud 66 deployed-at stamp) with the date. Now, it's very simple; JavaScript is a very popular language, and a lot of people know it, and if you don't, as long as you can code you'll be able to figure this out. The only specific thing here is this dollar sign: it's a shorthand that we created, which means, when Alterant runs,
B
It goes through every one of these pieces of YAML, each one of these sections of YAML (the first one, the second one and the third one), and reads the entire value into this dollar sign variable, which we borrowed from the jQuery syntax, and then gives you access to that, so you can inspect and modify it. The other tiny thing around this is that any change made here is just persisted back into the object.
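The script the demo walks through can be sketched in plain JavaScript. To be clear, this is a minimal, self-contained illustration of the idea, not Alterant's actual API or the real annotation key used in the demo; the function name, the `deployed-at` key, and the manifest objects are hypothetical.

```javascript
// Hypothetical sketch (plain Node.js), not Alterant's real API:
// stamp every Service document in a manifest with a deployed-at annotation.
function addDeployStamp(docs, now = new Date()) {
  for (const doc of docs) {
    if (doc && doc.kind === "Service") {
      doc.metadata = doc.metadata || {};
      doc.metadata.annotations = doc.metadata.annotations || {};
      doc.metadata.annotations["deployed-at"] = now.toISOString();
    }
  }
  return docs;
}

// Only the Service is touched; the Deployment passes through unchanged.
const manifest = [
  { kind: "Service", metadata: { name: "web" } },
  { kind: "Deployment", metadata: { name: "web" } },
];
addDeployStamp(manifest, new Date("2020-01-01T00:00:00Z"));
console.log(manifest[0].metadata.annotations["deployed-at"]); // 2020-01-01T00:00:00.000Z
```

Re-running with a fresh `Date` changes only the stamp, which matches the behavior shown later in the demo.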
B
If I can get the... ah, sorry, I forgot the command: obviously, the Alterant modify command. Right. So, as you can see here, you saw the file that we had at the beginning, the sample YAML. Here we have the sample YAML again; I'm just dumping it out into standard output. You can add an out flag, which will just write it to a file instead. But here nothing's changed, except for this case, where we just added the annotation that we wanted to this file.
B
If I run it again, this number is going to change, because that's the current date. That's a fairly simple usage of this, a very simple one. Now, Alterant can consume YAML and it can consume JSON as well, and output YAML or JSON as well. So if you write your configuration files in JSON, you can use it with JSON. Or, alternatively, what you can do:
B
You can use kubectl to pull the configuration files out of an existing cluster, run them through Alterant, and push them back in, if you want to make changes to an existing cluster without having the files locally. So you can just do a kubectl describe of something, or a get with the name and `-o yaml`, which gives you that, then pipe this into Alterant, and then pipe the output back into kubectl, which means that within one command you make a change using Alterant.
B
The second demo that I wanted to show you was: add a sidecar to all deployments. Now, this is more interesting; obviously, the first one was just to get you started. So what I want to do is exactly the GKE issue: on GKE, again, you need to have a secondary container that runs in all deployments, and I want to do this automatically. I want to configure this secondary container once, and I want to inject it into every deployment that I have, without repeating myself.
B
So here I have the sidecar configuration. As you can see, this is just a partial configuration for the sidecar that I have; it's not an entire deployment definition. It just starts with the image, which, as you would imagine, is the one that I want to put inside of the container. So if you think about it within the context of our sample file: here is my deployment, and here are
B
the containers that I have. Within that container list I have one image, which is my application image, and I want to insert another one right here, with the sidecar. So what I need is to just define the sidecar right here, and this is what I've done. This is a real example of a Google Cloud SQL proxy; this is its definition and the configuration that it needs. You can see that this is not exactly a one-liner configuration.
B
So if there is a change in this, and you want to make that change (or rotate the keys, or whatever you want to do), then you would have to do this everywhere, on every single configuration that you have. Here I've created the JSON version of that, just to show that we can consume JSON as well; it's the same configuration, just defined in JSON.
B
Now, how do I add the sidecar? So, in the sidecar script, again I'm going to check the kind: if the kind is Deployment, I load all the containers within the spec of this template into a variable, and then check if the length is one, which is something that I just wanted to show: that we can check how many containers we have in this pod. Not that... I mean, we know that we're not going to have more than one, but in this case I'm just checking.
B
If there is only that one, I can load the file that we had with the definition of the sidecar, load the sidecar image from there, load the container image, use the tagged sidecar container image, and push that sidecar into the containers. Now, these lines are quite interesting. Here you see that the first thing that I'm doing is loading this sidecar JSON, which was the one that I showed you here:
B
my sidecar definition. So I'm loading that first, and this JSON-tree function is one that Alterant provides you, so you can just load JSON; there is a YAML one as well. Another thing that you can find here is the Docker image class: Alterant also gives you these helper functions and classes around some container- or Kubernetes-specific paradigms. In this case I'm loading the sidecar image just to get the sidecar image tag, which would be, in this case...
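The sidecar script described above can be sketched like this. Again, a hypothetical, self-contained illustration in plain JavaScript, not Alterant's real helper API; the container names and the proxy image tag are stand-ins for the real Cloud SQL proxy configuration.

```javascript
// Hypothetical sketch (plain Node.js), not Alterant's real API:
// push a once-defined sidecar container into every Deployment's pod spec.
const sidecar = {
  name: "cloudsql-proxy", // stand-in name for illustration
  image: "gcr.io/cloudsql-docker/gce-proxy:1.16", // stand-in tag
};

function injectSidecar(docs, sidecarDef) {
  for (const doc of docs) {
    if (doc && doc.kind === "Deployment") {
      const containers = doc.spec.template.spec.containers;
      // Mirror the demo's guard: only inject when there is a single app container.
      if (containers.length === 1) {
        containers.push(sidecarDef);
      }
    }
  }
  return docs;
}

const docs = [
  {
    kind: "Deployment",
    spec: { template: { spec: { containers: [{ name: "app", image: "app:1.0" }] } } },
  },
];
injectSidecar(docs, sidecar);
console.log(docs[0].spec.template.spec.containers.length); // 2
```

Because the sidecar is defined once, changing its configuration (rotating keys, bumping the image tag) is a one-place edit, which is the point the demo makes.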
B
And I'm running it, written in JavaScript. So here, as you can see now, in my containers list, this was my first container, and now I have the secondary one loaded into it, with all the configuration that came with it, as a sidecar, and nothing else has changed; everything else stays the same.
B
So here is the Docker image class. Again, it's written in JavaScript, so you can add your own if you want to, and it's just a JavaScript class, very simple. As you can see, it just parses the image name, so it understands some of the nuances of the Docker image naming system: if you don't have a repository, it means it's docker.io (index.docker.io), and, you know, things around this. Nothing specifically difficult, but it does the parsing for you.
B
There are other helpers around understanding ports, for example, so it can get the internal and external port and you can swap them around, if you want to do anything around that. And then Containers is another one that understands containers, so you can query it by name and find a container by name. These helpers are just sample helpers that we have. Now, on to the third and last demo: change the manifest depending on the cluster. This is not as complicated, but it's actually very useful.
B
This is something that we found when we wanted to deploy something onto two different clusters and had to make specific changes. A simple example: if you want to deploy something onto Minikube, LoadBalancer was not supported until recently (it is supported now), so you couldn't have a service of type LoadBalancer; but on most other clusters you can have LoadBalancer as the type of the service. And you can imagine that for production you want to pay for a load balancer;
B
you want to have it load balanced. But if you have a developer-specific cluster that they just want to fire up on their own machine and use, you don't necessarily need a load balancer. On the other hand, we didn't want to have two different types of configuration: one with the type LoadBalancer, as I said, and the other one with NodePort or ClusterIP, for example. So we just wanted to try and see if we could use Alterant to add the LoadBalancer type depending on the cluster destination.
B
So here is again another simple way of doing it: I'm just checking the kind again, and if it's Service, adding a type of LoadBalancer. And here's an example: I commented this line out, but I can add an IP here as well, if I want to, for the load balancer, if in this specific case my cluster has a specific load balancer IP. For example, for your production you probably have a specific load balancer IP that is configured with your DNS. And if I wanted to run that...
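The per-cluster script can be sketched as follows; this is a hypothetical, self-contained illustration rather than the demo's actual Alterant script. `spec.type` and `spec.loadBalancerIP` are real Kubernetes Service fields, but the IP and the optional-argument shape are invented for the example.

```javascript
// Hypothetical sketch: set every Service's type to LoadBalancer,
// optionally pinning a load-balancer IP (e.g. only for a production cluster).
function useLoadBalancer(docs, { ip } = {}) {
  for (const doc of docs) {
    if (doc && doc.kind === "Service") {
      doc.spec = doc.spec || {};
      doc.spec.type = "LoadBalancer";
      if (ip) doc.spec.loadBalancerIP = ip; // skip this on, say, Minikube
    }
  }
  return docs;
}

const docs = [{ kind: "Service", spec: { ports: [{ port: 80 }] } }];
useLoadBalancer(docs, { ip: "203.0.113.10" });
console.log(docs[0].spec.type); // LoadBalancer
```

Calling `useLoadBalancer(docs)` without an IP covers the dev-cluster case, so one manifest serves both destinations.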
B
Using my... oh, there we go; my window was covered by the Zoom window there. So it added a LoadBalancer as the type here. One of the things that we are working on is to allow runtime variables to be passed into the script, so you can actually run this command with something like a build arg, say something like use-load-balancer, which will then be checked: you can check it in your JavaScript and make a decision whether to add a parameter or not.
B
What I'm doing here is this: the Alterant script adds an entire new service and a new deployment to the manifest. So, regardless of what the manifest is (whether it has any deployments, whether it has a namespace, or whatever that might be), I'm just going to add this service to it, for example.
B
This is quite useful if you have a specific deployment that you want to have on every deployment of every application, cluster or namespace that you have, whether it's, you know, something like a monitor that looks at cluster events, or something that you want to deploy every time, regardless of what the application is.
B
What we're doing here is getting the namespace, so we can install that deployment into the same namespace, and here I'm just defining the entire thing, as you are familiar with it, in plain JSON; fairly simple. And here is another shorthand: the double dollar sign, which is basically the root of the whole manifest file. So I created this deployment variable and I'm pushing it two levels up, into the parent, so that alongside everything else that I have, I have another deployment as well. I'm going to run this as well.
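The add-a-whole-deployment script can be sketched like this: a hypothetical, self-contained illustration of reading the manifest's namespace and appending a new Deployment at the document root (the double-dollar-sign idea), not Alterant's actual syntax. The monitor name and image are invented for the example.

```javascript
// Hypothetical sketch: append an extra Deployment to the manifest,
// reusing whatever Namespace the manifest already defines.
function addMonitor(docs) {
  const ns = docs.find((d) => d && d.kind === "Namespace");
  const namespace = ns ? ns.metadata.name : "default";
  docs.push({
    kind: "Deployment",
    metadata: { name: "event-monitor", namespace }, // invented example deployment
    spec: {
      template: { spec: { containers: [{ name: "monitor", image: "example/monitor:1.0" }] } },
    },
  });
  return docs;
}

const docs = [{ kind: "Namespace", metadata: { name: "demo" } }];
addMonitor(docs);
console.log(docs[1].metadata.namespace); // demo
```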
B
And there we go. So what happened here is that this deployment was added to the same namespace as the one that I defined in my example file. So now, every time, it automatically adds this. This is fairly simple, as I said, but it's very useful, and I kind of like both qualities: that it is very useful and simple at the same time. It's very easy to grasp the idea and write scripts for it, but it's very useful.
B
We find it very useful. One thing that you might ask is: where is the transparency in this, compared to some other way, like, you know, admission controllers? What we do is that we have our configuration file manifests in a git repository; we take them out and run them through Alterant (and we also have another couple of open source projects that I'll just touch upon) as part of the pipeline, and at the end of it another set of files is generated.
B
Those can then be committed back into the git repository, and we can have an automatic deployment system that just pulls them out based on a git commit hook and deploys them. That's fine, that's automatic, but what it means is that we can inspect and look at every change that was made, whether it was manual or done automatically through Alterant.
B
Another open source project that we have is called Habitus, which we've talked about before. It's a multi-step Docker build tool, and it also supports secrets during the build, so you can use it to inject secrets into a build without leaving any traces on your images. That's quite useful if you have a private git repository, for example, if you need an SSH key to run your build or dependent builds, or if there are any API keys that you need during your build.
B
You can check it out at habitus.io. And the last one is called Copper, which we actually talked about on another CNCF webinar. It's another open source project we have, with which you can write scripts for verifying your Kubernetes manifest files, so that within your pipeline you are sure they are not going to deviate from your policies.
B
For example, nobody's going to put an image with the latest tag in there, and nobody's going to change the static IP address of a load balancer; if this happens, then Copper will catch it, it will fail the pipeline, and the deployment will not go through. Again, this is fairly simple stuff that it can do, but very useful within the pipeline. You can find it at copper.sh, which is the website for it. At this point I'd like to just ask if you have any questions, and thank you very much for your attention.
A
Awesome, thanks Khash for the presentation. So we have time for some Q&A now. Just a reminder: you have a Q&A tab at the bottom of your screen, so if you'd like to ask anything, please drop it in there. I'm going to start with one that's from a little bit earlier in the presentation: what about custom resources to encapsulate differences between environments?
B
So, the first thing we wanted to do was reduce the time to have a working cluster, by making sure that if you just go and install Minikube on a laptop, we can run our scripts on it without having to prime it with custom resources, admission controllers and everything. But it doesn't mean that custom resources are necessarily not a solution for this. We think of custom resources mostly for purposes other than modifying the configurations.
B
An example of that is certificate management, for example for Let's Encrypt; that's a perfect example of something that you can use a custom resource for. I hope this answered your question. But yes, definitely: if you want, you can achieve the same goal with it. We don't think that it's necessarily built for it, but obviously it's yet another way of doing it.
A

B
What we see Helm as is a package manager, not necessarily a deployment solution. You can obviously package your application into a package, but the way I would compare Helm with non-container-based, non-Kubernetes-centric solutions is comparing Helm as a package manager with npm for Node.js, for example, or RubyGems for Ruby, or Go packages; you get the idea. It's a package manager, which is very good for two things.
B
One is managing entire third-party applications, packaged up in their entirety. The second one is when you have something like a dependency, something that needs to be deployed; an example would be a custom resource that you just use Helm to deploy, so that your cluster now supports this specific new type of custom resource, or anything that's useful for priming your cluster, for example associating specific persistent volume types.
B
So that is what I see as the primary use for Helm, but it doesn't mean that we cannot write a chart for Helm that will deploy our entire application. There are some limitations that we see with Helm when it comes to deploying the entire application: not that there's anything wrong with Helm itself, just with using Helm for the entire application.
B
One is primarily around issues with secrets, which obviously are still being discussed (how to solve having secrets in the files). The second thing is that it's mostly about packaging everything, so the chart that you write is about what should be deployed and how it should be deployed; it doesn't necessarily help with how to modify this for different environments, for different clusters. You can do it with if statements and some scripting around it, but what we wanted was to avoid having a complicated workflow within Helm.
B
It does so in the best way, obviously. If you think about how service meshes work, they usually work through a sidecar, primarily, and that usually happens either when you manually go and change it or, in the case of, for example, Istio, when you pipe your manifest file through Istio's injector and it automatically injects the Istio sidecar into every service that it finds relevant. That's how you usually work, and Alterant does exactly the same kind of thing.
A

B
So, Alterant itself is just a modifier of a file. What it does technically behind the scenes, under the hood, is that it uses Chrome's V8 to run the JavaScript within an isolation that protects your system from malicious code that could potentially be in your JavaScript.
B
So yes, it runs on V8, and that's what it does: it just takes in a file and runs the JavaScript, with some helpers that you saw. The output can go to standard out or it can go into a file. As part of our pipeline it's in Skycap; Skycap is a product that we have at Cloud 66, and it's integrated with Alterant to do this.
A

B
What we are doing from the next version of Alterant is to allow you to pass either a file or a bunch of build or runtime arguments into the command line, which are then available to the script. So in your JavaScript you can say: if our build argument has such-and-such a value, or this is such-and-such an environment, then take this code path. And this is obviously much more readable to do in a programming language.
B
A language that's written with the intent of using control flow, like ifs and, you know, for loops, such as JavaScript, is easier than having those conditions switch between two contexts in the YAML file, which is what you do with something like Go templates: you switch to Go, write a little bit of Go, and then go back to YAML, and that makes the whole thing, I think, a little bit difficult to read.
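As a sketch of the kind of branching being described (plain JavaScript conditionals instead of Go-template logic embedded in YAML): the argument name and values below are hypothetical, since the runtime-argument feature was still being built at the time.

```javascript
// Hypothetical sketch: branch on a runtime argument passed to the script.
function serviceTypeFor(args) {
  if (args.environment === "production") {
    return "LoadBalancer";
  }
  return "NodePort"; // cheaper default for dev clusters such as Minikube
}

const svc = { kind: "Service", spec: {} };
svc.spec.type = serviceTypeFor({ environment: "production" });
console.log(svc.spec.type); // LoadBalancer
```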
B
So, just a kind of pseudo example that I'm sharing here would be something like `kubectl get configmap foo -o yaml`, pipe that into Alterant, modify it with such-and-such files and everything else, and then push it back into `kubectl apply`. So this will pull something out (it could be a ConfigMap, it could be anything else that you want), pull it out as YAML, feed it into Alterant, and Alterant modifies it.
A

B
A very good question. So, there is basic validation: if the YAML is completely invalid, obviously Alterant will complain and cry. But Alterant's job is not to validate the YAML file, specifically around Kubernetes. One of the open source projects that I touched upon at the end, Copper, which you can find at copper.sh, is built exactly for that. It has a very simple DSL with which you can check not only the validity of the YAML file, whether it is constructed correctly,
B
but also, say, whether we have an image attribute. And you can go further than that in Copper and say: not only do we have to check whether we have an image, but it has to have this specific name, come from this specific repository, and be within this specific range of versions. So with Copper you can even apply policies like: I let my developers upgrade minor versions of, say, MySQL in my cluster, but not major versions.
A

B
We are working actively on Alterant to improve the code, to make it more accessible and better documented. As part of the documentation effort, one of the things we're doing is working with existing open source projects around the Kubernetes ecosystem to create examples and samples of how Alterant can make their deployment easier, and that includes Istio as well as other things, like using it with Helm.
B
So one of the things that Alterant can modify, as you can imagine, is any YAML or JSON file, and, for example, the values.yaml in a Helm chart is just another YAML file. So you can use Alterant in conjunction with Helm to come up with different variations of a Helm chart as well. That's part of our roadmap: to create those samples and publish them as part of the documentation.