From YouTube: TGI Kubernetes 173: Pulumi Kubernetes Operator
Description
Come hang out with Joe Beda (@jbeda) as he explores the Kubernetes ecosystem. Hang out as we explore something NEW for the first time.
This week we'll be looking at the Pulumi Kubernetes Operator. This is a way to take a Pulumi deployment program and turn it into an operator so that it can be actuated via the k8s API. Hopefully we'll have someone from Pulumi join us to make sure we don't go too far off the rails!
A: Hello everybody, happy Friday, and welcome to TGI Kubernetes. I am your host, Joe Beda. For those not in the know, TGI Kubernetes is a weekly-ish live stream where we explore all sorts of things in the Kubernetes space. Sometimes we go deep on stuff happening in the community; sometimes we play with new projects. Sometimes we know a lot about what we're talking about, and sometimes we don't know quite so much.

This week we're going to be digging into the combination of Pulumi and Kubernetes through the lens of a relatively new project from Pulumi called the Pulumi Kubernetes Operator. In doing so, I'm lucky to have some great guests joining me from Pulumi, and I'm going to introduce them in order. So first we have Joe Duffy. Joe, you're CEO of Pulumi, right? Founder and CEO, is that right?

A: Yeah, let me get Vivek and Lee on board here too. And then we have Vivek and Lee, so yeah, let's do some quick introductions of y'all. Why don't you start, Joe, and let people know who you are?
B: Yeah, so thanks for having me. I'm the other Joe, Joe Duffy. I founded Pulumi, actually coming up on five years ago, which is absolutely mind-blowing. I previously lived life working on developer platforms, and I got to know Joe around the time we were starting Pulumi, actually around the same time.
A: All right, Lee, why don't you go next, since you're sort of next in reading order here?
C: Absolutely, yeah. Lee Briggs. I've been here at Pulumi now, I think, coming up to two years. I've done a variety of things here at Pulumi: I started out on the engineering team, did a little stint as a developer advocate, and I'm now on the sales engineering team. My day-to-day is just helping users be successful with Pulumi.

In my previous role I ran a bunch of large Kubernetes stuff (I think we installed 1.0), so I've been in and around the Kubernetes community for a long time and seen all of the good and all the bad. But yeah, I'm really excited to kind of show you what we've been doing here at Pulumi.

A: Awesome. And Vivek?
D: Hi, my name is Vivek. I have been with Pulumi for just a little over a year now and...

D: Sorry; obviously the call has to come in exactly when you need it. Yeah, so I've been here just about a year. I primarily work on the Kubernetes operator as well as the provider; I'm basically part of the native provider squad at Pulumi. I work on a variety of our cloud providers: AWS Native, as well as the Azure and Google Cloud native providers.
A: Awesome, well, welcome. So for those who are relatively new here, what I like to do as we're getting started is just say hello to all the folks joining us from all over the world. This is probably the worst time for us to hold the live stream if we want to get engagement around the world, so word to the wise if any of you all want to do this stuff: west coast US is sort of the tail end.
A: This is getting into Friday evening for so many other folks, but we're also lucky to have a lot of folks joining us. So let me just say hi real quick to Rodolfo Martin from the Netherlands; Steve, good to see you, Steve is out on the east coast and works on Contour; Juka from Finland.

Lamati is always here, good to see you again, Lamati; Choco Eric, good to see you from Dallas; Khalid; David from Sweden; Jean-June, good to see you, from fruit company country too; Federico from Gothenburg. Federico hit me up at KubeCon once and gave me instructions on how to try and pronounce Gothenburg, but I think I'm still probably screwing it up, so I apologize, Federico.

Yusuf Porko, Yusuf is from Morocco; Patrick from Sweden; Michael from Phoenix; Dean from Chicago; Prasad from Chicago. Oh man, it's always great to see everybody show up from all over the place. So, a couple of things while we're getting started here.
A
One
of
the
things
is
that
we
have
one
of
our
community
managers
here
at
vmware,
tanzu
lori
apple
has
been
huge
in
terms
of
helping
to
arrange
and
get
us
organized
around
tgik
I
mean
tgik
has
always
kind
of
been
my
side
thing
and
I
always
haven't
had
enough
time
to
really
do
it.
A
What's
the
topic
going
to
be
this
week,
so
we're
trying
to
get
a
little
bit
better
about
that
and
one
of
the
other
things
we're
looking
at
doing
is
having
an
alternate
time,
that's
more
friendly
for
folks
joining
from
either
europe
or
or
asia.
Just
because
this
is
not
a
great
time
zone
for
a
lot
of
folks,
so
so
stay
tuned
for
more
there
we're
going
to
be
publishing
more
of
a
schedule
there,
all
right,
so
so
joe
lee
and
vivec.
One
of
the
things
that
we
also
do
here
is
first
off.
A
We
have
shared
notes.
So
if
you
want
to
actually
wait,
we
have
a
hack
md,
which
I
love
hack
and
b
as
a
service
you
can
get
to
it
at
tgik,
dot,
io,
slash
notes.
The
the
url
is
up
here
in
the
in
the
upper
corner
here
and
we
just
like
the
crowd
source
notes.
A
So
if
there's
links
that
you
want
to
drop
in,
if
there's
things
that
we're
forgetting,
if
you
want
to
like
you
know,
note
something
down,
feel
free,
that's
a
that's
a
great
place
to
do
it
and-
and
you
know
joe
vivek
and
lee,
if
you
all-
have
links
or
things
that
you
think
you
want
to
talk
about
as
we
sort
of
go
through
this
crawl
feel
free
to
drop
those
things
in
there
too,
and
so
let
me
just
go,
I
did
a
quick
run.
A
A
A: I think the idea from the get-go with Kubernetes, or at least maybe not from the get-go but pretty soon afterwards, is that we recognized we weren't going to be able to do everything within Kubernetes itself, and so building frameworks, building toolkits that run on top of Kubernetes, was really a big part of how we saw things evolving. Knative, I think, is a great example of building a whole bunch of stuff on top of it.

Oh, and Carlos actually said he helped write this blog here, so thanks for that, Carlos; it's good to see that. This really goes through a whole bunch of everything that's happening with Knative and the road to 1.0.
A: It's really good to see. One of the fascinating things in this world, and I'd love to get the thoughts of people who are watching on this, is this idea of 1.0 and beta. I think there's a tension in our industry: something can be production quality, and so we may call it 1.0 from a quality point of view, but we may still be learning and adjusting, and so we don't necessarily want to mark it that way. There's this idea of quality versus promises around supportability over time, and especially when you have stuff that has a lot of APIs, you still want the flexibility to learn and adapt without necessarily having to support everything forever.

And oh, we lost Vivek somehow. So that's why, in the Kubernetes world, you'll see us call certain APIs beta, while the system and the implementation as a whole can be GA, and it can actually be great. There's always a lot of tension there in terms of how we really express our thinking around promises and quality to customers. Joe or Lee, do you guys have thoughts on this?
C: From my perspective, semantic versioning is a pattern that you can follow; having a beta tag really just relies on your level of comfort. I've used so much beta software in production, though I wouldn't like to tell people about it. But Knative has been around for such a long time: extremely well battle-tested, incredible community, really talented developers.
A: Yeah, definitely agree. Carlos in the comments is saying we need to get better at versioning CRDs and how we actually manage them. And again, a version is a promise; calling something beta or alpha is a promise to users, and it's about being as explicit as possible about how we actually make those promises and what they really mean. So I'm going to keep moving unless other folks have comments on this.
A: Another thing that I thought was really interesting, and my understanding is folks have been working on this for a while, is that Google announced, as part of GKE, their image streaming stuff, and this is really, really fascinating. So much of cloud, so much of our world, is around virtualization: presenting a facade of something. This is really presenting the facade that you have the image downloaded when you're running stuff. The idea is that you can launch a container and start accessing it right away, and then essentially stream in the bits that you need to make this stuff work.

There's some history here, at least in my time at Google, in terms of the way that App Engine worked back in the day. Take the Python runtime: whenever the Python runtime needed a file, there was actually a back end that would go off and read that thing from Bigtable, so there was never actually a file system sitting on disk; everything ended up going through some other sort of storage layer. This looks like taking some of those ideas and applying them at the container level, which I think is really interesting.
A: It also really reminds me, though it happens at the block level, not at the file level, of some of the work that is happening with erecto in terms of managing duplication of file systems across different machines, in a sort of just-in-time, dirty-tracking type of way. So, some really interesting stuff here. I haven't had a chance to play with it yet. Has anybody had a chance to start using it? I would love to see what they think about it.
A
So
carlos
is
a
network
file
systems
have
been
for
a
while.
I
wonder
why
this
is
available
now
what
about
open
source
like
container
d?
I
think
you
know
some
of
the
concerns
here.
I
think
are
that
you
know
you
look
at
something
like
nfs,
where
things
are
just
going
over
the
network
and
and
the
failure
modes.
A
There
aren't
great
right
because,
like
if
you
know
if
the
network,
hiccups
or
or
what
have
you
a
lot
of
stuff
can
start
falling
apart,
and
so
my
understanding
is
that
we're
reading
some
of
the
twitter
threads
on
this
is
that
this
is
essentially
the
base
image
layer.
A
So
everything
is
read
only,
and
so
you
can
take
advantage
of
very
aggressive
caching
to
be
able
to
actually
make
this
work,
and
so
it's
a
fundamentally
different
thing
than
something
like
nfs,
where
it's
just
doing
a
full
file
system
over
the
network
and
so
being
able
to
do
sort
of
just-in-time
read-only
population
of
a
file
system
is,
I
think,
what's
going
on
here,
but
yeah.
I
think
you
know
things
like
ipfs
is
really
interesting,
and
I
saw
that
nerd
control
had
support
for
ip
ipfs.
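To make that streaming model concrete, here is a hedged sketch, not Google's implementation, just an illustration of the pattern: because the image layer is read-only, chunks fetched on demand can be cached forever without any invalidation logic, which is exactly why aggressive caching works here and not for a general NFS mount.

```python
class LazyImage:
    """Illustrative lazy image reader: chunks are fetched from a remote
    store only on first access, then served from a local cache. Because
    the layer is read-only, cached chunks never need invalidation."""

    def __init__(self, fetch_chunk):
        self.fetch_chunk = fetch_chunk  # callable: offset -> bytes
        self.cache = {}                 # offset -> bytes, filled on demand
        self.fetches = 0                # count of remote round trips

    def read(self, offset):
        if offset not in self.cache:    # hit the network once per chunk
            self.cache[offset] = self.fetch_chunk(offset)
            self.fetches += 1
        return self.cache[offset]


# Simulated remote store standing in for the registry-side service.
remote = {0: b"ELF...", 4096: b"libc...", 8192: b"app..."}
img = LazyImage(lambda off: remote[off])

img.read(0)         # first access: fetched remotely
img.read(0)         # second access: served from cache
img.read(4096)
print(img.fetches)  # 2: one fetch per distinct chunk touched
```

The container can start as soon as the first chunks it touches arrive; everything else streams in only if and when it is read.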
A: That was just new. And, you know, there's stargz, which is essentially about being able to lazily pull and snapshot container images. So there's all sorts of interesting stuff going on in this world. Have any of the folks on the call here had a chance to play with it yet?
C: I haven't had a chance to play with it. I did see it hit our native provider, or I think I saw it hit a native provider, a few days ago. My first thought was, you know, there's so much usage of object storage for these kinds of problems, and latency becomes such a huge problem. I can definitely see the market. I personally have never run anything that needs images that fast, and I don't really want to be in that on-call rotation either. So I'm hoping that this will make a lot of people's lives a lot easier.
A
Yeah,
well,
I
think
it's
a
combination
of
of
of
big
images
and
needing
them
fast
right
and
I
think,
a
lot
of
the
folks
who
are
doing
this
stuff
from
scratch.
I
think
we
know
to
keep
our
our
our
images
reasonably
sized,
but
there
are
definitely
edge
cases
carlos
talks
about
ai
models
down
there,
but
you
know
when
you're,
when
you're
doing
maybe
a
legacy
java
application.
It's
not
unusual
to
start
seeing
images
get
very,
very,
very
large,
so
having
the
stuff
be
able
to
stream
in
on
the
fly,
seems
really
cool.
D
Yeah
in
a
former
life
spent
a
bunch
of
time
with
ai
researchers,
and
those
images
are
pretty
massive,
so
yeah.
This
can
be
a
fuse.
Definitely.
A
Yeah
and
I
think
somebody
dropped
into
our
notes-
the
the
some
tweets
looking
at
ipfs
support
for
container
d
and
nerd
control,
also,
which
is
really
intriguing
for
those
who
aren't
aware
ipfs,
is
stands
for
what
we're
interplanetary
files?
No,
it's
it's.
What
does
it
even
stand
for
but
like?
Essentially
it's
a
it's
a
fully
distributed
peer-to-peer
file
system?
Oh
it's
carlos
that
put
that
stuff
in
there.
Thank
you
for
that,
carlos
so
yeah.
So
I
mean
it's
a
it's!
A
A
fully
distrib
distributed
file
system
with
the
idea
that
everything
is
based
on
hashes
and
so
names
end
up
being
stable,
and
so
the
example.
I
think
that
you
know
the
the
nerd
control
stuff
is
you
can
pull
sort
of
ipfs
here?
There's
a
parallel
thing
called
ipns,
which
is
essentially
a
mapping
from
like,
say
example.com
to
this
object,
so
there's
a
there's,
essentially
a
metadata
layer
built
on
top
of
it
and
then
all
of
a
sudden.
You
know.
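The content-addressing idea here, stable names derived from hashes plus a mutable IPNS-like name layer on top, can be sketched in a few lines. This is an illustration of the model only, not the real IPFS protocol or its CID format:

```python
import hashlib


class ContentStore:
    """Toy content-addressed store: objects are keyed by the hash of
    their bytes (like IPFS content IDs), and a separate mutable table
    maps human-readable names to hashes (like IPNS)."""

    def __init__(self):
        self.objects = {}  # hash -> bytes: immutable, deduplicated
        self.names = {}    # name -> hash: the mutable pointer layer

    def put(self, data: bytes) -> str:
        digest = hashlib.sha256(data).hexdigest()
        self.objects[digest] = data  # same bytes always land at the same key
        return digest

    def publish(self, name: str, digest: str):
        self.names[name] = digest    # repointing a name never mutates objects

    def resolve(self, name: str) -> bytes:
        return self.objects[self.names[name]]


store = ContentStore()
v1 = store.put(b"image layer v1")
store.publish("example.com/app", v1)
v2 = store.put(b"image layer v2")
store.publish("example.com/app", v2)   # the name moves; v1 stays addressable

print(store.resolve("example.com/app"))  # b'image layer v2'
print(store.objects[v1])                 # b'image layer v1'
```

The split matters: the object layer is trivially cacheable and verifiable anywhere (the hash proves the bytes), while only the small name layer needs coordination, which is where blockchain-style schemes come in, as mentioned next.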
A
Of
course
you
know
you
start
bringing
in
some
blockchain
and
stuff
like
that
for
managing
that
mapping.
But
it's
a
really
interesting,
fully
distributed
type
of
thing,
and
I
think
that
there's
also
oh
man,
I'm
blanking
on
the
project.
Right
now.
There
is
essentially
a
a
bittorrent
like
system
for
distributing
images,
also,
which
I
believe
now
is
part
of
the
harbor
project,
which
is,
I
think,
really
interesting.
A
So
yeah,
yeah
and
david's,
saying
huge
windows,
images,
which
is
true.
I
don't
think
the
gke
service
supports
windows
images.
Yet
at
least
that
was
you
know
the
supposition
that
I
was
reading
about
there,
so
so
yeah
so
very
cool.
And
then
I
you
know
I
haven't
like
looked
at
it,
but
I
you
know,
I
saw
this
really
interesting,
blog
post.
A
Maybe
I
was
hacker
news
or
on
on
reddit,
that's
going
through
sort
of
the
different
logging
architectures
for
doing
logs,
essentially
breaking
down
grabbing
stuff
using
a
daemon
set,
and
essentially
a
standard
error,
standard
out
or
essentially
using
a
sidecar
to
be
able
to
log
to
a
file
and
have
the
sidecar
be
able
to
actually
pull
that
stuff
out.
A
And
I
just
like
the
diagrams
and
like
the
idea
of
like
hey,
there's
like
multiple
ways
to
skin
this
cat,
so
this
looked
like
a
really
really
interesting,
blog
post
that
you
all
can
check
out
if
it
looks
interesting
all
right,
let
me
read
through
the
comments
here
who
runs
windows
on
kate.
It
turns
out
that
a
lot
of
folks
are
really
interested
in
running
windows
and
kubernetes.
A
It's
it's
actually
one
of
the
huge
features
that
we
see
folks
using
out
of
one
of
the
the
tk
tkg
offerings
that
we
have
tkgi.
A
Let's
see,
choco's
saying
that
uber
has
something
called
kraken
for
distributing
p2p
images.
I
haven't
seen
that
yet
dragonfly
is
the
name
of
the
the
heart,
the
the
harbor
adjacent
project
for
doing
this
stuff
and
then
eric
saying
that
nvidia
has
some
huge
images
for
ml
stuff,
yeah
ml
stuff
starts
getting
really
really
big,
and
you
know-
and
I
think
it's
this
is
one
of
those
things
where
you
know.
A: We use OCI registries for distributing images, but we can use them for distributing other types of data also. One of the things I'd really like to look at, if I had more time, is maybe a CSI provider for Kubernetes that could load OCI images into a volume, so that you could distribute large binaries using OCI registries and then actually use them as, essentially, a volume. I think that'd be really interesting for distributing things like machine learning models, because all the concerns around supply chain and signing and tracking that go into your images, you're going to want all that same stuff for your machine learning models also. So yeah, it's a really interesting area, seeing these things converging in terms of storage.
A: All right, I want to keep going. Oh yeah, the container apps, I meant to do this; somebody put this in there also. So Dapr joined the CNCF; it was a sandbox project, and now it's an incubation project. And then there are also these Azure Container Apps, and I'm probably doing it a disservice, but it's built on Kubernetes and it's kind of like Cloud Run for Azure; that's my tl;dr there.
A: If I have that wrong, let me know, but that looks really interesting.
A: Yeah, that's super cool, and I love that we're seeing this spectrum of options here. I think the interesting thing is that we have everything from "here's some code, just run it, I don't have to worry about it" down to "I want to manage my own Kubernetes," and I think the more concepts we can share to ease transitions between these worlds, the better off it's going to be for users.
B
Also,
love
about
it
is
just
with
the
whole
dapper
project
is
kind
of
moving
the
layer
of
abstraction
up
a
little
bit
where
we
can
start
thinking
about
these
things
as
real
distributed
systems,
and
not
just
focusing
on
the
containers
and
the
pods,
but
how
they
all
relate
to
each
other.
And
so
that's
cool.
A
You
know
I
look
at
dapper,
I
look
at
spring
and
I
look
at
istio
and
what
you
see
is
that
there's
the
same
concerns
that
are
being
addressed
at
slightly
different
layers
across
these
things
at
istio,
it's
being
done
at
sort
of
the
the
network
packet
level
sort
of
like
the
l2.
You
know
l3
l4
layers
with
something
like
adapter.
Your
application
has
an
explicit
api
to
talk
to
a
side,
car
and
there's
all
sorts
of
services
built
into
it's.
A
It's
a
mesh,
but
not
a
network
mesh,
it's
a
sort
of
a
mesh
for
for
getting
other
other
information
and
other
actions.
But
it's
an
explicit
thing
right
and
then
you
have
something
like
spring
where
this
stuff
gets
built
in
at
the
language
level.
As
a
bunch
of
frameworks
and
there's
like,
I
think,
we're
still
figuring
out,
you
know
the
right
place
to
put
different
functionality
and
then
how
do
we
create
interoperability
around
this
stuff
too?
Oh
microsoft
wants
to
know
what
I
think
here,
not
right
now,
microsoft,
sorry
so
yeah.
A
So
I
think
you
know
that
that
I
you
know
that
that
that
that
drifting
of
capabilities
between
these
systems,
I
think,
is
really
cool
all
right.
So
that
is
the
news
from
lake
wobegon.
Oh
wait,
I
can't
say
the
name,
that's
the
news
from
around
the
kubernetes
world:
let's
go
ahead
and
start
talking
about
palumi
and
the
the
the
the
kubernetes
operator.
A: There's been some work to polish these notes up a little bit, and I appreciate you all helping out with that. And then, Lee, you dropped this in: you created a demo. Do you think we should work through this one, Lee?
C: ...minutes ago, so you know. I've run it a few times and it worked for me, but we're live now, so we'll see how it goes. But this should give you an idea of the kind of thing, and the kind of value proposition, that you would get from the operator. I think it's really important to start off with why you might choose to use the operator. Obviously, if you are following along at home and you've never heard of Pulumi...
B: Yeah, so Pulumi is an infrastructure as code platform that lets you use general-purpose programming languages to do everything we know and love about infrastructure as code. But because it's general-purpose languages, you get abstraction, you get sharing and reuse, you get for loops and functions and all those great things. So we support lots of different languages, Python, JavaScript, but we support lots of clouds too: AWS, Azure. And Kubernetes has been part of Pulumi essentially from day one.
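To illustrate the "for loops and functions" point: the gain from a general-purpose language shows up even with plain dictionaries. This is a hedged sketch in ordinary Python, not the actual Pulumi SDK (where you would declare `pulumi_kubernetes` resource objects instead); the function and environment names are made up for the example:

```python
def deployment(name: str, image: str, replicas: int) -> dict:
    """Build a Kubernetes Deployment manifest. A function gives you reuse
    and one place to change conventions, instead of copy-pasted YAML."""
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": {"app": name}},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": {"app": name}},
            "template": {
                "metadata": {"labels": {"app": name}},
                "spec": {"containers": [{"name": name, "image": image}]},
            },
        },
    }


# A loop stamps out one deployment per environment; in raw YAML this
# would be three nearly identical files to keep in sync by hand.
envs = {"dev": 1, "staging": 2, "prod": 5}
manifests = [deployment(f"web-{env}", "nginx:1.21", n) for env, n in envs.items()]

print([m["metadata"]["name"] for m in manifests])
# ['web-dev', 'web-staging', 'web-prod']
```

In a real Pulumi program the same loop would construct resource objects whose creation, diffing, and updates the Pulumi engine then manages.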
B
So
if
you
want
to
express
kubernetes
configuration,
you
can
do
that
in
pollumi
and
connect
it
to
other
pieces,
like
maybe
you
want
an
rds
database
instead
of
managing
your
own
mysql
instance
in
your
kubernetes
cluster.
Maybe
you
want
to
provision
kubernetes
clusters
themselves,
connect
them
to
things
like
cloudwatch
route,
53,
the
azure
equivalence
of
those
and
but
what
we
find
is.
Okay,
it's
infrastructure's
code.
You
can
run
the
cli,
that's
great!
It's
great
for
tight
loop.
B
You
know
sitting
in
your
editor
because
it's
a
programming
language,
you
get
refactoring
all
the
great
things
about
editors,
but
when
you
go
to
production,
most
people
are
going
to
want
to
automate
that.
So
how
do
you
automate
that?
Well,
we
have
a
few
different
options.
We
integrate
ci
cd
systems,
so
maybe
you
want
to
use
spinnaker.
B
Maybe
you
want
to
use
github
actions,
but
actually
we're
increasingly
finding
that
folks
want
to
manage
these
with
operators
and
actually
trigger
those
things
off
of
maybe
git
commits
or
and
so
really
doing,
the
deployments
from
within
the
kubernetes
as
a
controller
effectively
with
you
know,
we
have
policies
code
for
admission
control
and
things
like
that,
so
increasingly
we're
finding
folks
that
are
really
living
on
the
bleeding
edge
of
cloud
native.
B
Everything
really
are
are
picking
the
operator
to
do
their
deployments,
and
so
that's
kind
of
how
we
got
to
where
we
are
lee.
Did
I
do
okay
or.
C: Yeah. The one thing that I will say is that the traditional mechanism for deploying infrastructure is, you know, somebody has a command-line tool and they either shove it into a CI/CD pipeline or they run it locally on their laptop. There are other infrastructure as code tools out there in which that's the desired workflow.
C
The
the
building
block
for
the
pollumi
kubernetes
operator
is
the
automation
api,
which
is
kind
of
unique
to
our
product
in
the
sense
that,
and
it's
all
open
source
everything
we're
talking
about
today
is
open
source.
So
the
automation
api
allows
you
to
build
your
own
custom
workflows
when
you're
actually
provisioning
infrastructure.
So
you
can
do
things
like
build.
You
know
build
heroku
like
command
line
tools
that
will
allow
you
to
actually
deploy
things
in
kind
of
suiting.
Your
workflow,
the
kubernetes
operator,
is
just
another
one
of
those
workflows.
C
If
you're
really
used
to
kubernetes
operators,
you're
used
to
the
reconcile
loop,
we've
already
done
the
work
for
you
and
part
of
the
reason
why
this
came
along
is
once
the
automation
api
came
out.
Everyone
was
like
well,
this
looks
perfect
for
a
kubernetes
operator
and
joe
went
away
for
a
weekend
and
was
like
here
we
go.
C
Let's,
let's,
let's
start
something
off
and
and
vivek
has
taken
it
from
there
and
it's
now
you
know
at
reach
1.0
recently
and
is
now
production
ready,
and
so
these
two
things
molded
together
the
op
the
operator
is
going
to
show
you.
If
you're
used
to
the
kubernetes
reconcile
loop,
then
that's
one
way
of
doing
things,
but
you
know
one
of
our
one
of
our
colleagues
built
a
web
api
that
will
allow
you
to
do
deployments
as
well.
C
So
there's
so
many
different
things
and
the
sky
is
really
the
limit
because
of
this
automation,
api
that
allows
you
to
define
these
kind
of
infrastructure
code
workflows.
So
if
you're,
someone
who
has
ever
kind
of
waited
20
minutes
for
a
ci
pipeline
to
wait
and
been
like,
why
am
I
waiting
so
long?
And
why
am
I
refreshing
this
page?
The
automation
api
is
probably
going
to
be
something
that
you're
you're
going
to
enjoy
and
the
kubernetes
operator
is
just
building.
On
top
of
that.
A
So
one
of
the
things
that
I've
heard
from
customers
explicitly
is
you
know
they
like
using
kubernetes,
because
it's
very
easy
to
integrate
kubernetes
with
other
systems,
the
declarative
nature
of
the
kubernetes
apis
of
like
hey,
it's
just
plain
data.
You
know
I
can
just
push
my
data
to
the
cluster
and
and
relatively
reasonable
things
happen.
Sort
of
the
basis
of
you
know
the
whole
get
ops
idea
is
that
you
know,
and
I
think
that
combined
with
operators
really
means
that
we
can.
Oh,
maybe
we
lost
joe.
He
might
have
closed
his
window.
A
That
combined
with
operators
means
that
it's
really
easy
for
us
to
integrate
with
something
and
really
bring
it
to
integrate
with
with
with
higher
level
systems
and
have
them
drive
a
set
of
of
workflow
automation.
Things
now
one
of
the
interesting
things
we
probably
won't
have
time
to
explore
today,
but
we
recently
introduced
this
project
as
part
of
a
ponzu
called
cartographer,
which
is
essentially
a
supply
chain.
A
Orchestrator,
and
this
is
the
idea
around,
like
you
know,
how
do
you
actually
take
a
bunch
of
steps
in
your
supply
chain
and
stitch
those
things
together
right
now?
What
happens
is
it's
a
bunch
of
fragile
configuration
between
a
bunch
of
different
systems
where
it's
like?
Oh
at
the
end
of
my
jenkins
pipeline,
I
want
to
run
this
command
that
and
then
runs
this
other
thing
right,
and
so
configuring
and
hooking
those
things
up
ends
up
being
really
complicated.
B: Maybe the "c" actually does stand for config and we're all just confused about "code". But the cool thing about it is you can take everything we know about software and apply it to how you're managing infrastructure, including things like versioning. How do you patch, you know, problems? How do you release new versions of your infrastructure? And one of the neat things that Pulumi can do is actually do the upgrades.
B
So
from
point
a
to
point
b,
you
really
can
use
semantic
versioning
for
your
infrastructure,
if
you
want
to
which
brings
all
this
like
software
supply
chain
concepts
to
the
world
of
infrastructure
as
well.
So
I
agree.
I
think
these
would
be
better
together.
A
Yeah
and
I
think
that's
a
critical
thing
that
I
think
people
are
starting
to
get
more
and
more
cognizant
of
now
is
that
you
can
be
great
about
controlling
your
binaries,
but
like
the
way
your
binary
behaves
can
change
dramatically
based
on
an
environment
variable
or
a
command
line
flag
or
the
dependencies
that
you
hook
it
up
to,
and
so
you
need
to
be
able
to
control
that
stuff,
just
as
well
as
you
control
any
other
code
that
goes
into
your
app
yep
all
right.
So
let's
go
ahead
and
get
started
here.
A
I
think,
let's
see,
let
me
close
some
tabs,
so
we're
going
to
start
with
so,
okay,
so
here's
what
we
have
going
on
here
is
I
I
I
played
with
fallumi
a
little
bit.
You
know
this
morning
and
yesterday
just
to
make
sure
I
wasn't
too
far
off.
We
need
to
create
an
s3
bucket
to
store
state
here.
So
now,
when
you're
launching
pollumi,
you
have
a
choice.
You
can
use
the
plumi
web
service
or
sort
of
sas
service,
which
you
know,
there's
there's
a
lot.
A: You know, a better experience if you're using the Pulumi stuff. There's a video at the bottom of this page here (who is it, Mike, that actually went through and did this video?) where you can click through from Pulumi, see what the state is, and decode it. But we're going to do something a little bit more raw with the S3 stuff here.
A
So
I'm
going
to
create
a
bucket
here,
I'm
going
to
call
this
one
tgik
tulumi
state,
something
like
that.
I'm
going
to
be
operating
us
east
one
like
everybody
else
in
the
world.
C
Let's
just
hope
that
it
actually
stays
up
for
the
you
know
for
the
entirety
of
the
demo.
You
know
you
can
never
be
too
sure,
I'm
sure
corey
quinn,
corey,
quinn's
twitching
somewhere,
the
us
east,
one
usage.
Okay,
what
did.
A: The other thing that I have, let me pull that up and I'll just put a link in here: if we go to github.com/jbeda, I think I have dotfiles. I have this little bash script here that I just wrote this morning, which essentially uses the AWS config to preload this stuff into your environment. So this is a really useful little thing here.
E: Environment from AWS CLI config, but yeah.
C
So
it's
worth
kind
of
mentioning
as
well
like
if
you're
running
this
inside
a
cluster
inside
the
cloud
environment
already,
there's
usually
some
oidc
mechanism
to
talk
to
the
cloud
provider
eks
has.
I
am
rules
for
service
accounts.
I
forget
what
azure's
one
is
called
and
then
gke
has
one
obviously
because
we're
I
think,
you're
going
to
run
this
on
a
kind
cluster
right.
A
Yeah
yeah
and.
C
So
you
need
to
provide
credentials
so
that
pollumi
knows
how
to
talk
to
the
api
somehow,
and
so
this
is
just
obviously
the
the
most
straightforward
way.
C: Barclay just asked: the state file, is this the same as a Terraform state file? The state file is considerably different from Terraform's, but it's got the same mechanism of creating a resource and then storing the state of that operation in the state file. And then another common question: is Pulumi using Terraform providers under the hood? There are two ways to look at this.
C
The
first
is
that
we
use
some
terraform
providers
to
actually
map
the
create
re,
create,
replace,
update,
delete
parts
of
the
schema,
but
we
don't
execute
the
terraform
providers
and
the
second
thing
is:
we
talked
about
native
providers
a
couple
times
already.
The
the
kubernetes
provider
is
generated
programmatically
from
the
kubernetes
api.
So
when
a
new
version
of
kubernetes
is
released,
so
1.22
came
out
a
few
weeks
ago.
We
get
the
next
day's
support
for
that,
because
we
programmatically
generate
those
resources
directly
from
that
api.
C
It's
a
very
common
thing
that
comes
kind
of
comes
up
like
you're,
just
using
terraform
under
the
hood.
It's
not
actually
true,
so
I'm
here
on
record,
saying
we
don't
use
the
terraform
providers,
we
just
use
the
schema
and
the
replace
and
update
and
delete
mechanisms.
Joe
you,
I
think
you
were
originally
part
of
the
the
person
who
put
this
together.
If
I
got
any
of
that
wrong,
then
please,
let
me
know.
B: We wanted folks to be able to use any of the Terraform ecosystem, and so we support any provider that Terraform has; we can plug it into Pulumi and use it. But our native providers in general support the entire cloud. So the Azure one is OpenAPI-based, the Kubernetes one is OpenAPI-based, and the Google one is actually discovery-doc based, which, I don't want to trigger you, Joe; I know you have past history with Deployment Manager. But our native providers are really great. And Carlos asks:
B: Why do we need state in the first place? It's really to map, and I think Joe kind of spoke to it earlier in a nice way, the Pulumi concepts in your program to the actual underlying cloud resources. We have to be able to do diffs, and it's also good to know the last known deployment state, so you can actually do drift detection: to detect, hey, is the current live state in the cloud different from what we thought we deployed?
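That diff-and-drift idea is easy to sketch. A hedged illustration, with plain dictionaries standing in for the recorded state file and the live cloud resources, not Pulumi's actual engine:

```python
def detect_drift(recorded: dict, live: dict) -> dict:
    """Compare the last known deployment state against the live cloud
    state and report per-resource drift."""
    drift = {}
    for name, props in recorded.items():
        if name not in live:
            drift[name] = "deleted out of band"
        elif live[name] != props:
            drift[name] = "properties changed out of band"
    for name in live:
        if name not in recorded:
            drift[name] = "created out of band"
    return drift


# Last deployment recorded two resources; someone has since edited one
# and deleted the other directly in the cloud console.
recorded = {"bucket": {"region": "us-east-1"}, "role": {"path": "/"}}
live = {"bucket": {"region": "us-west-2"}}

print(detect_drift(recorded, live))
# {'bucket': 'properties changed out of band', 'role': 'deleted out of band'}
```

Without the recorded state there is nothing to diff the live world against, which is the point being made: the state file is what lets the tool tell "intended" apart from "observed".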
A: There we go. What's interesting, though, is that some of the ways that we define the Kubernetes API, and the degrees of flexibility you have with Kubernetes, allow you to annotate things to the point where, if you're building deployments against Kubernetes, you might be able to get away with not having something like a state file, and actually keep annotations and labels on top of the objects. Not every cloud lets you do that, and even in something like AWS, the support for tagging and annotating API objects is very mixed across all the different types of objects. So this need for a state file is just a reality when you're dealing with back ends or services you're targeting that aren't super flexible in terms of carrying annotation data alongside.
C
I think, if we talk a little bit more about the state file as well: Carlos is making a good point that with the GitOps mechanism you're matching the Kubernetes etcd state, or if you're using a Git repository, you're still driving towards a desired state. That state is just stored in a different place.
C
So with the Kubernetes operator the state is in etcd and you're reconciling on the etcd state, and with GitOps you're reconciling on the Git repository's last known commit. The only difference with Pulumi is that we use object storage and an HTTP interface to actually do that. So the model is essentially the same; you just don't have this file that you're actually dealing with, and in some ways it can actually be better.
C
In my opinion, because if you're unlucky enough to be using a non-managed Kubernetes cluster and you have to start manipulating etcd manually, that's a pretty difficult operation, whereas with the state files you can just manipulate the files themselves.
D
There's also the possibility, as Joe perhaps mentioned, that you could store state alongside the object; Kubernetes has tags or annotations and whatnot. But there's also the situation where you could lose resources arbitrarily, where that particular item no longer exists, and that reconciliation is something you can achieve with the separate state that we store. Obviously there are ways around it, but that's one. And for Kubernetes in particular, because we know there is this latest state available, we do a degree of three-way diffs as well. But that's not the case for a lot of other resources, and Pulumi, as you know, works with a large variety of them.
A
All
right,
so
I'm
jumping
ahead
here
a
little
bit
while
you
all
were
answering
that
I
went
through
and
deployed
the
operator
links
to
the
instructions
here
and
you
can
deploy
it
either
using
polumi
itself,
but
I
don't
want
to
get
too
inceptiony
here
and
actually
confuse
people,
and
so
I'm
gonna,
I'm
gonna,
deploy
it
with
cubecontrol
directly.
A
That
takes
us
here
and
I
downloaded
the
latest
release
the
tarball
and
untarred
it,
and
so
we're
at
this
point
here.
I
don't
know
if
you
all
saw
me
do
that,
so
so
what
we
have
is
we
have
the
crds
and
so
let's
go
through,
and
if
we,
if
I
pull
up
vs
code,
we
should
be
able
to
see
deploy
crds.
A
So
we
have,
it
looks
like.
Is
there
just
one
crd
that
we're
deploying,
which
is
the
stack
crd?
That's
right!
That's
correct!
Okay,
okay,
so
this
is
essentially
stacks.paloomy.com
is
the
name
of
the
crd.
It's
a
stack
and
okay,
so
we'll
get
into
that.
So
let's
go
ahead
and
deploy
that
cube.
Control,
apply,
dash,
f,.
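A rough sketch of the commands being run here (file paths follow the operator release tarball layout discussed above and may differ between versions):

```shell
kubectl apply -f deploy/crds/   # installs the Stack CRD (stacks.pulumi.com)
kubectl get stacks              # empty until a Stack resource is created
kubectl apply -f deploy/yaml/   # ServiceAccount, Role, RoleBinding, operator Deployment
```

These require a live cluster and the untarred release, so they are only a sketch of the on-screen steps.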
A
So now if I do kubectl get stacks, it's probably going to tell me I don't have any stacks yet. Of course not, because we just created the CRD. So that's working, and then we can go ahead and run the operator itself. So let's dig into what we have there. Okay, so the operator runs. Now the question is: what do we have? We have the operator itself, so we have a Deployment for that, plus a Role and a RoleBinding.
A
It
runs
it's
getting
a
lot
of
a
lot
of
resources,
because
by
the
time
you
get
to
secrets
you
pretty
much
own
things,
but
it's
a
role,
not
a
cluster
role,
which
is
interesting,
and
so
what
that
tells
me
here
is
that
the
pollumi
operator
is
built
right
now
to
run
on
a
per
name
space
basis.
So
it's
not
a
case
where
you
install
this
into
a
cluster
and
it
becomes
a
cluster-wide
service.
A
It
really
is
sort
of
like
running
in
your
namespace.
For
your
thing,
there's
an
interesting-
and
this
is,
I
think,
one
of
the
problems
with
crds
and
stuff
like
this,
where
it's
like.
A
It
gets
a
little
bit
tricky,
because
what
you
may
find
is
that
you're
running
different
name
spaces
are
running
different
versions
of
the
pollumi
operator
and
they
may
assume
certain
things
about
the
crds
that
are
running
under
the
covers,
and
so
that's
something
that
I
think
is
gonna
is
not
something
that
we
necessarily
have
great
answers
for
across
the
kubernetes
ecosystem,
in
terms
of
how
to
deal
with
some
of
those
version.
Skus
all
right
on
some
of
the
oh
go
ahead.
Sorry.
C
Generally, I would advocate for a dedicated cluster that allows you to run these kinds of privileged resources, not running it in the same cluster as your application stuff, simply because, like you said, if somebody gets control of your Git repository, or somebody is able to get control of this particular role, you're in bad shape already and they can pivot pretty quickly. So, yeah.
D
Yeah, and in some ways the role that we are providing right now does allow you to create additional resources within the same cluster, but that can be pared down. This is mostly from a usability perspective, to let you do something locally against your cluster. But certainly, as Lee pointed out very correctly, that is actually how a lot of our hardened users are currently using the system: they pare that role down significantly.
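For illustration, a pared-down Role might look something like this. This is a hedged sketch, not the operator's shipped manifest; the name is hypothetical and the exact resource lists would depend on what your Pulumi programs actually need to manage:

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pulumi-kubernetes-operator   # hypothetical name
rules:
  # state/coordination resources the operator itself needs
  - apiGroups: [""]
    resources: ["secrets", "configmaps", "events"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
  # the Stack custom resources it reconciles
  - apiGroups: ["pulumi.com"]
    resources: ["stacks", "stacks/status", "stacks/finalizers"]
    verbs: ["get", "list", "watch", "update", "patch"]
```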
A
And
you
know
coming
from
the
google
point
of
view
and-
and
I
assume
the
red
hat
point
of
view
in
terms
of
previous
versions
of
openshift-
that's
a
kind
of
a
a
bit
of
a
surprise
right,
because
google
borg
cells
are
big
and-
and
you
know
are-
are
very
much
multi-tenant
and
shared
and
multi-purpose.
A
Whereas
I
think
you
know,
with
you
know,
early
on
services
like
gke,
you
know,
creating
you
know,
being
able
to
press
a
button
and
create
a
cluster
means
that
it's
very
easy
for
people
to
create
a
lot
of
clusters,
and
it
tends
towards
more
of
these
single
purpose
type
of
clusters.
And
I
think
what
that
means
is
that
in
the
the
kubernetes
ecosystem,
a
lot
of
the
cutting
edge
stuff.
Now
is
how
do
you
actually
start
creating
communications
and
and
stuff
across
different
different
clusters?
A
A
You
know
access
that
from
a
different
cluster,
so
creating
automation
around
cross-cluster
service,
provisioning
and
access
ends
up
being
a
really
interesting
area,
and
that's
one
of
the
things
that
I
want
to
look
at
the
the
palumi
operator
through
the
lens
of
it's.
B
Interesting. We actually have folks who create Kubernetes clusters from within their operator running in Kubernetes clusters, because you have full access to EKS, AKS, and GKE from within the Pulumi model; you can actually do that from inside the operator. Since we're talking about stacks: Stack is sort of the CRD, but a stack is a pretty primitive concept in Pulumi. Think of it as an environment, in a sense. You might have one project deployed to ten different stacks, right, and it might be production...
A
Of
that
at
least
be
able
to
map
those
dependencies.
Yes,
all
right,
let's
keep
going
okay,
so
so
we
have
the
so
we
have
the
service
count,
roll
and
roll
binding.
We
have
all
that
stuff
going.
You
know.
The
deployment
here
is
pretty
pretty
dead,
simple,
nothing
super
fancy,
it's
just
a
single
single
operator,
that's
running
and
it
uses
that
service
account
to
be
able
to
do
things.
A
One
of
the
interesting
things
here
is
that
you
know,
because
this
is
a
sort
of
a
per
name
space
thing
that
you're
that
you're
operating
within,
I
think
by
default.
At
least
you
know.
One
of
the
things
we
were
talking
about
before
we
went
live
is
that,
as
you
know,
if
you're
executing
a
paloomy
stack
that
talks
to
kubernetes
itself
by
default,
it
will
use
that
service
account
both
for
the
operator
but
also
for
executing
a
stack
and
so
that
becomes
available
through
that.
Also,
you
know.
A
One
of
the
things
that
might
be
interesting
here
would
be
having
a
different
service
account
for
execution
from
the
operator
itself,
so
that
maybe
you
can
constrain
or
expand
the
execution
context
by
default.
D
Yeah, one of the things that we have been talking about, and we hope to do very soon, is being able to do exactly the kind of thing you were talking about, and also to have the stack runs happen in a dedicated job, essentially. That can be a heavily pared-down environment, and it gives you that degree of a hermetic environment as well. Yeah, that's something.
A
So
talk
is
asking:
is
the
operator
using
global
credentials
to
talk
to
the
cloud
provider?
No,
I
think
what
happens
is
that
the
if
it's
kubernetes
and
you're
talking
to
the
local
cluster
it'll
actually
use
the
that's?
What
we're
talking
about
the
credentials
that
were
provided
to
the
that
service
account
that
we
just
created
here
would
be
used
there
by
default,
but
you
can
also,
when
you
define
the
stack
and
you'll
see
this.
A
One
of
the
things
that
you
put
into
the
stack
is
the
the
credentials
that
you're
using
or
you
can
pull
those
from
a
secret,
and
so
that's
why
we
we
created
that
that
secret
with
our
aws
credentials
earlier
on,
so
we'll
definitely
get
get
to
that.
Okay.
So
now,
I
think,
let's
see,
is
this
thing
that's
still
container
creating
it's
probably.
Is
it
downloading.
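The Secret mentioned here was created along these lines (the Secret name is hypothetical and the values are placeholders):

```shell
kubectl create secret generic pulumi-aws-secrets \
  --from-literal=AWS_ACCESS_KEY_ID=AKIA... \
  --from-literal=AWS_SECRET_ACCESS_KEY=...
```

The Stack CR can then reference these keys instead of baking credentials into the manifest.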
A
I
have
a
very
fast
internet
connection,
so
yeah
we're
up
and
running.
You
know
I'm
lucky
enough
here
in
seattle
to
have
gigabit
gigabit
ethernet,
so
I'm
getting
spoiled.
A
Yeah
joe
joe
took
the
opportunity
during
kova
to
move
out
to
the
islands
and
so
no
longer
here,
locally,
okay,
so
so
the
example
that
we're
gonna-
oh
wait.
We
weren't
gonna
work
through
this
example.
We
were
gonna
work
through
this
example
here,
okay,
so
the
the
this
is.
If
you
look
at
our
notes
here,
just
to
keep
everybody
on
the
same
page,
that's
the
tgik
operator
demo.
A
This
is
the
link
that
we're
working
through
right
now
here-
and
this
was
lee-
was-
was
nice
enough
to
pull
this
together
for
us?
Okay.
So
now
we
install
only
installs
the
operator
itself.
A
We
now
need
something
for
the
operator
to
reconcile,
and
so
the
way
that
this
works
is
that
we're
gonna
create
a
crd
and
the
crd
is
actually
going
to
talk
to
a
git
repo
to
be
able
to
actually
download
the
pollumi
code
that
we're
going
to
be
able
to
that
we're
going
to
actually
run
so.
This
is
the
pollumi
program
now
normally
correct
me.
If
I'm
wrong
guys.
Normally
when
you
run
paloomi-
and
you
have
something
you
run
paloomi
up,
the
the
execution
of
the
palumi
program
happens
locally.
B
It's
a
really
good
point
because
you
know
like
terraform
enterprise,
if
you're
familiar
with
that
model,
it's
actually
server
side
execution,
but
because
it's
client-side
it
means
your
credentials
can
live
wherever
they
typically
live.
You
know,
distribute
them
on
the
client.
If
you
have
a
ci
cd
pipeline,
that's
like
super
locked
down.
You
run
our
cli
from
there.
The
the
security
model
is
actually
a
lot
of
a
lot
of
folks.
We
work
with
prefer
that
model.
So
it's
an
important
point.
A
Because
when
you
deploy
something
you're
deploying
it
using
your
credentials
as
you,
so
there's
a
certain
sort
of
personal
attestation
around
that
yeah
I
mean
it's.
It's
interesting.
Some
people
like
that.
Some
people
like
to
be
able
to
say
have
this
running
on
a
machine
where
everybody
has
visibility
and
there's
a
lot
of
auditing.
I
can
definitely
see
pros
and
cons
around
that
for
sure
yeah.
A
Yeah, when I talk to folks inside of Tanzu about this, I often talk about the spectrum between YOLO ops and responsible ops, right? When you start off developing something, you just want to get it out there and not sweat the details.
B
Yeah, one cool thing, by the way: I know we're using the S3 backend, but if you use the Pulumi SaaS, it's got auditing built in. I think of it as almost: what GitHub is for your code changes, Pulumi is for your infrastructure changes. It tracks all of the fine-grained details of exactly who changed what and when, and it has a bunch of integrations with different identity providers, so if you want to use GitHub for auth, or Atlassian, or...
A
You might have a weird Node configuration on your computer, because everybody has a weird Node configuration on their computer, right? So it's not as hermetic as all that, but the results end up being auditable, because it's like: this person ran a program; we don't know exactly what they ran, but it caused this EC2 instance to be affected in this way. That's the stuff you all can catch, then.
B
Right
it,
it's
kind
of
it's
hard
to
wrap
your
head
around
how
pollumi
works,
initially
it's
somewhere
in
the
spectrum
of
imperative
to
declarative.
Sometimes
I
call
it
implarative.
It's
like
this
new
thing,
but
but
palumi
actually
produces
a
plan
in
an
object
graph
that
represents
the
desired
state
of
your
infrastructure
and
that's
actually
what
it
operates
on.
A
All
right,
so
the
way
that
this
works
is
that
when
we
run
this
stack
the
back
end,
this
is
actually
where
it's
going
to
be
storing
and
running.
Okay.
So
a
couple
of
questions
here
when
we
launch
the
the
so
the
back
end
is
that
specified
that's
not
specified
on
the
operator.
The
back
end
that
you're
using
is
actually
on
a
per
stack
point
of
view.
Is
that.
C
The backend is important; that's where the state is going to be stored. And then there's a KMS key as well in there, which will allow you to encrypt your state. We're going to be using a secret value, so we do need to actually create it. I can't recall if we created a KMS key or not at this point.
A
I'll
show
you
I
I
did
one
before
and
so
okay,
so
let's
go
through
and
we'll
do
it
this
way.
So
we
have.
This
is
called
tgik
demo.
I
I
named
the
file
myfirststack.yaml
the
back
end
here,
we're
going
to
be
using
s3,
and
so
my
s3
bucket
that
we
called
was
tgik
palumi
state
right.
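Pieced together from the demo, the Stack CR looks roughly like this. The repo URL and KMS ARN are placeholders, and the field names follow the operator's Stack schema as I understand it; check them against your operator version:

```yaml
apiVersion: pulumi.com/v1
kind: Stack
metadata:
  name: my-first-stack
spec:
  stack: tgik.dev                        # stack name within the backend
  projectRepo: https://github.com/example/tgik-operator-demo   # placeholder URL
  branch: refs/heads/main
  backend: s3://tgik-pulumi-state        # where state is stored
  secretsProvider: "awskms://arn:aws:kms:us-west-2:111122223333:alias/pulumi-secrets"  # placeholder ARN
  envRefs:                               # credentials pulled from a Secret
    AWS_ACCESS_KEY_ID:
      type: Secret
      secret: { name: pulumi-aws-secrets, key: AWS_ACCESS_KEY_ID }
    AWS_SECRET_ACCESS_KEY:
      type: Secret
      secret: { name: pulumi-aws-secrets, key: AWS_SECRET_ACCESS_KEY }
  destroyOnFinalize: true                # tear down cloud resources when the CR is deleted
```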
C
That wasn't... you obviously haven't used Pulumi before, because S3 bucket names are globally unique, right? So...
A
And, you know, I created a new Amazon account for this, just for TGIK, because my old Heptio one got chewed up at some point during the VMware transition. And then the Key Management Service key, which I also created earlier, is essentially just a symmetric key using the default settings, and I created an alias for it called pulumi-secrets. So I believe the way we do this is alias...
C
I did just double-check: you can use aliases, but you need to specify the region as a parameter.
C
Yeah, the ARN is unique, whereas if you use an alias, you can use it... there you go.
A
Gotcha, like that. Okay, well, we'll just do the ARN then. There we go. And so this is what you'd do from the command line, but here we're not initializing the stack from the command line; we're initializing the stack from the operator. The operator is going to be essentially running the Pulumi command line for us; that's what's happening. Okay, all right, so we have that going. Let's go through and see what we have here.
A
So
we
have
the
back
end
is
s3
the
secrets
provider
and
then
what
we're
doing
is
that
we're
providing
some
references
to
secrets
that
actually
have
our
access
key
and
then
we're
actually
saying
here's
the
region
that
we
have
going
on
there
and
then
the
stack
that
we're
calling
it.
So
this
is
the
name
of
the
stack.
So
tell
me
about
the
names
of
stacks
like
how
do
you
guys
name
stacks?
This
is
a
little
bit
confusing
to
me.
To
be
honest,.
C
So
with
our
with
our
sas,
you
have
the
concept
of
organizations
and
projects
that
live
in
inside
the
sas,
and
so
a
stack
name
inside
the
sas
is
simply
something
as
simple
as
dev.
You
can,
quite
happily
just
call
the
stack
dev
within
your
s3
bucket,
but
because
it's
at
a
global
level
in
the
s3
bucket,
you
can
end
up
with
stack
name.
C
Yeah, so what we would usually call this is something like tgik.dev, and if you wanted a completely reproducible example of this, you would call it tgik.prod. And obviously the Pulumi service has the RBAC control in there, so that's a really good reason to use it.
A
Okay,
so
then,
then
the
project
is
not
really
something
that
the
paloomi
open
source
stuff
understands
project
ends
up
really
being
something
that's
a
concept
on
the
sas
side,
and
so
really
at
the
end
of
the
day.
Each
you
know.
Typically,
people
would
use
the
same
program
across
all
stacks
in
a
project
it
seems
like.
But
like
really,
you
know,
every
stack
stands
on
its
own
and
has
its
own
program
associated
with
it.
And
what
have
you?
Okay,.
B
One useful thing about stacks, by the way, is config. I don't know if we're gonna touch on config today, but each stack has its own configuration. What's very common is you have one program that defines a configuration of some set of infrastructure, but then you want to conditionalize it based on: hey, my production instance, maybe I have three nodes in my...
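As a sketch of what per-stack config looks like from the CLI (the stack names and the nodeCount key are just examples):

```shell
pulumi -s tgik.dev  config set nodeCount 1
pulumi -s tgik.prod config set nodeCount 3
# the same program then reads the value, e.g. in TypeScript:
#   const n = new pulumi.Config().requireNumber("nodeCount");
```

One program, different values per environment, which is the conditionalizing being described.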
A
Okay,
okay,
I
think
I
get
that
so
the
stack
is
just
a
key
right
now
for
this.
If
you're
using
this
with
the
polumi
sas
service,
there's
some
other
sort
of
semantics
that
come
along
with
it
and
and
with
the
stack,
is
a
state
file,
a
set
of
parameters
and
there's,
and
typically,
if
you're,
using
the
command
line,
you
don't
have
a
pointer
to
a
git
repo,
because
you're
running
the
stuff
locally
here,
you're
actually
running
this
stuff
based
on
the
stuff
that
we're
seeing
in
the
command
line.
C
And then somebody asked a question: can you track a specific folder inside a repo? That is another part of the Stack CR as well. It's all really done from Git references at the end of the day, right? A common pattern is that people will put their Pulumi code next to their application code, so you can have an infra folder or a deploy folder or something like that, and the operator can point at those different folders as well.
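In Stack CR terms, that subfolder is selected with the repoDir field; a minimal fragment (the repo URL is hypothetical):

```yaml
spec:
  projectRepo: https://github.com/example/my-app   # app code and infra code together
  repoDir: infra/                                  # run Pulumi from this subfolder
  branch: refs/heads/main
```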
A
So
one
of
the
things
that
I
think
you
know-
and
we
probably
won't
have
time
to
explore.
It
is,
if
you
look
at
flux,
v2,
it's
essentially
a
toolkit
of
a
set
of
controllers,
and
one
of
these
things
is
essentially,
if
I'm,
if
I'm
finding
it
right
is,
is.
A
...knows how to reach out and track the latest Git SHA and stuff like that. So the idea is that instead of having something always grab latest, you could... so, one of the things we're doing with customizing some of the Tanzu stuff is having a duck-typed schema for CRs that represent essentially a Git reference. And so now what you can do is have something like Flux say...
A
Well,
every
time
this
branch
updates
update
this
git
reference
or
you
could
have
like.
Well,
you
know
somebody
has
to
sign
in
blood
to
update
the
git
reference,
and
so
there
ends
up
being
essentially
a
way
of
just
like
you're
referencing,
a
secret
to
reference,
a
get
reference
and
then
be
able
to
actually
stitch
that
into
other
pipelines,
so
yeah.
So
that's
really.
I.
C
I think something that's actually made me think of a feature we should add, Vivek: only being allowed to apply signed commits would be a very interesting way of running the operator.
D
Yeah, actually, those are all great points, and I think a lot of this has come through in the community at different points. In fact, one of the things that someone suggested was essentially globbing on to the source-control capabilities that Flux has, so if you were to say, hey, make a reference to a source-control...
D
You
know
cr,
essentially
as
your
as
your
as
your
you
know,
git
repository
kind
of
thing,
and
you
know
basically
have
that
interoperability
and
so,
for
example,
if
flux
has
like
all
these
capabilities
to
deal
with,
you
know
your
signed
commits
or
whatever
it
is.
You
know
all
of
those
things
you
kind
of
get
for
free.
You
know
and
to
be
honest,
like
tracking
source
control
is
not
like
the
thing
that
is
most.
A
...looking really interesting also, absolutely. All right, cool. And then destroy-on-finalize means that when I delete the CR, it's actually going to delete all the things that that CR created, so I don't leave orphans around, right? That's how I see it. Okay, so I'm going to try to deploy this, and then we're going to see what happens.
C
The moment of truth; let's see if I am actually a software developer.
D
So it's starting right now; let's see.
D
So, did you just make sure that the secret is correct?
A
Access key ID... kubectl get secret. I may flash my secrets on the screen here, in which case that's not the end...
E
Let's see... and it's...
C
Exactly. So, a question from Carlos: there is no UI for this? If you're using the S3 backend, you don't have a UI. If you're using the Pulumi SaaS, you do have a UI that has all the things like the audit logs that the other Joe mentioned.
C
And then: "I want to know if it watches the Git repo." It does indeed. You'll notice that when we define the Stack custom resource, it tracks a specific Git reference. In this particular case we are tracking the main branch rather than a specific Git commit.
C
But of course you could update the reference to be a tag or a Git commit hash, and what it's going to do is just pull down that Git reference and reconcile on that. How often does it poll? What's the default, Vivek? It's every five minutes or something, I...
C
Every minute, but it is configurable; like most other operators, you can configure the reconcile timeout. "Can it handle a Git webhook?" I'm going to leave that one to you, Vivek.
D
Essentially, it all comes down to what users really want to do, right? Some of these run in disconnected networks and things of that nature, where the operator is not able to take a Git hook...
D
Essentially
so
doing
a
polling
mechanism
seemed
like
the
safest
way
to
do
sort
of
simple
you
know,
get
cracking
but
at
the
same
time,
most
applicable
across
different
environments,
but
certainly
you
know
it's
all
open
source
so
very
happy
to
take
a
contribution
if
anybody's
interested.
C
So,
just
an
additional
addendum
to
that.
If
you
want
a
pollumi
program
that
tracks
a
git
web
hook,
you
can
feel
free
to
build
one
in
the
automation
api.
You
would
build
a
web
server
that
takes
a
web
hook
from
git
and
then,
when
you
receive
that
git
webhook,
you
can
trigger
a
a
run
with
the
automation
api.
We
don't
currently
have
that,
but
I'm
now
tempted
to
put
one
together
because
it
doesn't
seem
like
it
would
be
that
difficult
and
then
the
next
question
is:
does
this
request
replace
crossplane?
C
You
know
without
getting
into
you
know
the
competitive
landscape
of
infrastructure's
code?
Pollumi
has
a
lot
more
provider.
Support
supports
languages
that
you
would
normally
use
in
your
application
development
life
cycle.
So,
instead
of
having
to
define
everything
in
configuration
languages
like
yaml,
you
can
define
things
in
actual
programming
languages
that
you
use
for
all
your
application
development
lifecycles.
So
if
you
are
watching
along
and
you
have
used
typescript
to
build
a
front-end
application,
you
can
use
exactly
the
same
language.
C
A
lot
of
what
we're
doing
here
is
in
the
same
domain
as
crossplane,
but
we
have
you
know.
I
think
we're
up
to
65
providers
off
the
top
of
my
head.
I
can't
remember
exactly
how
many
but
yeah
it's
it's,
not
it's!
It's!
It's
competitive
with
with
cross
playing
at
this
point.
B
But
I
think
I
think
the
interesting
thing
is
the
programming
model
like
that,
that
that,
for
us
is
where
we
get
really
excited,
like
the
fact
that
you
can
build
a
little
go
program
that
actually
does
infrastructure
as
code
as
part
of
how
the
program
works,
like
our
operator
itself
is
really
exciting.
The
deployment
engine
and
the
deployment
technology
is
one
side
of
it,
but
the
programming
model
is
is,
is
completely
different
and
I
think
you're.
B
On
the
screen,
right
now,
for
example,
is
a
good
good
example
of
that.
A
I
I
got
stuck
on
something
else,
but
I
think
this
actually
is
is
some
of
the
things
that
I
think
people
are
likely
to
hit
if
they
use
this.
So
so
what
we
got
here
is
that
we're
running
this
so
first
of
all,
okay-
and
I
think
this
is
just
you
know-
feedback
here-
is
that
we
ran
this
thing
and
the
status
says
you
know
last
update
attempt
and
it
failed
right.
So
I
think
there's
this
question
of
okay.
If
it
failed,
how
did
it
fail?
What
happened?
A
So you guys are polling... okay, so here, let me reduce the screen a little bit. It failed because it's trying to create the namespace, and the service account that we gave this thing doesn't have access to create a namespace.
A
We
were
talking
about
is
that
when
so
so?
Okay,
so
let's
talk
about
the
program
that
we're
trying
to
run
here,
real
quick,
so
the
stack
that
we're
trying
to
run
is
coming
from
this
git
repo.
This
git
repo
here
is
this
git
repo
is
trying
to
create
a
namespace
in
in
the
in
the
local
kubernetes
cluster
and
from
there
it's
actually
using
this
module
called
kate's
db,
which
is
a
separate
file
to
create
a
a
wordpress
database,
and
then
it's
creating
a
wordpress
instance
based
on
that
database.
A
So
carlos
is
asking
that
describe
was
actually
it
wasn't
events,
it
was,
I
think,
standard.
I
don't
know
what
what
is
the
describe
here
doing.
A
So
these
are
events
that
are
okay,
so
these
are
events
that
are
getting
put
in
there:
okay
yeah
now,
if
I'm
running
with
the
pollumi
back
end,
does
it
actually
upload
the
runs
from
the
the
operator
into
the
plume
back
end?
Okay,
so
that's
something
that
I
would
get
and
I
may
be
able
to
like
if
I
wanted
to
so
that's
one
of
the
things
that
the
sort
of
affordances
you
get
if
you're
using
the
sas
backend.
A
So
this
is
the
stack
and
if
I
open
this
file,
this
ends
up
being.
This
is
essentially
the
state
file
that
we
have
here.
C
There are two ways I can think of to fix this problem: we can either update the role, making it a ClusterRole and allowing it to create resources, or I can very quickly commit a change to this repository in which we do everything in the default namespace. Well...
D
I don't think we do, but that's a great point. I think we...
A
Yeah, and I think these things always get stuck when you're talking about permissions, because the operator is acting as one set of credentials to do something, right? Same thing with AWS: you may have one set of credentials for accessing your state file and the KMS key, and you may want another set of credentials for actually executing the graph. So there are two sides of that coin there.
C
I
think
we
we
can,
we
can
define
a
new
service
account
that
will
have
cluster
admin
credentials
and
pass
it
to
the
deployment
pod.
I
think
that's
probably
going
to
take
longer.
Maybe.
A
Let's
do,
let's
do
cube
control,
we're
going
to
add,
create
cluster
roll,
binding
cluster
role,
equals.
A
Buster
admin
and
we
need
service
account.
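The full command being typed is along these lines (the binding and service-account names are assumed from a default-namespace install of the operator):

```shell
kubectl create clusterrolebinding pulumi-operator-cluster-admin \
  --clusterrole=cluster-admin \
  --serviceaccount=default:pulumi-kubernetes-operator
```

As discussed above, cluster-admin is the quick demo fix; a production setup would bind a much narrower role.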
E
Equals... what do we call it? Oh, "kates"... that's an ugly one.
B
And it's a very natural fit too, because the whole idea of infrastructure as code is that you have this goal state, this eventual consistency of your infrastructure. But you're right, you have to manually trigger it, so the marriage of infrastructure as code and the operator is actually a really nice fit; that's why this works so well.
D
It has reached that state, and so now, one thing we could do is kick the stack in some form, like make a configuration change there. But as far as the operator is concerned, it thought that this is... I mean, I could keep trying it, but this is a failed state, essentially.
A
Okay,
okay,
I
think
you
know.
One
thing
that
might
be
helpful
here
is
adopting
some
of
the
the
condition
pattern
to
actually
represent
sort
of
where
this
thing
is
at
right,
more
metadata
about
here's.
The
last
time
I
ran
here's,
the
state,
you
know,
there's
there's
a
bunch
of
conventions.
A
This
is
all
coming
from
the
k-native
world,
around
sort
of
the
the
conditions
and-
and
I
think,
if
you
know,
there's
a
there's
a
bit
of
an
effort
to
have
everybody
align
their
crds
with
a
set
of
conditions
which
then
allows
tooling
to
be
able
to
represent
and
show
those
conditions
in
a
in
a
good
way.
Okay,
this
thing
is:
why
are
we
not
deleting.
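The condition pattern being referred to would have the Stack's status carry entries like this. This is a sketch of the general Kubernetes/Knative convention, not the operator's actual status schema at the time; the reason and message are made up for illustration:

```yaml
status:
  conditions:
    - type: Ready
      status: "False"
      reason: UpdateFailed
      message: 'namespaces is forbidden: the operator service account cannot create the resource'
      lastTransitionTime: "2021-01-01T00:00:00Z"
```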
D
The
stack
while
you
are
yeah
because
of
the
destroying,
like
you
know,
delete
and
whatever
destroy
on
deletion
side
of
things
that
we've.
D
Sorry, yeah, I was talking the whole time. Okay, sorry. You could either remove the finalizer from the Stack, or you could, like you said, kill and restart the operator.
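Removing the finalizer by hand looks something like this (the Stack name is assumed; use with care, since it skips the destroy step the finalizer exists to run):

```shell
kubectl patch stack my-first-stack --type=merge \
  -p '{"metadata":{"finalizers":[]}}'
```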
C
So we should get a new pod any minute that picks up the new stack. Because I think we're using the default operator configuration, it's probably in a backoff, because it's failed so many times.
C
Reconciling at this point... there you go.
A
Oh, so it's doing a graceful shutdown, that's nice. You guys do the leader election using... yeah, it uses a ConfigMap; the ConfigMap is a lock. Sweet. Okay.
A
All
right,
so
one
of
the
things
I
think
you
know
I'll
start
yeah
I'll,
give
you
guys
some
impressions
here
so
like.
I
think
it
would
be
interesting
to
think
about
how
do
we
actually
create
use
this
to
create
sort
of
an
infrastructure
vending
machine,
because
right
now,
the
idea
here
is
that
you
know
somebody
can
can
can
activate
pollumi
here,
but
there's
no
opportunity.
A
You
know
you
probably
could
do
something
using
like
an
admission
controller
or
opa
or
something
like
that.
But
there
there's
not
a
lot
of
support
for
being
able
to.
You
know
lock
down
what
people
can
do,
and
so
I
think
you
know
one
of
the
interesting
things
here
is
that
could
we
combine
this
with
like?
Could
you
actually
say
like?
A
The
next
step
after
that
would
be
able
to
create
new
crds
that
are
essentially
aliases,
are
mapped
to
stack
instances,
and
so
then
you
could
say,
give
me
a
new
wordpress
instance,
and
you
don't
even
know
that
you're
using
polumi
under
the
covers
to
be
able
to
implement
that
give
me
the
new
wordpress
instance,
and
so
these
things
are
like
could
be
built
on
top
of
what
you
have
here
right
now,
but
starting
to
think
about
those
sort
of
cross,
namespace
roles
and
that
sort
of
infrastructure
vending
machine
point
of
view.
A
That's
something
that
I
think
you
know.
We
definitely
see
a
lot
of
usage
of
in
enterprises
right
and
now
what
you
could
do
is
you
could
say
like
well,
you
know
I
could
now
use
kubernetes
as
as
my
system
of
record,
for
being
able
to
manage
a
whole
bunch
of
of
you
know
configuration
across
clouds,
and
this
is
the
type
of
thing
that
you
see.
People
do
a
lot
with
things
like
cloud
formation
on
on
the
aws
world,
so
I
don't
know
is
that
something
that
you've
all
been
been
looking
at.
Yeah.
B
For
sure
we
actually
just
launched
a
registry
recently,
and
the
registry
is
actually
a
collection
of
all
of
our
providers,
but
also
off-the-shelf
components
for
common
patterns,
reference
architectures
or
as
you're
saying,
like
kind
of
templates,
you
know
we're
working
on
a
private
registry.
So
if
you're
an
enterprise,
you
can
have
your
own
set
of
blueprints.
You
know
like
we
work
with
a
lot
of
folks
who
you
know
have
standard
architectures
that
they
just
want
to
stamp
out.
B
You
know
lots
of
instances
of
kind
of
to
your
point,
and
so
it
actually
is
really
really
good
fit
for
that.
You
kind
of
mentioned
bulpa
as
well,
and
you
know
policy
and
we
actually
have
a
policy
as
code
angle
as
well.
So
if
you
want
to
make
sure
people
aren't,
you
know
violating
costs,
policies
or
compliance
or
whatever
you
can
integrate
there
as
well,
and
that
works
with
all
these
sort
of
components
and
templates.
Also,
so
absolutely
we're
just
sort
of
scratching
the
surface.
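To make the policy-as-code idea concrete: the real feature is Pulumi CrossGuard (the `@pulumi/policy` package), where a policy pack validates each resource's inputs before deployment. The standalone TypeScript below is only a mock of that shape, not the actual CrossGuard API; the resource type token and `publiclyAccessible` property are real AWS provider names, while the validator harness itself is hypothetical:

```typescript
// Hypothetical stand-in for a CrossGuard-style resource validation:
// given a resource's type token and input properties, report violations.
type ResourceArgs = { type: string; props: Record<string, unknown> };

function validateNoPublicDb(resource: ResourceArgs): string[] {
  const violations: string[] = [];
  // Flag RDS instances that are publicly accessible (like the demo's "scary database").
  if (resource.type === "aws:rds/instance:Instance" &&
      resource.props["publiclyAccessible"] === true) {
    violations.push("RDS instances must not be publicly accessible");
  }
  return violations;
}

// Example: the demo's scary database would be flagged.
const scaryDb: ResourceArgs = {
  type: "aws:rds/instance:Instance",
  props: { publiclyAccessible: true, instanceClass: "db.t3.micro" },
};
console.log(validateNoPublicDb(scaryDb));
```

In real CrossGuard, a function like this would be registered in a `PolicyPack` and run automatically on every `pulumi up`.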
D
The default backoff... sorry, the wait period is 300 seconds, like five minutes, at the moment, which is...

D
Why the... yeah.

D
Yeah, it actually killed it, but the termination window or whatever is set to 300 seconds, yeah.

A
No, but you know, I think a lot of people enjoy this about TGIK, because this is the type of stuff where you pick up all the little tips and tricks: you know, what works and what doesn't work, and what happens if you hit these things in reality, yeah.

D
Yeah, so this is actually one of the side effects of kind of force-destroying things halfway. The state of the stack is seeing that there were some things that were said to be terminated, and, you know, Pulumi by default tries to be safe about destroying or acting on things that are currently in progress, basically. So yeah, that's right.

A
Yeah, the S3 backend too, so we'll do that, yeah.
C
So
for
those
following
along
at
home,
the
reason
that
this
is
actually
happening
is,
if
you
provision
something
with
palumi
and
palumi,
can't
verify
that
that
api
operation
succeeded
or
not.
It
will
tell
you
that
it
has
something
called
pending
operations
in
in
in
process
if
you
use
other
infrastructures
code
tools.
This
is
a
common
thing
in
which
you
know
the
api.
C
You,
you
have
a
flaky
network
or
you
know
your
access
to
that
api
doesn't
work
lots
of
different
reasons
why
this
might
happen,
and
so
what
you
have
to
do
with
in
paloomi
is
actually
say
which
of
those
pending
operations
have
completed
and
which
haven't
completed.
So
what
you
would
normally
do
is
go
into
the
console.
Look
at
all
the
different
parts
of
the
look
at
all
the
different
parts
of
the
stack
that
have
a
succeeding
which
haven't
succeeded,
reconcile
those
manually.
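That manual reconciliation can be done against an exported checkpoint: `pulumi stack export` writes the state as JSON with a `deployment.pending_operations` array, you remove the entries you have verified actually completed, and `pulumi stack import` loads it back. A minimal sketch of the edit step, assuming that exported-state layout; verify against your own export before importing:

```typescript
import * as fs from "fs";

// Remove all pending operations from an exported Pulumi checkpoint file.
// Only do this after verifying (e.g. in the AWS console) that the
// operations really finished; this is exactly the safety gate Pulumi
// is asking you to confirm.
function clearPendingOperations(statePath: string): number {
  const state = JSON.parse(fs.readFileSync(statePath, "utf8"));
  const pending: unknown[] = state.deployment?.pending_operations ?? [];
  if (state.deployment) {
    delete state.deployment.pending_operations;
  }
  fs.writeFileSync(statePath, JSON.stringify(state, null, 2));
  return pending.length; // how many operations were cleared
}
```

Roughly: `pulumi stack export --file state.json`, run the edit, then `pulumi stack import --file state.json`.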
A
Here we have... it's working. Well, we're using kind, so we're not going to get a database out of it.

C
Though currently it's using a local database; in the next step we will be...

A
Very cool. I can set up WordPress and everything.

C
So, now that we've provisioned that main branch, if you go and take a look in the actual git repository, I can kind of talk you through what the changes will be here. So right now, if you look in the index.ts... sorry, we'll just talk about what we've currently provisioned. We have a namespace, obviously, which is pretty familiar if you're used to the kind of YAML and Kubernetes interface, and then we have a Pulumi component resource in which we've provisioned a MySQL database.

C
Obviously this is not going to do as much good in production, because this is a temporary MySQL database: if we restart this pod, we don't have any persistent storage. And then we also have a component resource, and you can think of these as reusable pieces of infrastructure as code, and so we've provided a WordPress instance to talk to that database.

C
And you can see there are lots of best practices implemented in this particular component. Like, you know, I'm only providing the inputs that you might actually want to configure in WordPress. So you don't want to mess around with the ports, because you know what port it's going to run on and all that kind of stuff. So what we can do now is we can...

A
...a bunch of different parts. Let's look at the index again. So now, instead of actually allocating the database on-cluster, we're actually doing RDS, right?

C
Yes, and you can notice I can actually pass the outputs of that creation directly to my Kubernetes... my WordPress resource. So we're going to create... not a manual one, we're going to create a random password for the database. We're going to be able to use that within Pulumi and pass it to the RDS database. I've called it a scary database because it's going to have to be open to the public for this demo, because we're using kind locally.

C
If you were doing this in production, you would use something like EKS and use a private subnet and all that kind of stuff. But, you know, for the purposes of this demo we'll call it a scary database, and then I can actually pass the result of that scary database to the WordPress instance as an output. I don't have to switch between tools; I don't have to manually pass anything; and if I make any changes and those values update, Pulumi can reconcile them.

C
So all we would need to do now is update our stack deployment to point to the production branch as a reference, and it should update everything. I'm gonna...

C
Yeah, you can. So, not the namespace. If you go back to the code real quick, I'll show you how you would, for Kubernetes at least: you can manually specify those values. So if you look at the namespace, we've got metadata.name there.

C
In order to actually change that resource, you need to create a complete replacement, and so the random IDs allow you to do blue-green deployments: Pulumi will automatically create the replacement, add it behind the load balancer, update it, and everything will happen, and it happens in the right order.
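The auto-naming behavior Lee is describing is what makes those replacements possible: when you don't pin `metadata.name`, Pulumi derives the physical name from the logical resource name plus a random suffix, so the replacement can exist alongside the old resource before the old one is deleted. A toy illustration of the idea (the exact suffix format Pulumi uses is an internal detail; this is not its actual code):

```typescript
import { randomBytes } from "crypto";

// Sketch of auto-naming: logical name plus a short random hex suffix,
// so two generations of the "same" resource never collide.
function autoName(logicalName: string): string {
  return `${logicalName}-${randomBytes(4).toString("hex")}`;
}

const gen1 = autoName("wordpress-ns");
const gen2 = autoName("wordpress-ns"); // the replacement during a blue-green update
console.log(gen1, gen2);
```

Pinning `metadata.name` opts out of this, which is why a pinned resource forces a delete-then-create instead.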
C
No, but I'm with Vivek: we've had enough. I've had enough heart attacks in the last 20 minutes, so I think the third stack would be a good way to...

D
So that's the one that I think we might get a collision on.

A
That should be okay, because I would assume that, like, if WordPress already exists, it's like, oh well, right, well...

A
Oh yes, dash-two, yeah, okay. Let me look to see what our current state in the cluster here is.

C
I just want to address yuka zuka: yes, I am very aware that that MySQL instance is not the way to do it. It's called a scary database for a reason; it's mainly to facilitate the demonstration. I would not be...

C
I would not be advocating for this in most situations, and what I'll do after this is finished is go and make this a proper production-level database with an EKS cluster and all that kind of stuff, so if anybody watches the stream, they can come along and see what it looks like. Okay.

A
Yeah, and we usually check in these notes; I'll probably do it sometime this weekend or next week, into a GitHub repo, to live long term, so everybody can... and if you want to update that stuff, we can keep it up to date. Yes, absolutely. All right, so now this thing is configured and it's actually supposedly running.

E
Production, right? Let me double-check.

C
Think it's targeting the wrong stack at the moment.

A
So which of these succeeded? So here's the one that... that's the old one that failed.

D
If you look at the stack resource itself, like the YAML, that might tell you.

D
Interesting. Let's see if there are any more updates. Did we, by any chance, use the S3 backend without the dash-2 before somehow?

A
I don't think I did, but yeah.

A
Delete the objects there and then kubectl delete.

D
Super helpful. All right, so I see something new: "unable to checkout branch: reference not found". Interesting. So let's make sure that we're passing...

C
...the right reference, HEAD.

D
Yeah, you can just use the commit directly.

C
You can see that I had this working with that exact YAML earlier, so I'm confused as to why that isn't working either.

D
Let's delete it. It's not a fat-finger thing, right? Just make sure that production is spelled correctly. Okay, cool, sounds good. I would just use...

D
...the commit, and that will do a specific commit. So the...

D
No, just the SHA; you could just specify that. So just replace branch with commit, and then just the actual SHA that you've got there. There we go.
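In Stack CR terms, that fix is just swapping the branch reference for a pinned commit in the spec (the repo URL and SHA below are placeholders):

```yaml
spec:
  projectRepo: https://github.com/myorg/wordpress-iac   # placeholder repo
  # branch: production          # track a moving branch...
  commit: 0123456789abcdef0123456789abcdef01234567      # ...or pin an exact SHA (placeholder)
```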
A
Yeah, for those who don't know, Joe has a long history doing .NET stuff back... you cannot...

A
Reconciler error: tgik-demo2, namespace default: failed to create local workspace, failed to create workspace... git repo: unable to checkout branch, reference not found. Could there maybe be some bug where we're actually getting some contamination from one run to the other?

C
I can't see why that would be, but I have been wrong before.

D
Make sure that there's only... now there is only one stack at the moment, right? Like, make sure that...

A
Let's just, in the interest of trying to actually get this done...

C
...I mean, what I can do is just merge the production branch into main, and that will almost certainly...

D
...wrong. Everything's matched here, so I don't quite understand what's going on here.

E
I wanted to do this there.

A
It created the namespace and...

C
So you should be able to go in here and go into RDS, and you should see a database being created.

C
So if you drill down all the way into the stack itself, you can go in here, into the Pulumi folder, go to the stack, go into the stack JSON (I think it's that one, right? Yeah) and then actually look at the actual contents here. You should be able to Ctrl-F and search for "password". It's a bit small, so I can't actually see it at the moment... unfortunately... there we go, much better.
C
Something I think is interesting about Pulumi's secrets support: if you keep scrolling down here, you should be able to see... in lots of other infrastructure-as-code tools, the password itself is stored in the state file, and so your...

C
Exactly. However, when we specified that KMS key when we stood up the actual stack itself, we encrypted that password.

C
It's using the KMS key that you've specified in the stack to actually encrypt that value, and so you no longer have to kind of worry. And this is a great value proposition when it comes to the Pulumi SaaS as well: one of the things that people say to us is, we'd love to use your SaaS, but we don't want to give all of our database passwords to you and give all of our API keys to you. And we just say the same thing: you can just pass a single command-line flag with your own KMS key, and we can't actually view them at all, so it's all stored in an encrypted manner.

C
Obviously, the secret itself in the Kubernetes cluster is encrypted by whatever the Kubernetes cluster would use. So, you know, if you're using encrypted etcd and all that kind of stuff, yeah, that's there. But I think this is really useful, because what's happening inside the operator is: the operator is making a call to AWS KMS when it's doing its run, decrypting that value with the KMS key, and then provisioning the actual resources.

C
It never leaves your security boundary, and the operator itself, despite all of the kind of teething issues that we've had so far, is just a great mechanism for doing continuous delivery within your security boundary, and so you never have to worry about those kinds of things. We can quickly go through... One of the things I also think is interesting as well is that, you know, Joe talked earlier about this difference between imperative and declarative in the actual...
C
But you'll notice, in the actual code itself, if you go into the index.ts, yeah, in here, you'll notice that we're creating a secure... a scary database, and that has a bunch of what we call, in Pulumi, outputs, right?

C
So that's the result of the API call to AWS when you create a database, and Pulumi knows that, because we're passing the output from the scary database to our WordPress instance.

C
That means that Pulumi knows to create those things in order. So everybody will always say, well, what about the difference between imperative and declarative? Because we're passing outputs from one resource to the inputs of another resource, it knows the order in which to create things. And because the secret itself isn't being passed to anything else, the secret is ready to be created, so it will do things in parallel; it will do things much quicker.

A
Okay, so essentially the model is that you run this program, it generates essentially an execution graph, and one of the items in the execution graph is "create a secret for me," and then pass that secret around. But the program... this thing runs; it's already done and completed running this program. Now the Pulumi execution engine is actually going through and executing sort of the results of the program to make stuff work. Okay.

B
Right, it's a DAG, and all these outputs form the edges between nodes in the graph. And it's actually really cool, because this is where a background in programming languages was actually helpful to building Pulumi, because these...

B
...are promises, right. But building on Lee's point about secrets: the engine understands secrets in a deep way, so transitively we can encrypt anything that secret touches. So you really can be sure it's not going to leak the secret.
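The DAG-of-promises idea can be sketched without any Pulumi machinery: each resource's outputs are promise-like values, and passing one resource's output as another's input is what creates the edge, so independent resources resolve in parallel while dependents wait. A minimal sketch; the resource names are made up, and this is an illustration of the concept, not the engine's actual code:

```typescript
// Each "resource" creation returns a promise of its outputs.
// Passing one resource's promise into another's dependency list is the
// edge in the DAG; anything with no incoming edges starts immediately.
const order: string[] = [];

function createResource(name: string, deps: Promise<unknown>[] = []): Promise<string> {
  return Promise.all(deps).then(() => {
    order.push(name);          // record completion order
    return `${name}-output`;   // the "output" downstream resources consume
  });
}

const password = createResource("randomPassword");              // no deps
const namespace = createResource("namespace");                  // no deps: runs in parallel
const db = createResource("rdsDatabase", [password]);           // waits for the password
const wordpress = createResource("wordpress", [db, namespace]); // waits for both

wordpress.then(() => console.log(order)); // wordpress always completes last
```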
C
Another failure. I feel like this entire day has been a failure at this point. What is the error?

D
Something because we're dealing with the same stack, essentially, yeah. So for some reason, like, perhaps before, it tried to... we didn't get rid of the state, I guess, right? Like it had tried to...

C
Well, I would have expected the WordPress deployment to have completed by now. Does...

A
Yeah, okay, so I think we got the data. So I think what happened here is that we got stuck, and we tried to hack our way out of it, and we had some concurrent stuff going on, and that caused us some problems here. So let's give it one more try. What I'm going to go ahead and do is go through and see if I can repair some stuff: we're going to go through and delete the...

A
Okay, we'll go to Octant here and we'll look at custom resources. This stack here: we will remove the finalizer from it, because that's probably going to break it, because...

C
Well, what it will do is wait until RDS returns a deletion operation. And I think, you know, one of the things that we talked about when we were putting this demo together is: is RDS a good way of showing this? And I think we wanted to really see if we could show the actual way that you would do this with an actual application, but RDS can often take between three to five minutes.

A
Yeah, I think there were some sort of weird concurrency things going on, because we were mucking with things behind the scenes here. Okay, so now we've deleted the stack; I'm going to delete the state file here.

C
Something I've been advocating for a while, and I'm going to advocate for again: we should have a "pulumi stack pending operations: yes, they're okay, I'm really sure" flag, which is basically, I know my RDS database provisioned correctly. The current way of doing it is that you would actually kind of modify the state itself and remove those pending operations.

C
We have got some UI improvements for the CLI in proposal to actually make that a little bit easier. And again, it's mainly a safety mechanism, right? What Pulumi is telling you is: I don't know what happened when I created this database, and I really don't want to screw it up.

C
Can you verify that these pending operations actually succeeded? Because otherwise you could end up with accidental database deletions and production outages. So those pending operations are just a gate for you to say: make sure that this actually finished, basically.
A
Yeah, and I think this is a good lesson: automation like this is essentially a space laser, and if you're not careful, you can have collateral damage, right? You can actually delete stuff without recognizing that you're deleting it, and you can do it at scale, right?

C
It's interesting, because we do have quite a few customers using this operator, you know, some very large companies that I won't name, that you would certainly have heard of, using this operator in a production capacity, and, you know, provisioning really large numbers of resources. So, you know, the joys of a live demo.

A
Yeah, I'm sorry, you know, I tickled something wrong here. Okay, so here's what I did, just to be clear: I deleted the stacks and the CRDs for the stacks, I then deleted the state files, and then I deleted the namespace, and then I restarted the pod. So I think we should be sort of as reset as we're going to get. Yeah.

C
I would also just double-check that there are no other namespaces in the actual cluster, like the WordPress one, because I think what happened there is that we didn't delete the WordPress namespace and it tried to create another one, so just double-check that.

A
I think we're good: no other namespaces here, and I did do all three things.

C
Otherwise, it would try to create a database called wordpress that already existed, and it would run into an issue again. And so, any minute now, it should throw up another database.

D
Can you look at the events? Sure, yeah.

A
Yeah, it should have shown us events, and so, yeah, conditions... no conditions. There's no... it usually shows events here, right?

D
That's right. So it's doing its thing; it's in the middle of doing a pulumi up at the moment.

D
If you were using, like, the production... rather, the... you would be able to see the status of what's going on.

C
Yes: because we're passing the outputs from the RDS database to the WordPress deployment, the namespace should exist and the secret should exist now, because those outputs don't have any dependencies. So the names... there it is, that's the WordPress namespace, and there should be... that's the password that's going to get stored, which has been created by the random provider.

C
If somebody wants to type that out and base64-decode it, they can go right ahead, but...
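For anyone following along: Kubernetes Secret values are only base64-encoded, not encrypted, which is why typing the on-screen value into a decoder would reveal the password. A quick sketch of the decode; the encoded value here is a harmless placeholder, not the demo's real password:

```typescript
// Kubernetes stores Secret data base64-encoded; decoding is trivial.
const encoded = "aHVudGVyMg=="; // placeholder value: base64 for "hunter2"
const decoded = Buffer.from(encoded, "base64").toString("utf8");
console.log(decoded); // hunter2
```

kubectl will do the same, e.g. `kubectl get secret <name> -o jsonpath='{.data.password}' | base64 -d`, which is why the encrypted-etcd point above matters.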
C
Yes, so again, the way Pulumi works is: if you go into the RDS management tab right now, and into the actual creating database, there's a bunch of properties here, like endpoint, port, all those different things, that eventually the AWS API will return to the Pulumi provider and say, okay, this is now finished creating. Now that we know the endpoint, I can pass that endpoint to a new resource and I can start creating that, and...

C
...that's how Pulumi's DAG works, and that allows you to, you know, do things in a specific order. Obviously, we're going to have to wait for WordPress... sorry, the database, to get created; it usually takes about three or four minutes.

C
I do think there are definitely options for us to improve here, right? Like, there are definitely things that we can go ahead and make better here, especially around the concurrency. You should be able to see the actual... if you do a kubectl describe on the stack, what you should be able to see is, on the stack resource...

C
...what you should be able to see is the Pulumi Automation API's structured output when things go wrong, so like, if... right.

C
Exactly, and you should be able to open this up with a port-forward, and it should just be a standard WordPress instance.
A
All right, well, you know, thanks guys. I think I'm going to go ahead and we can wrap up. We found our way through; we got it working; we hit a few snags along the way. But I think, honestly, I learn so much more when things don't go well than when they do, and we were definitely coloring outside the lines a little bit here.

A
I think the lesson that I got is that if you launch two stacks that are trying to actually allocate the same resource without using that resource-renaming thing, you've got to be really careful about those explicit references versus the sort of automatically created references. And I think, you know, if you're really buying into Pulumi up and down the stack, then you let Pulumi manage the real names of everything, because you don't need stable references outside.

A
So yeah, it was fun to debug it and go through it, and I'm excited about the idea of actually using Pulumi like this. One of the things I think is: everybody sees the promise of operators in the Kubernetes world, and it's a pain in the butt to write operators. I think some of the stuff that we dealt with here speaks to some of that. But a lot of times, what we want to do with operators is, we want to be like, well...

A
...I want to use this operator essentially as a macro to generate a bunch of other Kubernetes resources and manage them, or to create cloud resources and manage them. And so I'm excited about viewing this as a way to create a special type of operator, right? How can we use Pulumi as an operator toolkit to be able to manage other things on Kubernetes, or other things that are happening? So I think that ends up being really interesting and cool to me, so yeah.

B
Well, I was just going to thank Lee and Vivek, because they prepared, you know, all the goods today, and I appreciate it; they did all the hard work. Well, aside from you, Joe, who I just...

B
Yes, well, no, it was great, honestly. I'm excited about some of those opportunities that you were talking about, Joe. You know, the Automation API that we talked about earlier: you can literally just write Go and embed these IaC capabilities inside of it, and that works really well with the Operator SDK, and so I'm really excited to explore that.

A
The intersection and the interoperability of these things starts to look really cool, especially, like I mentioned, things like Cartographer as a way to do this stuff, you know, other parts of the cloud-native ecosystem. I think the interaction with, like, the Flux GitOps toolkit, and being able to be much more explicit about when I actually want to promote something into being used by a stack... like, I don't know, I think there's lots of exciting stuff going on there. So yeah.

A
It's the appropriate speed, and we're going to have folks coming back. And next week... I don't have the schedule in front of me, but I'm going to work to get that posted to the TGIK GitHub repo, and I'll be tweeting about that, so we're getting better about actually planning this stuff. So thanks again to the Pulumi folks for helping us out, and thank you to everybody who joined us, and have a great weekend, everybody.