B
So this should hopefully be quick and not particularly interesting, but I think it has a little bit of potential impact on some of the work we may consider doing in the future, so I thought I'd bring it up anyway. This is not up for review yet, but it's probably going to be up for review tomorrow, or maybe tonight, so I'll just start by sharing.
B
Basically, what this demo is about: there's a Kubernetes project — actually under the Kubernetes special interest groups (SIGs) umbrella — called external-dns. The very short description of it is that it's a controller that more or less runs inside your cluster and can look for either CRDs (so proper, full resources) or just simple annotations on certain resources, and create DNS records for you in whatever DNS system you choose. There are pluggable backends for all sorts of stuff, including things like Google Cloud DNS, Route 53, Cloudflare, Akamai —
B
you know, PowerDNS — all of the backends are there. It's something I've been keeping my eye on for a while for us to leverage, and I think it would actually be really beneficial for us to leverage it. So the demo here is just a very quick overview of it running and doing its thing in dry-run mode, so I'm not actually changing any DNS entries or anything like that. If you look here, this is just under our GitLab helmfiles, you know — just a normal release.
B
If I show you the helmfile itself, you'll see all of this stuff is very boring, nothing exciting at all. There's a chart available for this from Bitnami, or somewhere upstream, that I'm just pulling in; I've got my own chart here which basically wraps around it. So if you look at the requirements for this chart, you'll see it just wraps around the upstream external-dns chart, and the only thing my chart actually does is create one secret, which is the password for it. So: fairly simple, not very exciting at all.
B
Well, while we're waiting for that, I may as well show you how you would actually utilize this. So if I look at gitlab-monitoring — this is just our current GitLab monitoring umbrella chart — and you reduce this down a little bit, you'll see this line down here: on the service for Prometheus I've added a new annotation, which is just called external-dns.alpha.kubernetes.io/hostname, followed by a fully qualified DNS record to create. So this is just Prometheus.
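(For reference, the annotation being described looks roughly like this on a Service. The hostname value and port here are placeholders, not the actual record from the demo:)

```yaml
apiVersion: v1
kind: Service
metadata:
  name: prometheus
  annotations:
    # external-dns watches for this annotation and creates the record.
    external-dns.alpha.kubernetes.io/hostname: prometheus.example.com
spec:
  type: LoadBalancer
  selector:
    app: prometheus
  ports:
    - port: 9090
```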
B
You can see this is minikube, by the way — this is all just testing. So it's basically about putting this annotation on, with whatever DNS record you would like, and it will create an A record pointing to that service. Typically you wouldn't do this for, obviously, things like cluster IPs and anything not externally accessible, but for things that you expose via external load balancers — and you can put the annotation on Ingress objects as well — it will basically automatically create that external DNS record for you. Here we go: our chart's now installed.
B
You can see here it basically spits out a bunch of configuration. The important part is this: it's just running in dry-run mode — no changes to DNS records will be made. You can see it's looked at every single service we already have, across all namespaces, looking for that annotation that tells it, "I need to pick this up and do some work." It can't find anything, so what I'm going to do while I'm waiting for that is go back.
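(A sketch of the kind of chart values that produce this behaviour, assuming the upstream chart being wrapped exposes the usual external-dns settings; the provider and domain are illustrative:)

```yaml
provider: cloudflare    # any supported DNS backend
domainFilters:
  - example.com         # only manage records in this zone
dryRun: true            # log intended changes, touch nothing
sources:
  - service
  - ingress
```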
B
So I'm just installing basically our standard GitLab monitoring Prometheus stack into minikube, but, as I pointed out, I've put that extra annotation on there, and eventually, once this is installed and running, it should get picked up — my point being, this might take a little bit of time to act while we install this chart.
B
Yeah, it's bad, because when you're trying to debug it there's no way to actually say, "actually, I would like to see the secret so I can debug this." So it's kind of a bad thing in a different way. But yeah, that caught me, because I just naturally upgraded the plugin without thinking about it, and then I just couldn't see secrets anymore.
B
So, this is going to take a while, unfortunately, as it downloads all the Docker containers. While that's waiting, I can probably talk a little bit about how this is useful. There are a lot of places currently, in our helm charts and our helm values, where what we do is: we create an address in Terraform, we create DNS records in Terraform, and then we have to run gcloud commands in our helmfile to pull that address out.
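(Something like the following is the pattern being described — a hypothetical helmfile values template that shells out to gcloud to recover an address Terraform created, so the Kubernetes load balancer reuses the IP that DNS already points at; the address name and region are made up:)

```yaml
# values.yaml.gotmpl (hypothetical)
service:
  loadBalancerIP: '{{ exec "gcloud" (list "compute" "addresses" "describe" "my-app" "--region" "us-east1" "--format" "value(address)") }}'
```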
B
So we can basically make sure that the load balancer we set up in Kubernetes uses that address — has that same IP. And the only reason we do all that is because Terraform manages our DNS, and we want to make sure that when we set up a DNS record it points correctly at the GKE load balancer. Once this service is available, in a state where we feel confident in using it, that whole dance goes away. And it's kind of stopped here now, so you can see:
B
This record is probably the most interesting one to people, and then there are the two at the bottom. Basically, it's picked up our Prometheus service and it says: I'm going to create an A record for the prometheus GKE hostname, and it's going to create some TXT records as well. But — where's the A record in here... here we go: an A record, 172.17.0.2, which is of course minikube's internal IP, because this is just running in minikube. In a real environment that would be a real IP.
B
So sorry, I interrupted myself there. But yeah — one of the big use cases is that we can get rid of all of that clunkiness by having this service available. Whenever we deploy something in Kubernetes and we want it to be made available externally, we just tell it to create a load balancer, and we can also just pop that annotation straight onto those services, and it will go into Cloudflare and create the DNS record for us. Now, obviously, the big thing with that is: how do you do this safely?
B
You know, if this is messing with our real DNS zones, how are we going to make sure it does the right thing? That's where it gets really good: external-dns has a few safety mechanisms built into it, specifically to try and solve those problems. The big ones are: it will not modify any DNS record that already exists — if it sees something that's already there, it will just say, "hey, I'm not going to touch this."
"This doesn't look like it's managed by me" — and it just logs that. The other thing it does is that when it creates a new record, it also creates a TXT record with a bunch of information saying: "hey, I am an external-dns instance running in this cluster, with this specific identifier; I created and manage this record."
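(Both safety mechanisms map to controller settings; a sketch of how they might be set through chart values — the owner id is illustrative:)

```yaml
policy: upsert-only     # never delete records; only create/update
registry: txt           # track ownership via TXT records
txtOwnerId: my-cluster  # identifier written into the ownership TXT record
```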
B
So that's pretty much the end of the demo. As I mentioned, I think getting away from managing those pieces in Terraform — so that if I want to deploy something in Kubernetes I don't need to go and do a bit of Terraform and then a bit of glue around it — all of that goes away. The other thing I think could be really interesting to think about, from a delivery perspective, is that we can start creating ad hoc DNS.
B
We can create ad hoc Kubernetes services of type LoadBalancer with any DNS name we want, which means we could start doing more interesting things like blue/green deployments. We could have completely separate helm releases of, say, GitLab pods; we could have services that we set up easily, with a full DNS record, that we could give to HAProxy to front-end, and all of that kind of stuff — just being able to really quickly and easily generate DNS entries for any services.
A
Route 53 — yeah, and it generates the endpoint right there, and it actually works really, really well. I think we had one problem with it at the beginning, where we generated so many records that it obviously blocked us, so we kind of had to pare it down and do some tricks around it. But I think that was a super old release — I don't think we've recreated this in a while, which means that it just works, and it works really well.
A
I think external-dns would be interesting for us in other environments — not necessarily production, but definitely something like staging, where we would be able to test things quickly. And when we get to the situation where the database can be easily torn down and brought back up, we would be able to give more different environments to everyone to test changes. So I'm all up for this type of testing, if you want to do it, yeah.
B
And I think, even if it's not necessarily for the GitLab helm releases — just like in my example with Prometheus — we've got more people spinning small things up, and it's great if, to do that, they don't have to go around and, you know, Terraform stuff: it's all there at once, and you just ship it and away you go.
B
No — only mode, so, like, just completely dry-run, not doing anything special. And then I might do a mini readiness-review kind of thing, or just a discussion around DNS — basically, I want to make sure that people like Hendrik, who are very familiar with the DNS infrastructure, confirm there are no red flags or anything we need to watch out for. And then hopefully I want to start using it, even just for internal services — gitlab.net things, nothing gitlab.com — and then we can just start.
D
Currently, no, because the endpoint for registry is still using HAProxy. But I think we've been discussing it — the only reason we use HAProxy for registry is because of canary, which is doing, you know, request-path-based routing, and there's no reason why we couldn't rig something up in Kubernetes that did something similar. I think we've talked about it, so yeah — I mean, we don't have an issue tracking that; I should go open one up.
D
So if you do a registry pull — like a docker pull of a gitlab.com registry image — it uses the canary pod. It does not get a lot of traffic, though. We've thought about using weights to shift a percentage of the traffic over to canary — that's another option — but doing this in HAProxy seems a bit silly when the facilities already exist in Kubernetes to do it, so we could probably just get rid of the HAProxy layer at some point.
D
We always have — well, not always, but we had a registry canary before, and we'll probably do the same thing for the front end when we create it in Kubernetes. We have a canary namespace in Kubernetes, which we'll test out first, and we'll just serve canary gitlab.com traffic to those pods for testing. Eventually we'll probably consider doing something different, but this is what we have for now.
B
I think, with thinking about doing something different as well — we have potentially other issues, right? Like with things like Sidekiq. For example, if you try and have two sets of Sidekiq pods running, do we have a way to say: "hey, you, the Sidekiq pods over there — you're running, but please don't do any work; we want someone else to do the work"? Yeah.
D
We don't have any. If you're talking about, like, a canary Sidekiq, or having namespaces, there's kind of a long-standing issue for that. It doesn't exist — it's not something we could do currently, because everything shares the same Redis pool; there's no namespacing for Redis. So yeah, it's not really possible for us to have an isolated set of Sidekiq workers without that.
A
Cool, Graham. I think your approach — sorry, I disappeared for a tiny bit because I had to answer a phone call — but I think your approach of starting with something non-GitLab-application-related first, and having that work rock-solid with external-dns, is a good one. Then we can start thinking about these things — how we can implement it inside the application, or to support the application, as well. So I definitely encourage you to continue doing this.
B
I'll take that as a no. In that case, what I might do is very, very quickly just show the state of what I've done on Vault. It's actually not that difficult for me to just run through, more or less, the code that's in the repo — and the repo itself. It's not all pushed up there yet, but at least it should give people kind of a clearer view of how it's put together at the moment and how it looks, and, you know, we'll see if there's any discussion that comes out of that.
B
So if you actually look at this repo — I'll show you inside the terraform directory first. If we look at what this repo sets up, Terraform-wise, it's nothing too exciting. From the actual Google Cloud resources: we set up a service account, and we give that service account some permissions.
B
We set up a KMS keyring that Vault uses to encrypt all its workload, and then the final thing we set up is, essentially, the network — our Cloud NAT — and our GKE cluster. Vault needs a minimum of five nodes, and they recommend you have those five nodes with nothing else running on them, again from a security perspective. So basically all we do is set up a Kubernetes cluster that actually has six nodes in it — which is, you know, one too many, but it's probably okay.
B
Just that way we get a little bit of extra capacity. I am leveraging the latest version of the GKE Terraform modules, so basically I am using workload identity. So for the service account we need for Vault, which talks to GCP KMS, we don't need to generate keys or put them in 1Password — it all just works through GKE's workload identity. So that part is very simple.
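(A rough Terraform sketch of what that wiring involves — a GCP service account that the Vault pods can act as via GKE workload identity, plus KMS access for the seal. All names are hypothetical, and the keyring is assumed to be defined elsewhere:)

```hcl
resource "google_service_account" "vault" {
  account_id = "vault-server"
}

# Let the Kubernetes service account (namespace/name) impersonate the GCP one.
resource "google_service_account_iam_member" "vault_wi" {
  service_account_id = google_service_account.vault.name
  role               = "roles/iam.workloadIdentityUser"
  member             = "serviceAccount:my-project.svc.id.goog[vault/vault]"
}

# Allow the service account to use the KMS key for auto-unseal.
resource "google_kms_key_ring_iam_member" "vault_kms" {
  key_ring_id = google_kms_key_ring.vault.id
  role        = "roles/cloudkms.cryptoKeyEncrypterDecrypter"
  member      = "serviceAccount:${google_service_account.vault.email}"
}
```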
B
The helmfile part is even simpler: it's literally just one helmfile.
B
If you look at the helmfile, I'm doing some ugly hacks to get a bunch of values out of Terraform itself, but this here is basically just a normal installation of the Vault helm chart, and everything you see on the screen now is more or less all the configuration that's needed — which is actually very simple. This used to be a lot more complicated, because in earlier versions of Vault the storage engine was Consul, and there were other storage engines, like Google Cloud Storage — which is what we were using — but they weren't technically officially supported.
B
As of 1.4, the new version of Vault, they now have an inbuilt storage engine that uses the Raft protocol, so configuring and installing Vault on Kubernetes is now exceptionally simple. There are about five pods; they each use a PVC for their own data storage; they run under a service account that can get a GCP KMS key for encryption and decryption of that data — and that's more or less all there is to it.
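(The configuration being described boils down to something like this in the Vault chart's values — a sketch, with placeholder project and key names:)

```yaml
server:
  ha:
    enabled: true
    replicas: 5
    raft:
      enabled: true
      config: |
        storage "raft" {
          path = "/vault/data"          # backed by a PVC per pod
        }
        seal "gcpckms" {                # auto-unseal via Cloud KMS
          project    = "my-project"
          region     = "global"
          key_ring   = "vault"
          crypto_key = "vault-unseal"
        }
```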
B
The most interesting part of Vault, I think — and the part we've barely scratched the surface on — is the Terraform-Vault part. So: once Vault is up and running — we've created the GKE cluster, we've created a service account, we've installed the helm chart, it's all there — how do we actually configure Vault itself? The big theme is things like auth methods. This is all done in Terraform, even though it's not really infrastructure, simply because Terraform is a HashiCorp product and Vault is a HashiCorp product, so what they recommend — Terraform to set up and configure Vault — is the first-class citizen.
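(A small illustration of that pattern — Vault's own configuration managed from Terraform via the vault provider; the paths and names are made up:)

```hcl
provider "vault" {
  address = "https://vault.example.com:8200"
}

# A KV v2 secrets engine...
resource "vault_mount" "secret" {
  path = "secret"
  type = "kv-v2"
}

# ...and a policy granting read access to part of it.
resource "vault_policy" "ci_read" {
  name   = "ci-read"
  policy = <<-EOT
    path "secret/data/ci/*" {
      capabilities = ["read"]
    }
  EOT
}
```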
B
The other thing I'll point out is the integration pieces we have with Vault, which, with our current tooling, are quite nice. For those of you who aren't aware already: right now, whenever you run a CI job in GitLab, it generates a JSON web token, which Vault can directly validate. Vault needs a couple of configuration tweaks for that to work correctly, and it essentially grants a policy — which is a set of secrets — to that job.
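(A sketch of what those configuration tweaks look like, still expressed as Terraform; the instance URL, project path, and bound claims are illustrative. GitLab exposes a JWKS endpoint that Vault uses to verify the token signature:)

```hcl
# JWT auth method pointed at the GitLab instance's signing keys.
resource "vault_jwt_auth_backend" "gitlab" {
  path         = "jwt"
  jwks_url     = "https://gitlab.example.com/-/jwks"
  bound_issuer = "gitlab.example.com"
}

# CI jobs from one project's master branch get the ci-read policy.
resource "vault_jwt_auth_backend_role" "ci" {
  backend    = vault_jwt_auth_backend.gitlab.path
  role_name  = "my-project-ci"
  role_type  = "jwt"
  user_claim = "user_login"
  bound_claims = {
    project_path = "my-group/my-project"
    ref          = "master"
    ref_type     = "branch"
  }
  token_policies = ["ci-read"]
}
```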
B
So there's documentation for how GitLab CI works with Vault. The other thing I'll point out as well is helmfile — the tool we use for basically deploying our helm charts at the moment — has built-in support for Vault too. So all of those places where, at the moment, we do a really ugly shell hack of going to GCP KMS and GCS to get all of the secrets and decrypt them and everything — you can just replace that with a line that looks something like this:
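(That line is helmfile's vals-based secret reference; a hedged example of its shape, with a hypothetical path and key:)

```yaml
# In a helmfile values template: fetched from Vault at deploy time.
postgresql:
  password: ref+vault://secret/data/ci/postgres#/password
```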
B
So this will drastically clean up a lot of the really funny stuff we have relating to secrets in a lot of our helmfiles — it'll make them nice and clean and simple once it's available. That's probably the most useful benefit I see coming from getting Vault into this tooling straight away, or as soon as possible.
B
There is a bigger question, which I'm not sure we're ready to think about yet, about how we do secrets management with Vault overall, because it also has the Vault injector, which is basically another controller that can sit there, constantly poll Vault, and recreate secret objects when secrets change behind the scenes, rather than binding it to our helm deploy process.
B
So it would mean we take all of our secrets management out of all our helmfiles, and we would just have this agent doing all the secrets management behind the scenes. That changes a lot of the way we do things, though, so I'm not sure it's something we want to look at straight away — but it's an option for us down the road.
A
There is one thing — I was not aware of the integration that we added with Vault; I know that we added something, I just didn't have the time to look at it. What immediately jumps out at me is: if we do authentication with Vault directly, do we have a way, in the policies in Vault, to tie users to a policy that is configured in the policies file? As in: if we remove all of the secret variables that we currently have as environment variables in GitLab groups or projects —
A
theoretically, then, it doesn't really matter who has access to the project. At least internally, we don't really mind whether someone can see the code or not — we are open source in general. The major problem we have with our permission system is that when you're a maintainer, or even a developer, you can easily print out secrets, and then everything is out, right? So if we go this way, then you can't really do much.
A
When you authenticate — because we run the job in the context of the user — it means that you can automatically authenticate with Vault, and we can check you against the policies and ensure that everything is kosher there. And then it doesn't really matter in which group the project lives and who has access, because we have other ways of limiting changes to the code — we have approvals, we have reviewers and so on that we can limit with — and then whether you're a developer or a maintainer becomes less and less important.
B
I think that's right, and I'd say how it works is: the CI job gets a JSON web token and goes to Vault and says, "this is my JSON web token; I am in this GitLab instance, I am running under this project — this is a CI job under this project, under this GitLab instance." Vault validates that against GitLab's API — "yes, you are who you say you are" — and then we tell Vault: OK, CI jobs under this namespace, under this repo, can have these secrets. So you're right: we use GitLab as the control mechanism — users and groups in GitLab — but we know that once you have access to a CI job on master, because you're a maintainer and can merge to master, or whatever we figure out that workflow is, the user part is gone.
D
That would really solve the problem, Erin, because the problem we have now is not necessarily maintainers but developers: you can just, you know, tap the secrets in a CI job. I don't think this would change that, right? Because the job still has access to the secrets and the envs. And even in this case, I don't know whether masking works at all — it may not, which may be a problem if we use this — whereas the secrets are automatically masked for CI variables in output.
A
The thing is that — that's right — right now we blanket-give access to anyone who has the same role, whether they should be able to access it or not. With this change, you at least have the control of saying: if you're not in the policies file for this group, you can't even access the secret. Whereas now, if you get added as a developer to a project, you immediately get access to it, whether you should or not.
B
I think — so I get what Job and yourself are saying. What I would have to double-check — probably the clearest way for me to articulate what we can and can't do — is the contents of the JSON web token. The JSON web token that GitLab itself generates does specify information like: this is the GitLab instance, this is the project. If it also specifies the user that caused this job to happen, then definitely, on the Vault side, we can validate that web token and use that — because we know the web token is valid, we can extract that information out of it and grant policies accordingly: "this user is this person; they've got access to these policies." If we don't have access to that in the web token, then Vault won't be able to — we can have a policy for the job, but we won't be able to make any finer-grained distinction than that, if that makes sense. So I think it'll really come down to the implementation.
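(For reference, the payload of the token GitLab generates carries claims along these lines — the values here are illustrative, but current GitLab versions do include user claims alongside the project ones:)

```json
{
  "iss": "gitlab.example.com",
  "namespace_path": "my-group",
  "project_path": "my-group/my-project",
  "user_login": "some-user",
  "ref": "master",
  "ref_type": "branch",
  "ref_protected": "true"
}
```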
B
It's worth noting that they're also doing further improvements. So, Job, on what you mentioned about masking: they're getting masking support for Vault secrets. It's not there yet — you're right, it's actually not there yet — but it's coming; I've seen the issue. So I think, as a first pass, I definitely wouldn't want to be using this fully in production straight away, but I think it'll be good for us to start testing it and seeing some of these problems, because even though they've added this to GitLab, I don't think anyone really uses it very much — so I think they'd be very interested in people trying it out.
D
So I guess there are two ways we can use this: one is for CI variables, the other is for Chef and Kubernetes secrets — and I would group those two together, the Chef and the Kubernetes, because I think we're probably going to maintain that split for, you know, months, not weeks — hopefully not years. So between the two I would rather focus on the Chef-and-Kubernetes part, which would be to modify the shim.
B
Makes sense, and I think that's definitely the first focus once I get it up and running: to see if I can get rid of the horribleness of some of the helmfile stuff — just pulling those secrets straight out. And then, yeah, you're definitely right: the next stage is figuring out how we can make Chef work with it. Hopefully it's not too bad, but I haven't had a closer look yet.
B
At the moment it's using PVCs, which are basically Google disks — like Google persistent disks — so we'll need a backup job, and I'm happy to write the job or whatever. Basically it's just those gcloud commands snapshotting a bunch of disks, and Bob's your uncle; to restore, you just restore those snapshots and bounce the pods, I think.
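(A sketch of what such a backup job could look like — a CronJob running gcloud to snapshot the data disks; the schedule, disk name, and zone are made up:)

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: vault-disk-snapshots
spec:
  schedule: "0 */6 * * *"          # every six hours
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: snapshot
              image: google/cloud-sdk:slim
              command:
                - gcloud
                - compute
                - disks
                - snapshot
                - vault-data-0     # illustrative disk name
                - --zone=us-east1-b
```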
B
I asked the HashiCorp people exactly this question — like, "okay, do we need to tell it to, you know, sync or whatever?" — and they were just like, "it doesn't write that much; we've been fine with that." That's what they gave me; that was the official HashiCorp response. I was like, "okay..." But yeah, unfortunately we do need to change all that — though I think this is better overall, fortunately. So, just —
B
Backup-wise, we should even go beyond the disk snapshots: because the data is only going to be, like, a hundred megs or something, we should sling the data off somewhere else as well. If it's going to be this important, which it probably will be, we should really ship that data off somewhere else too. Yeah.
D
This is not final yet — he left a comment saying he has to go through these in detail, with some code inspection, to see whether these queues rely on disk I/O. The reason we're doing this is that, for the next migration, we want to be sure that none of the workloads depend on shared storage. We have NFS mounts on our Sidekiq fleet for cache and build traces — this is, like, the artifacts and builds mount point — and we really don't want to, you know — of course we're not — we really don't want to do those NFS mounts on Kubernetes. So we want to make sure everything that migrates doesn't depend on them, and the first pass on this is going to be: hey, let's just migrate all of the queues that don't depend on any disk I/O. That should be easy — well, we'll see how easy it is.
D
It's probably something we want anyway, because currently — if you didn't know — some workloads write to the shared directory, specifically project exports, and they use it as scratch space: whenever you do a project export, it writes a very large tar file, sometimes a very large file, to that directory before it uploads it to object storage. We don't want that mounted on every single pod — like, we have it for project export and we have it for others — but I think, moving forward, it's going to be that for some queue groups you shouldn't need to write to the shared mount point at all. So I think we'll go ahead and do that, and then once we have that chart update I'll go ahead and make that change on pre-prod, we'll start tagging the queues we're going to migrate over, and we'll do some testing that way. Any questions about this?
A
Cool — thanks, Job, for sharing, and thanks for the questions as well. Thanks, everyone, for the first of these demos. I hope the time was not too late for Graham and co — if it is, we can tweak it a bit — but I'm happy that we were able to share this with you as well. And if we need to, we can increase the frequency and do, say, two times a month for this and two times a month for the other, depending on what we have to show.