From YouTube: kubernetes kops office hours 20190607
A
Hello, everyone. This is Friday, June 7, and this is kops office hours. I am your moderator and facilitator, Justin Santa Barbara; I work at Google. A reminder that this meeting is being recorded and will be put on the Internet, so please be mindful of our code of conduct. I am pasting a link to our agenda in chat in case anyone didn't see it the first time. Please do put your name on the attendees list, and feel free to add anything you have, though we do have a lot on the agenda today.
B
The first thing I had is that there are a lot of outstanding PRs that just need approvals. There's a bunch that have already been LGTM'd and just need final approval, so I'm calling that out. I also want to say thank you to Justin for updating the YouTube channel and getting all the videos posted; that was awesome.
A
Thank you. I will do my bit on the PRs; other people should feel free to look at them as well, but I will certainly do that. That feeds into the next item: on the YouTube channel, I had actually uploaded the videos, but I was confused by YouTube and hadn't set them to public. Kids can manage it, but apparently I can't. I will hopefully be better at it, and I will practice with this video shortly after we are done.
B
So we found a pretty critical bug the other day, specifically with launch templates, where ENIs were not being marked delete-on-termination. Due to that bug we had something like three thousand unallocated ENIs in our account. The fix has been pulled in and cherry-picked into 1.12, 1.13, and 1.14. That said, we should probably cut a 1.12 release and get it out so we don't fill up people's accounts; it has nasty side effects.
A
You can't launch new instances, yeah. I'm just trying to quantify how bad it is. We certainly should do the new release; I'm just trying to understand how much messaging we should do. We do have a mechanism to force people onto a new version, where you basically have to set an environment variable to keep using the old one. I'm not sure this qualifies for it, but we can think about that, I guess.
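For anyone wanting to check whether their account was affected, a rough way to count unattached ENIs with the AWS CLI might look like this (treat it as a sketch; it just counts every ENI in "available" status, whatever created it):

```
# List ENIs that are not attached to anything ("available" status).
# Leaked ENIs from the launch-template bug would show up here.
aws ec2 describe-network-interfaces \
  --filters Name=status,Values=available \
  --query 'NetworkInterfaces[].NetworkInterfaceId' \
  --output text | wc -w
```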
B
The other note I had on 1.12 is that with Canal 3.6.1 we saw some peculiar behavior. I didn't do much of the investigation on this one myself, so I don't have all the details, but we ended up rolling our 1.12 clusters to 3.7.2, which is the version that has been cherry-picked into 1.13 and 1.14. Given that, we may want to cherry-pick it back into 1.12 as well if we do a cut.
B
That's the version we're using, but I'd love to see the issues. There are about four open issues on 3.6.1, and there is a 3.6.2 release, so we might just want to go to 3.6.2 and not go all the way to 3.7. There are some issues with settings not being set correctly, and some issues with behavior when the API server goes away. Like I said, about four issues on 3.6.1 were patched recently. Okay.
A
Yeah, thank you for bringing that up; that's certainly worth thinking about. We should definitely get 3.7.2 into 1.13 and 1.14; it's probably coming anyway. I don't know if anyone is running kops 1.13 yet, since it's largely built from head, but we should think about getting that into a beta or something. Then we should look at the underlying bugs in Canal to figure out whether a 3.6.2 or a 3.7 upgrade is right; I don't know how different they are. Yeah.
B
The other thing we noticed, and the other thing that made us go to 3.7.2, was that the version of flannel we're using with 3.6.1 is not the version that Calico recommends with 3.6.1. We were actually ahead on flannel, so we just moved to 3.7.2 to get that all synced back up.
B
Let's just jump ahead: okay, yeah, so the next item is etcd-manager. I think there are two issues for etcd-manager. The first one we should address is adding a new backup command, so you can manually force a backup using etcd-manager-ctl.
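For context, a sketch of what etcd-manager-ctl already offers today (the bucket path here is illustrative, not from the meeting): it can list and restore backups against a backup store, and the proposal above is to add a command that forces a new backup on demand.

```
# Inspect existing backups in the store (path is a placeholder):
etcd-manager-ctl -backup-store=s3://my-state-store/my-cluster/backups/etcd/main list-backups

# Schedule a restore of one of them:
etcd-manager-ctl -backup-store=s3://my-state-store/my-cluster/backups/etcd/main restore-backup <backup-name>
```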
B
There's no way to force that right now, and we ran into a use case where we knew we were going to do something potentially dangerous on a cluster and wanted to make sure we had a current snapshot. We literally ended up sitting there waiting and waiting for the next iteration to do the snapshot. Luckily it was near the top of the hour and didn't take too long, but still.
A
That's a good idea. FYI, it does do backups when it is doing something risky; whenever it changes anything, it does a backup first. But yes, it would certainly be nice not to have to rely on that. Even if you end up with an extra backup, I think it's nice not to have to rely on it. Yep.
A
Well, and should we also do a GitHub release of etcd-manager-ctl, so that you can run it on your own machine? Yeah, I think that makes sense too. We also found an issue, a repeat of the one we had, Ryan, where restore just doesn't seem to work as you would expect it to.
A
Cool, yeah, that would be good. Alright, that's lots to do; that's good. Should we jump to Peter's item? He also asked about etcd-manager but is not here. He was asking: should we delete the now-unused etcd Route 53 records after migrating to etcd-manager? When we move to etcd-manager we no longer publish those DNS records; there's no real value in having them, and it seems wrong given that we try to lock everything down anyway.
A
Peter's observation is that if you delete them, they are later recreated with our placeholder value, which I think is kops doing that; I assume it's kops. And yes, I think we should stop doing that. I think it is a good thing to stop, and it would also make people happier, because then we don't have that weird placeholder IP. We did the placeholder IP to avoid negative DNS caching, but yeah, we're still going to
A
Do
it
I
presume
for
people
to
exposing
API
server
without
using
an
EOB,
but
it
would
be
one
step
closer
to
getting
rid
of
what
is
honestly
a
hack
so
yeah
we
should.
We
should
do
that.
No
one
wants
to
it
shouldn't
be
too
bad,
I,
don't
think
but
yeah.
Please,
then
it
wants
to
feel
free.
Otherwise
I
will
try
to
get
to
it
all
right
right
and
back
to
you
I
guess:
where
do
we
jump
to
back
to
any
conflict
with
AD
flags?
Okay,.
B
Yeah, one of our teammates opened this; it's just a PR to add an additional flag to the API server. I'm not familiar enough with the admission code, and I noticed we're also modifying that flag in there, so I just wanted somebody more familiar with the code to take a look at it. So, yes.
B
It just tells the API server to load that file, so you can use it for webhooks and the like. We're using OPA to do webhook validations on calls, and to enable authentication into OPA you have to pass it a bearer token; this flag is what gives the bearer token to the API server.
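For context, the Kubernetes-documented way to point webhook admission plugins at credentials is the API server's admission-control configuration file; a rough sketch of the shape (the file paths are placeholders, and the exact apiVersion depends on your Kubernetes version):

```yaml
# Passed to kube-apiserver via --admission-control-config-file
apiVersion: apiserver.k8s.io/v1alpha1
kind: AdmissionConfiguration
plugins:
- name: ValidatingAdmissionWebhook
  configuration:
    apiVersion: apiserver.config.k8s.io/v1alpha1
    kind: WebhookAdmission
    # kubeconfig holding the bearer token the API server presents
    # when calling the webhook backend (e.g. OPA)
    kubeConfigFile: /etc/kubernetes/admission/kubeconfig.yaml
```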
A
We can have another look at that, I guess. The other one I'll throw in here is the node-local DNS agent, which is really cool. It's a daemon set that runs on every node, obviously, and it acts as a caching proxy. It solves a lot of DNS issues that are observed more frequently on AWS.
A
Before shutting down the nodes, that is. I guess we should treat the anti-affinity and the pod disruption budget separately. If we had a pod disruption budget, it would, in theory, drain the pods slowly, deleting the two pods one at a time, and both cluster-autoscaler and the kops rolling-update code would do that rather than taking them down at the same time. I don't know of any downsides to a pod disruption budget, other than, like...
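A pod disruption budget of the kind being discussed is a small manifest; here is a sketch for the cluster DNS pods, where the label selector and budget value are assumptions for illustration, not what kops ships:

```yaml
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: kube-dns
  namespace: kube-system
spec:
  # Allow only one DNS pod to be disrupted at a time, so a drain or a
  # cluster-autoscaler scale-down can't take both replicas down together.
  maxUnavailable: 1
  selector:
    matchLabels:
      k8s-app: kube-dns
```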
A
Yes, alright. The anti-affinity ones matter more on huge clusters, though. I don't know; Rodrigo's smiling. I don't know how big a cluster has to be before they actually kick in, yeah.
A
Pretty positive vote for that, then. Okay, alright; Rodrigo, thank you. So yeah, we should definitely do this. I'm less convinced about the rest: I think the daemon set on the masters is a bigger change, and I'm inclined to do that when we figure out how to better support more customization of add-ons. Ryan, maybe you and I need to, well, I think we have a pending chat to figure that out; you had some work going on that, so we should.
A
Alright, we have a couple more items on the agenda. The next one, from someone who's not in attendance, asks about inconsistency between the config base and the kops state store, and what we want to do about it. I will have a look at this; I'm hoping it's just an issue of documentation. The intention is that the kops state store is the location where your cluster, instance group, and, I guess, keyset objects are stored. But then, if you want to, you can put your additional configuration somewhere else. So, for example, in theory, though I'm pretty sure this doesn't work:
A
You could have an S3 bucket as your kops state store but still launch Google clusters from it by putting the config onto Google Cloud Storage. The real use case there was to have your cluster and your instance group objects live in a Kubernetes cluster, sort of the Cluster API and CRDs idea. So this is sort of part of that, and I will have a look and see where we are exactly. Hopefully it's just documentation.
A
I wouldn't be surprised if there's also a bug or two around it, but that's the intention: the state store helps you find the root objects, as it were, and then we render some configuration needed to bring up the cluster, and that can be in a different location. So if you wanted to put your configuration in a different bucket entirely, you could. There's a nice simpler example, too.
A
If you wanted each cluster to have its own S3 bucket, you could do that using config base, in theory. Now, whether it works, I don't know; it sounds like no is the answer, but okay, I'll have a look at that. The next item was about the ENI bug, which we talked about before, and then Peter, who I'm guessing is still not in attendance, asks: is it possible to test multiple cluster configurations in our e2e tests, in particular motivated by the launch templates?
A
The answer is yes; it's relatively straightforward to set up different configurations, and I think we do have configurations for, for example, testing Calico or other network plugins that are not the default. These jobs are maintained in the kubernetes/test-infra repo, which I will drop a link to; I'll try to find the actual URL and make it more specific.
A
So it's fairly easy to do that. The other thing we have, which would have caught this launch template thing, is that we can run unit tests, not e2e tests, against a sort of fake Amazon account, a mocked Amazon interface, and we can verify that nothing is leaked.
A
So one of the things we could do is write a unit test that basically simulates, badly, describe-instances and run-instances and all of those things, and just checks that a kops create followed by a kops update does some stuff, that a kops update with an unchanged configuration doesn't change anything, and that a kops delete then undoes everything kops created, so that effectively nothing is leaked.
A
We tend to use both strategies. It's nice to have a unit test to make sure that nothing leaks and that updates are idempotent, and then the e2e tests make sure that everything actually works in practice, because the unit tests only simulate a very narrow sliver of the functionality: we only mock the EC2 API, so in effect we do CRUD, but we don't do any deep validation of anything.
E
I'll see if I can; sorry, I'm at home today. So this is my first office hours. Hi, I'm representing Zendesk. We're currently in the middle of a migration from our homespun Kubernetes clusters to kops, and I think we're currently looking at kops 1.12, so I just want to get involved in the community. I'm not sure if this is the right level for some of these questions, but I'll just put them out there, seeing as I'm assuming we've got the time.
A
Absolutely, yes.
E
The first one was from my colleague. In our previous cluster we relied a lot on auto-scaling group lifecycle hooks to make sure that if an instance got terminated or killed, we would still be able to drain it safely, and it seems that at least that part isn't supported by kops. Doing a rolling update still makes sure that the draining happens correctly, but if, say, AWS tells you your instance is going to terminate in 30 minutes, that still isn't handled safely. I guess the question is: would that be feasible to add to kops?
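For reference, the ASG-side half of what's being asked for is a termination lifecycle hook; creating one by hand looks roughly like this (the hook and group names are placeholders, and something still has to do the actual draining while the instance is held in the wait state):

```
# Hold terminating instances in Terminating:Wait for up to 5 minutes,
# giving a drainer time to evict pods before the instance goes away.
aws autoscaling put-lifecycle-hook \
  --lifecycle-hook-name drain-node \
  --auto-scaling-group-name nodes.example.k8s.local \
  --lifecycle-transition autoscaling:EC2_INSTANCE_TERMINATING \
  --heartbeat-timeout 300 \
  --default-result CONTINUE
```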
A
Right. The current model for add-ons is that everything is baked into kops. In the cluster object there are little blocks where we can, in theory, add fields and then allow customization of the add-ons that are baked into kops. Ryan, who's on the call, is fixing an issue where we weren't necessarily updating those when you changed the options, so that's going to get fixed. But there's a general issue, which is that this is sort of not very scalable:
A
To imagine that every add-on update is going to require a kops update, and every customization is going to require a change in kops and a new release of kops, isn't a great model. There is some work going on in a separate SIG, sig-cluster-lifecycle, looking at operators for doing this. In kops itself we're trying to solve the first problem: when a new version of, what was it today, it was Canal, when a new version of Canal comes out,
A
it would be great to be able to release, say, 3.6.2 without having to release a version of kops. We have this channel file, which lets us update images and the recommended Kubernetes version, but it doesn't yet cover add-ons. So basically, moving the specification of add-ons out of the compiled kops code into an external location that can be independently updated would be good. I think we actually have a PR that starts that journey.
E
That's actually exactly what we want to do, not for Canal but, for example, for CoreDNS. I'll admit it's a complete hack at the moment, but we actually take the bootstrap channel file that's in S3 and override it with our own content. If kops upstream could support doing that the proper way, yeah, that would be ideal for us.
A
For something as common or reasonable as changing the memory on CoreDNS, that's something where we could just map a field, and probably we should actually do that for the use cases people really have. We do want to get to a world where, however unusual your requirement is, it will work as well. That's sort of the idea. One of the things we're looking at, but haven't really developed yet, is kustomize patches, basically declarative patches, which would let you change anything you wanted to.
A
If we put those in operators, you could change it on the cluster itself, but we can probably also do something in the kops CLI. That world is what the sub-project is looking at: trying to understand how to do that, how to specify the canonical set of add-ons, and then how to enable anyone to change that configuration in a way that, in theory, would work across all the tools and could be used in your,
A
like, if you had a proprietary solution. Can you still hear me? I just bumped my headset. If you have a proprietary solution, you could, in theory, use that open-source, community approach in your proprietary solution as well. But that's sort of where we are; I don't know if that makes sense.
E
One of the big things that we're actually looking to do, I think similar to the discussion before, is that we're trying to run CoreDNS as a daemon set everywhere, so not just on the master nodes but on all nodes. We also have a couple of annotations to get Prometheus metrics pulled in automatically, and some other stuff that I'd imagine would be specific to our particular use case, not necessarily generally applicable. I mean, they're both okay.
A
So the Prometheus metrics one again feels like something that has two attributes: one is that everyone might want to use it, and the other is that it's quite hard, not easy to change. It's a little bit fiddly to set those annotations correctly; that's my understanding, though I don't know if there's a flag I should know about.
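The annotations in question are a community convention rather than anything standardized: whether they do anything depends entirely on your Prometheus scrape configuration. They typically look like this on a pod template:

```yaml
# Conventional scrape hints on a pod template; 9153 is CoreDNS's default
# metrics port. These only take effect if your Prometheus scrape_config
# is written to honor these annotation names.
metadata:
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "9153"
```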
A
Okay. So there's certainly a class of changes where we would want to build them as fields, and ideally we are able to do that in a way that, well, I guess the idea of operators is that we could then define those sorts of standard fields in a way that can be shared across all the installation tooling. But yes, we're still a ways away from that, and likewise for the daemon set approach.
E
Like I said, we run CoreDNS itself as a daemon set, both to do caching and to forward requests to Consul; we use Consul for service discovery as well, with the different suffixes. So Consul domains map to a Consul daemon set on the same host, and stuff like that. Very cool, yeah.
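The forwarding setup described can be sketched as a Corefile; the zone name and Consul DNS port here are assumptions (8600 is Consul's default DNS port), not details from the meeting:

```
# Hypothetical Corefile for a per-node CoreDNS daemon set: forward
# .consul lookups to the local Consul agent's DNS interface, and
# cache and proxy everything else to the node's upstream resolvers.
consul:53 {
    forward . 127.0.0.1:8600
}
.:53 {
    cache 30
    forward . /etc/resolv.conf
}
```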
E
But it looks like there is also a lot of tying-together, such that the keystore essentially has to live in the same S3 bucket, or whatever VFS base you're using, as the rest of the config. Ideally, we would actually like to use something external like Vault. I'm just wondering: is it feasible to try and go down that path, or is that the wrong design approach to be taking?
A
In theory, it shouldn't be too hard to have a separate keystore in a different location. The challenge is that it turns out you need things like the CA key anyway, so it's not quite as secure as you might want, and you basically need to pass a lot of things back and forth. We generate all the keys client-side in kops, and the only thing we actually need on the client side,
A
actually, the only reason we need that, is sort of for coordination; in other words, how do the keys get onto all the machines? But you could do that other ways, and then, if you had Vault, the other thing we need the keys for is just to build a kubeconfig for you.
A
That uses a key, I believe, so we do need the key for that, but I think it's definitely feasible. It's not as trivial as, say, not putting the 203.0.113 placeholder IP addresses in there, but a lot of the mechanics should be there thanks to the work that went into using CRDs, which is ongoing. It's really the separation I'd have to think about.
E
Primarily, this is HashiCorp Vault, which we've used internally for a long time; in fact, our current clusters are using it. Migrating to kops, it seems like that's probably not going to be an option, at least for the immediate future, so I'm just investigating: if we wanted to go down that path, how much effort would it be, and would it even be possible? So, yeah.
A
If you want to have the machines themselves source keys from Vault, which I presume is more what you want, it's not the end of the world, in that we do have an agent that runs on every node which actually goes and copies those keys from S3, or wherever you're storing them. You would just have to teach that agent how to pull from Vault, which is not impossible, but it's certainly not trivial.
A
Actually, you know what you could do? If you wanted to pursue this, I would start by teaching the kops command line to interact with Vault but dump the keys into an S3 bucket; that's step one. Then step two is: don't dump them into the S3 bucket, but teach the nodeup agent to fetch them directly from Vault.
A
Yeah, definitely open an issue and we can figure it out. Hearing about the differing security requirements everyone has is always good; it makes us all better. We are trying to get kops to be entirely secure by default. It is an ongoing journey, but we are making progress.
A
All right, well, it looks like I have a lot of things to work on. Thank you for everyone's contributions, and please feel free to send PRs, review PRs, and open issues. Thanks, Ryan, as well, for all the details, and thank you to everyone. I will try to do a release, and try to get some of those PR approvals moving.