From YouTube: Kubernetes WG K8s Infra - 2021-07-21
A: Okay, let me show you — I will share only that one... and share. All right. Looking at the billing report: 175, which is up a bit from the previous spending.
A: No, not that I'm aware of — not that I could see on the outside. Because, on top of the billing report, we do data analysis on the general traffic and the data flow of the logs, and we have not seen any reduction in the logs, so no, nothing there. I think the main thing we're looking at to reduce cost is the mirroring, which Hippie is busy working on the design document for, and he's —
A: Instead, he will be reaching out to Dims, probably this week, to run it past you. So — no? Okay, thanks. All right, so if there are no comments about the billing report, I'm going to skip past this.
D: Sorry for being late, everyone. My first item is basically an open question about one of the aspects of this working group. For people who are new: one of the main missions of this group is to migrate all the project's jobs — presubmit and postsubmit — to the new community-owned infrastructure.
B: We have to start down this path and learn as we go along, but I don't know if we can set a deadline right now, because there are so many things we'll end up learning during the process. I think — yeah, that's my main concern. Right, so what we could do is say that release-blocking jobs for master should be moved by 1.23, right?
B: So that gives us three months, and it's only one set of jobs, and everybody will be looking at the same set of jobs, because it's release-blocking and it's only for master. So add some qualifiers like that, and then set a deadline. And then, when we get to that point, we'll realize that, oh, we've set up almost everything that's needed, and it's easier to flip the rest of the things over a period of time. I think that might be a good way to approach the problem.
D: I see. I think I talked about this with Aaron, and he even went beyond your proposal, because it was interesting to basically shard the release-blocking and the merge-blocking jobs into different node pools — or maybe a different, dedicated Prow instance.
D: So I was saying we may need to discuss this at one of the meetings, but only one he can attend. So this is — I want to start the conversation about this, because I feel like we have 2,000 jobs to migrate and there's a lot we'll discover over time. I mean, it's more than two thousand jobs.
B: So one question, Arno — when you were talking to Aaron — is: we already have a way for the existing Prow to actually launch clusters in other places, right? Like, we have the cluster on which only image-building jobs run, that kind of thing, right? So we have Boskos.
B: Yeah — so are we talking about, like: is there a way to use the existing Prow cluster to launch jobs somewhere else, or are we talking about just moving to different clouds? There are many possible options here, right? Like, if you start from the beginning: we have an existing Prow — can it start jobs in other clusters that are owned by, you know, CNCF? Right, that's one option.
D: Basically, there are two possibilities. The first is that we don't move outside of GCP — we still use GCP resources, because most of the credits are dedicated to the GCP account. The idea there is to bootstrap a new Prow instance and connect that instance to the existing build cluster.
D: That was never done before, so it would be a first if we do that, so we have to be careful — cautious — about this. Or we basically add to the current cluster — I mean, the GKE cluster — a different node pool dedicated only to the release-blocking and merge-blocking jobs.
B: That seems to be the less risky option, because then all you're worried about is where you're burning the money, right — rather than worrying about exactly where Prow runs. Because just running Prow is not what burns the money, right? It's the jobs it runs that burn the money.
D
A
lot
of
things-
I
don't,
I
think,
is
just
basically
improve
some
tooling
arrow
around
the
basically
the
existing
pro
job
defined.
So
we
basically
had
a
new
test
preventing
people
to
run
to
basically
run
the
pro
and
other
build
cluster.
So
it's
an
open
conversation.
We
need
to
look
at
the
option
and
basically
see
what's
what
had
been
done
before
coach
phrase
in
133.
B: Yeah, I'm okay with any of these options, because in the end, you know, whoever does the heavy lifting needs to take the decisions, right? So whatever you want to do, Arno — you and whoever else takes that on.
D: Okay, I will follow up on Slack with Aaron.
E: I might be keen for conversations around Boskos and AWS, and then other providers as they come on board to give us resources that are available for us.
B: Right — so, Hippie, the question there would be: what is a good way to stitch the existing Prow to running jobs on the other cloud providers, right? Like, we have one pattern today, which is the Boskos pattern. Is there anything else that's possible? That would be the question there.
F: Like a Kubernetes cluster backed by stuff that runs on the fly — Eddie, you're familiar with that one? It's almost like — it's obvious.
F: No, I'm not — don't worry.
B: Sorry — someone go ahead. Is it the ECS one — Amazon's container service — or something?
E: Yeah, there's a container service — there's something. What I was saying is: how can we connect that together, so we can have jobs that run on that provider, and nodes won't be running unless we're using them? I don't know if it's more cost-effective for that provider, but it's about finding those other mechanisms. I still think Boskos as a whole idea is a great mechanism to try with that, for the various providers where it works.
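For readers unfamiliar with the Boskos pattern mentioned here: Boskos is a resource janitor and lease server — jobs acquire a cloud project from a pool, use it, release it, and a janitor cleans it before it is leased again. A minimal in-memory sketch of that lifecycle (the class and method names are hypothetical; the real Boskos is an HTTP service in kubernetes-sigs/boskos):

```python
# In-memory sketch of a Boskos-style resource lease lifecycle.
# Hypothetical names; the real Boskos exposes /acquire, /release,
# /update HTTP endpoints and runs janitors out of band.

class ResourcePool:
    def __init__(self, resources):
        # per-resource state: "free", "busy", or "dirty"
        self.state = {name: "free" for name in resources}
        self.owner = {name: None for name in resources}

    def acquire(self, owner):
        """Lease the first free resource to `owner`; None if all are taken."""
        for name, state in self.state.items():
            if state == "free":
                self.state[name] = "busy"
                self.owner[name] = owner
                return name
        return None

    def release(self, name):
        """Return a leased resource; it stays dirty until a janitor cleans it."""
        self.state[name] = "dirty"
        self.owner[name] = None

    def clean(self, name):
        """Janitor step: wipe the resource and mark it free again."""
        self.state[name] = "free"


pool = ResourcePool(["k8s-boskos-gce-project-01", "k8s-boskos-gce-project-02"])
leased = pool.acquire(owner="pull-kubernetes-e2e-gce")
print(leased)               # the first free project in the pool
pool.release(leased)
print(pool.state[leased])   # dirty until the janitor runs
pool.clean(leased)
print(pool.state[leased])
```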
D: So the problem is not technical, because for Prow, Prow just needs a kubeconfig to connect to a Kubernetes cluster. If you bootstrap a Kubernetes cluster on Amazon and you provide the kubeconfig to Prow, it can schedule Prow jobs — that's not the challenge. The issue, basically, is the usage of GCP: most of the tooling in test-infra is very focused on GKE.
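Concretely, Prow models build clusters as kubeconfig contexts: each job's config can name a cluster alias, and Prow schedules the pod into whichever cluster that alias points at. A hedged sketch — the job name, cluster alias, and image tag below are made up for illustration:

```yaml
# Hypothetical presubmit entry; the `cluster` field selects which build
# cluster (i.e., which kubeconfig context Prow was given) runs the pod.
presubmits:
  kubernetes/kubernetes:
    - name: pull-kubernetes-example
      cluster: eks-prow-build-cluster   # made-up alias for a non-default build cluster
      decorate: true
      spec:
        containers:
          - image: gcr.io/k8s-staging-test-infra/kubekins-e2e:latest
            command: ["runner.sh"]
```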
D: So we need to look at basically all the possible issues with the integration between Prow and an EKS cluster. I don't want us to spend most of the time on this part, because I know that with GKE we have a lot of integration between Prow and the build cluster.
D: If we decide to basically split — to shard — the blocking jobs for a release between... no, let me rephrase this. Actually, all the Prow jobs for Kubernetes run on a single cluster.
D: Because right now we have an issue with the build cluster — we have a lot of pressure on those build clusters. So the idea is to ensure we can always guarantee performance during a release phase: we separate those Prow jobs and move them to a dedicated GKE cluster or a dedicated set of nodes.
E: Absolutely — I don't think I'm suggesting that we push hard to move things over. I'm thinking that we look for easy paths for the new things coming forward, exactly, so that we can distribute easily. I would never want to make more work for anybody than we already have.
B: That's a nice one, Eddie — the Prow on Amazon AWS.
B: How easy has it been? Have you heard of problems — trouble — from them, or has it been fine?
G: No, I haven't heard anything. I think Micah Hausler and Jay Pipes are the ones that run the infra — I think Micah is also on the security committee for Kubernetes — but yeah, I haven't heard anything about trouble with it.
D: This is a request to you folks: we plan to migrate everything related to the auditing you are doing for the image promotion to a dedicated project. And if you can — I think, if you can open the link —
D: Yeah — also anything that's related to the work you are doing right now, because I didn't take the time to basically explore the sandbox project and try to understand what you are doing currently. So basically, besides the Python script, there's the BigQuery dataset and the other GCS buckets that need to be migrated.
A: Okay — what we'll do is verify first, because there is some trial information in there, from some queries we ran, that we should properly remove before you move it. So let us clean that up and let you know. Okay, okay — so I'll take that as an AI for me: to clean up the sandbox and then let you know when we're done.
A: Sorry, I missed that, Arno — just say it again?
A: You know what — if you think it's best, we can just roll it over to the next meeting. It's okay by me.
G: Good point. So — I know, Hippie, you spun up a conversation with some folks about the AWS accounts that are owned by the CNCF; I just wanted to check in on that. This is the original issue, where it turned out everyone feels like no one has access to this. It still sounds like they need quotas increased, and that's an easy thing to do if we open support tickets from inside the account — so I don't know where that stands with getting any of us access to those accounts.
E: I got access — and I haven't tried it yet — maybe two days ago, through, I think, some admin... I'll go research more; I haven't had time to look at it. But I do have full CNCF access to the entirety of the AWS donation, and I will put an update on this out today. I think we're going to meet after this — maybe we can walk through that. I know that, longer term —
E: I want to make sure that anything that happens through this account — or Kubernetes-specific stuff — goes through some type of PR control within the k8s.io repo, and I'm putting a PR in for that later today.
G: Okay — it sounds like there are different accounts, but yeah, we can talk about that later, sure. I think Cluster API has its own account.
B: Thank you. I think that one was originally from Justin — Justin Santa Barbara, justinsb — if I remember right.
B: Right — I mean the AWS accounts that the Cluster Lifecycle folks are using. Oh, okay — that really came from, you know — it was handled by Justin.
D: So I have just one comment about this, and it's kind of an answer to Eddie. My understanding is that we can't handle any issue in those accounts, because those accounts are basically dedicated to all the CNCF projects, so we as the K8s Infra working group can't access them. That's my understanding.
D: So we keep the issue open to track the conversation and basically record any update about it. But we're stuck, because our people don't have access to those accounts — because what we were told, I think two weeks ago or last week, is that those accounts are for all the CNCF projects. So it's going to be tricky to give access to them just to the Kubernetes folks.
E: I think I'm going to try to help bridge that, as a contractor at the CNCF, so that we can steward the appropriate sub-parts of the CNCF board-level account for AWS and make sure that we have, you know, the same type of admin-level access for those sub-areas within the K8s Infra working group team.
D: I'm not part of the conversation between Amazon and the CNCF, so I basically don't know how to help you, or tell you what you need to do right now.
B: I know the background there: there is a program that's going to start at the CNCF level for cloud credits from multiple cloud providers, and that will be — you know, it depends on the cloud provider what they want to do; it could be for a specific project or it could be for all projects. So there are some details that are being worked out.
B: He was working on it, so at some point we will get to know how much Kubernetes itself will have allocated — for budget, or whatever constraints on CPU hours or whatever, right — across the different cloud providers. And then we'll have people who can have access to those, and he is helping with that. That's the background context.
D: So I have a quick question: is it possible to have a basically dedicated new account just for Kubernetes — the Kubernetes org? Which means we migrate everything to a new account. Because I know Amazon well enough to know that you get an account as part of an organization and you assign it a set of credits. So is it possible to do that — basically migrate anything related to Kubernetes — and give admin access to the SIG?
B: Yeah, we don't know those details yet, because it's still being worked out. The first thing that needs to happen is on the business side — what does the CNCF provide to the cloud providers — that's the first step. And then, once we have some commitment in terms of money or whatever, then we'll have people who will help figure out, for each of the cloud providers, how we make it work, right? And then it will come down to the individual projects.
E: Previously, I believe, there was a non-relationship connection — a URL that's public, for people asking for donations for public projects at Amazon. We don't know anyone on the other end of that connection; they just said, here's your 200k, right?
E: And so then that's been stewarded by Ihor, and Chris has been the person creating the accounts manually. So that's the transparency we have now, and now we need someone to steward those things — so I've just been granted access to that.
E: I think Ihor basically changed the email on his account to match a group, and that's the group that I have access to now. So that's the only thing I've taken on so far; everything else is more about relationships. I want to emphasize: this is all about conversations and relationships, and setting good expectations — what can we expect from each other? And we don't know what to expect from each other yet.
E: Okay, thank you. I think I can, within our current scope, without changing anything, go ahead and create the highest level of sub-account for accounting purposes that we have within the org level. Is there anything, Arno, that you're concerned we might miss from an org-level account by creating a full-admin sub-level account?
E: Who is that question directed at? — Oh, Arno and Aaron, and anybody; this is for everybody. Arno's not even the one asking the —
I: The objective is: I want the equivalent of what we have for GCP, where somehow we have a single account that we can use to see where all the money is and what resources are being used. And I really don't mind however many sub-accounts and whatnot hang off of that, if that's a better way to do things within Amazon.
I: So I lack some domain expertise, and I've lacked bandwidth on this, but I'm happy to be involved in the relational aspect and in kind of figuring out how we structure this stuff. But I'm totally cool with Eddie's priority of, like: we want to make sure people aren't blocked.
E: I think mine was a technical one — to see if there's anything we know of that a sub-account wouldn't cover, like the accounting thing. Can we allow something within that account to do full accounting for everything within that org? Or are there things — and it's all Amazon-specific, I have no idea — that are only org-level, that will need another level of...?
D: So for — basically, for the Kubernetes needs, we don't really have to worry about what's set up at the org level; we basically just need an Amazon account with full administrator access.
I: I think — I feel like, maybe, Eddie, if it's not you, it's somebody on your side of the house who can work with maybe myself, but certainly Hippie, Ihor, and Chris, to make sure that we get an account at — yeah — as high a level as is appropriate. And if we discover that that account is showing us more CNCF assets than we should be seeing, we'll work through that with y'all to scope us down to whatever we need.
G: Yeah — I'm definitely nowhere near an expert when it comes to billing; I can get someone, and I can get the questions we have answered. All right — and then, as for credits: it sounds like the CNCF just went through the general credit program, and not, like, Bob Wise or any of the EKS team.
G: I should probably loop in Bob.
I: Yeah, I would say looping in Bob would be great — that would be a great way to go. Because I feel like — I don't know how much of the history you all already explained, but I felt like there were some sort of piecemeal things that just happened from different people at Amazon as requests came up. I don't know that we've ever structured everything through a formal process; I think that's what we're trying to bootstrap here.
J: Basically, we have a CSI proxy generated binary, and right now it is only in the staging bucket. Based on the discussion, I understand that there's no automatic way to promote from staging to release, so someone needs to manually do it. And this is kind of — oh, I copied the wrong link, but I have another one for CSI proxy, similar to this kOps one.
I: That may even look an awful lot like general binary artifact promotion, but I am not aware of exactly where that is running or how it works.
I: So I would be interested in going through this journey with you, of trying to figure out if this is in fact a real thing that can be used by more people than kOps — because, at a glance, it looks like what you're doing is correct. But I personally don't know right now; I would have to go figure out where to look to see if this is actually happening successfully.
I: But if you CC me on that PR — my username is spiffxp — I will keep track of this with you. I know you raised something about being unable to write directly to the bucket a long time ago, and I apologize that it dropped off my radar.
I: I've been working to kind of overhaul permissions — oh, if you put a slash in front of the "cc", it'll assign me as a reviewer. I've been trying to redo permissions such that... If we don't have artifact promotion, I want to figure out what we can do in the meantime, but I like your approach of trying to use what appears to be actively used by kOps and seeing if that works for us.
J: Sure. So, from what I read — I've got it somewhere — basically this is mainly just for record-keeping; it's not doing anything. Someone still needs to manually run some command to actually promote to that release bucket.
I: Okay, that'll work. And if you're in a state where this is really blocking something that you need for the 1.22 release, let's chat about it, because I'm sure I, or another human, would be willing to do the manual copy from one place to the other for now, if it's going to unblock you. But I don't want that to become our process — it's got to be a special-case sort of thing.
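The manual staging-to-release copy offered here is, in spirit, what the artifact promoter automates: copy only artifacts explicitly listed in a checked-in manifest, and never overwrite what is already in the release location. A rough local-filesystem sketch of that rule — the layout and names are made up; the real process promotes GCS/GCR artifacts driven by manifests in k8s.io:

```python
# Sketch of manifest-driven promotion from a staging dir to a release dir.
# Hypothetical layout for illustration; the real promoter targets GCS/GCR.
import pathlib
import shutil


def promote(staging: pathlib.Path, release: pathlib.Path,
            manifest: list[str]) -> list[str]:
    """Copy each manifest entry from staging to release.

    Skips entries already present in release (promotion never overwrites),
    and raises if a listed artifact is missing from staging.
    """
    promoted = []
    for rel in manifest:
        src = staging / rel
        dst = release / rel
        if not src.exists():
            raise FileNotFoundError(f"{rel} listed in manifest but not in staging")
        if dst.exists():
            continue  # already promoted; leave the release copy untouched
        dst.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src, dst)
        promoted.append(rel)
    return promoted
```

Running `promote` a second time with the same manifest is a no-op, which is the property that makes promotion safe to retry.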
A: Thank you — thanks, Aaron. Anybody else want to add anything?
I: I'll go through those quickly, okay? So, right — in the interest of time, I'm not going to walk through the infra YAML too much. Can you allow me to share my screen? I think I've joined as somebody who doesn't have privileges to do that right now.
I: It's not you — maybe Arno can do it; I forget if co-hosts can make other co-hosts. No? No — all right. Thank you, whoever did that. Let me share a specific window; let me just double-check.
I: I came at this a little bit from — I have an interest in the idea of using YAML as our bridge out of bash to stuff that's more manageable. That could be other scripting or programming languages, such as Python or Go, or it could also be Terraform or Terraform modules. I've looked around briefly, and it seems like, with Terraform, there is an idiom of unmarshalling YAML and using that as structured data to then go create a bunch of Terraform resources.
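That Terraform idiom is typically `yamldecode` feeding a `for_each`, so the YAML stays the single source of truth and Terraform fans it out into resources. A hedged sketch — the file name, keys, and resource arguments below are illustrative, not the repo's actual layout:

```hcl
# Illustrative only: read a YAML inventory and create one project per key.
locals {
  infra = yamldecode(file("${path.module}/infra.yaml"))
}

resource "google_project" "e2e" {
  for_each   = local.infra.e2e     # e.g. a map of Boskos project names
  name       = each.key
  project_id = each.key
  org_id     = local.infra.org_id  # assumed to be present in the YAML
}
```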
I: So you can see here, for example, I have a category called e2e, and then I have a bunch of keys for each of the Boskos projects — there are a lot of them. I will scroll down further, as your screens struggle to keep up with the rendering. Another example would be the projects that are related to Prow, and how there are Terraform files for each project.
I: I'm aware that maybe this is overly refactoring around what we organically have, and perhaps somebody would be more interested in a kind of already pre-established way of managing stuff like this, that we should try to get our stuff refactored to fit into. Where I'm coming from is: we've been at this for a couple of years, I've noticed some patterns, and I think it's time to reduce a lot of the boilerplate and copy-paste — to be more concise.
I: Further down is this bash function here. We are moving toward a world where we're using external secrets — there's an app called kubernetes-external-secrets: you create a CRD that describes which cloud-specific secret-manager implementation you are storing your secret in (it could also be something like Vault). And then this bash here is kind of pre-provisioning Google Secret Manager secrets that can then be referenced by all those CRDs. This kind of looks structured-ish and tree-ish, and it could be an example of something that could be pulled into the YAML, so that the bash doesn't have so much data embedded in it.
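For context, the kubernetes-external-secrets flow uses one CRD per secret that names the backing secret-manager entry; the controller then materializes an ordinary Kubernetes Secret. A hedged sketch — the secret and project names are made up, and the field names follow the kubernetes-external-secrets project, which may differ by version:

```yaml
# Hypothetical example: sync a Google Secret Manager entry into a
# Kubernetes Secret via the kubernetes-external-secrets controller.
apiVersion: kubernetes-client.io/v1
kind: ExternalSecret
metadata:
  name: prow-github-token
spec:
  backendType: gcpSecretsManager
  projectId: k8s-infra-example        # made-up GCP project
  data:
    - key: prow-github-token          # Secret Manager secret name
      name: token                     # key in the resulting K8s Secret
      version: latest
```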
I: So that's my feel on it. If folks are interested, we can talk more about this in Slack, or there's an issue — actually, I don't think I have an issue describing that we should think about this. I will make one, and I will put it in the meeting notes afterwards.
I: The other thing I wanted to talk about: thanks to Arno, and a couple of things I've added, we have a lot more tests that run against all of your PRs now — all of these verify scripts in the hack directory, any time you open one up. It's kind of —
I: So this is the same deal, it just doesn't have the fancy JUnit XML wrapping right now. We have things like: all of our bash is now passed through shellcheck, and it will fail if your bash does not meet shellcheck standards; and all of your YAML is passed through yamllint, so we can start to apply some YAML formatting standards — yamllint's not really great about indenting and stuff.
I: Some of the places we could go: we're verifying all of our Terraform, and right now all we're doing is a vanilla `terraform validate`, but we could — if people are interested in or familiar with this — do, I don't know, Terragrunt or something more Terraform-specific, to make sure the Terraform is valid.
I: Another path worth exploring further would be using conftest. I don't think this script is going to be super interesting, other than to point out that it points at the policies directory. Conftest is something that uses Open Policy Agent to describe policies and then run them against structured data.
I: The site for conftest is conftest.dev, and they have an example of how they're writing some stuff against Kubernetes in a language called Rego. As of today, conftest policies can be written against all of these different kinds of configuration formats.
I: So we can do validation — if we get more of the structure of our infrastructure into data, like into YAML or JSON files, or into Terraform files (which are HCL — HCL2, apparently) — we can start writing policies that run against all of that, to describe more of our overarching intents. The way we use it right now, we don't actually fail on any policy, but we do have a deprecation policy, which we're using to warn on any Kubernetes resources that —
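Conftest policies of this kind are written in Rego; a `warn` rule reports without failing the check, which matches the warn-only deprecation behavior described here. A hedged sketch of what such a policy might look like (the repo's actual policy will differ):

```rego
# Illustrative conftest policy: warn on Kubernetes manifests still using
# a pre-v1 apiVersion that should be migrated.
package main

deprecated_api_versions := {"extensions/v1beta1", "apps/v1beta1", "apps/v1beta2"}

warn[msg] {
  deprecated_api_versions[input.apiVersion]
  msg := sprintf("%s %s uses deprecated apiVersion %s; migrate to v1",
    [input.kind, input.metadata.name, input.apiVersion])
}
```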
I: Excuse me — that need to be migrated to v1. And then I guess the last bullet point I have there is that we finally moved to Terraform 1.0 — Terraform finally went 1.0, so hooray for that; it's amazing. And then, let's see, two other things I'll walk through real quick — oh look, we can probably even see the verify —
I: Arno has also moved the Prow work that he's been doing into this format, and I've taken a couple of the others, and I think when I land moving the k8s.io thing over, that'll be the last of it. And then I can just say: all of the apps that we currently run for the project are inside the apps folder, which is super cool.
I: Let's go see which part of the verify job failed for me, so I can demonstrate what this is like right now. Because we don't have JUnit, there's not a lot that I can see unless I expand the log, so I will do that, and it says — oh, it says conftest failed.
I: So apparently something about conftest failed, and I also have a yamllint warning about something, but the yamllint check passed because it was just a warning, not an error. So I will have to go figure out what I did that made conftest mad. Okay — the companion piece to this is —
I: This, which is basically going to add postsubmit jobs for all of the apps inside of the apps folder, so that any time you open up a PR — this is probably going to be really great stuff to look at, yeah. No, I think the PR description is better, but I don't have anything about the apps.
I: When a PR touches apps/gcsweb, the deploy-gcsweb job will fire once that PR merges; when a PR touches the k8s.io directory, that will automatically deploy as well. So we'll at least have gotten to — I don't know — straight-up GitOps-based deployment for all of our apps, at least, which feels like a huge step forward compared to where we have been for the last couple of years.
I: So I'm pretty excited about that. Then the other thing — which we are pushing to have done by the time 1.22 goes out the door — is: v1.22 of Kubernetes is going to be the last release that has CI artifacts published to this GCS bucket and this GCR repo.
I: The kubeadm and Cluster API folks did a fair amount of work to automatically pull from the correct GCR repo instead of what they had hard-coded, so we've been storing CI artifacts in community-hosted places since v1.17.
I: It seems like v1.22 is a reasonable time to say this is the last release we're doing it for. I've got one of those wonderful checkbox-based lists — which I can now click a few more of — where there were a number of repos across the community that did have hard-coded references. As I was working on some of these, I tried to point everything at dl.k8s.io instead of using the hard-coded bucket name.
I: Anywhere something was using wget or curl or the like to retrieve artifacts over HTTP — the reason I did that is that dl.k8s.io is something completely under our control, and we can transparently flip everything that hits dl.k8s.io over to the new bucket.
I: We could also decide we're only going to flip certain paths to the new bucket — so we could flip just the older releases over, or flip just v1.22 over to community-hosted release artifacts. I'm talking about the ways we can roll this out, actually, because I feel like this might be the thing that blows our budget if I were to just slam it on entirely. And there will still be some hard-coded references that we'll have to fix as we migrate the release artifacts over to the community.
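Controlling dl.k8s.io makes this gradual rollout possible because the redirector can choose a backing bucket per request path, so individual prefixes can be flipped independently. A toy sketch of that routing decision — the bucket names and the flipped prefixes are made up for illustration, not the actual dl.k8s.io configuration:

```python
# Toy sketch of per-path bucket selection for a dl.k8s.io-style redirector.
# Bucket names and flipped prefixes are illustrative only.
OLD_BUCKET = "kubernetes-release"   # legacy Google-owned bucket (assumed)
NEW_BUCKET = "k8s-release"          # community-owned bucket (assumed)

# Prefixes already flipped to community-hosted artifacts.
FLIPPED_PREFIXES = ("/release/v1.22.",)


def backing_url(path: str) -> str:
    """Map a dl.k8s.io request path to a concrete bucket URL."""
    bucket = NEW_BUCKET if path.startswith(FLIPPED_PREFIXES) else OLD_BUCKET
    return f"https://storage.googleapis.com/{bucket}{path}"


print(backing_url("/release/v1.22.0/bin/linux/amd64/kubectl"))  # new bucket
print(backing_url("/release/v1.21.3/bin/linux/amd64/kubectl"))  # old bucket
```

Flipping another prefix is then just adding an entry to `FLIPPED_PREFIXES`, which is what lets cost impact be observed one path at a time.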
I: But that's kind of the goal for v1.23: all of the Kubernetes release artifacts come from the community — they won't come out of google.com. Bundled up in that is the artifact promotion process, helping out Ying's specific issue, and making sure that everything we use to run tests and CI and such also comes from the community.