From YouTube: Kubernetes WG K8s Infra - 2021-02-17
Description
A
Hi everybody, today is Wednesday, February 17th. You are at the Kubernetes K8s Infra working group bi-weekly meeting. We all adhere to the Kubernetes code of conduct here by being our very best selves and, you know, not being jerks to each other.
A
If you have a problem with the conduct of this meeting, please email conduct@kubernetes.io, or you're free to reach out to me privately at spiffxp on all the things. And did I mention these meetings are publicly recorded and will be posted to YouTube later, assuming our Zoom-to-YouTube automation isn't broken.
A
Okay, thanks for the spiel. So, is there anybody here who is new, who would like to introduce themselves?
B
Hi, I'm kind of new, but I've been to this meeting before. I'm Ernest, I'm from Microsoft.
A
Good to see you again. Okay, so first up is billing review. I will go ahead and pull up the billing report and share my screen for that. Well, share my browser window for that.
A
I'm just really looking forward to the day when I do this live and there's, like, a huge line up and to the right, and I go: oh no, I have no idea what's going on. But thus far I would say our spend continues to be unsurprising. It continues to be very driven by weekday traffic.
A
I can't think of anything in particular I want to point out here. Does anybody have any questions?
A
Okay, action item review. So we chatted last meeting about trying to set up some Google Cloud alerts — I think this was maybe to help Ricardo's cert-manager monitoring stuff — so I opened up an issue to describe how to do that. We also have another issue describing how to do GitOps-driven dashboards: you can do your configuration in YAML and then use gcloud to upload it. So, anybody who wants to take that on — I think I've tagged those as help wanted.
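The GitOps dashboard flow described here — configuration in YAML, uploaded with gcloud — can be sketched roughly like this. The dashboard name and layout are illustrative placeholders, not the actual k8s-infra config, and the upload command is shown commented out because it needs GCP credentials:

```shell
# Write a minimal dashboard definition; in the GitOps flow this file would
# live in source control and be applied after a PR merges.
cat > dashboard.yaml <<'EOF'
displayName: "k8s-infra example dashboard"
gridLayout:
  widgets:
    - title: "Example widget"
      text:
        content: "Managed from YAML via gcloud"
EOF

# Upload step (requires credentials for the monitoring project):
#   gcloud monitoring dashboards create --config-from-file=dashboard.yaml

echo "wrote dashboard.yaml"
```

Updating the dashboard then means editing the YAML and re-running the gcloud command, rather than clicking through the console.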
A
Let's see, I have an open action item to use k8s-artifacts-prod for release binaries. I put something on SIG Release's meeting agenda to ask them there, but I couldn't show up due to a standing conflict I have with that meeting. So I never heard back from anybody about that, and I just pinged the release managers' channel. I know Dan said he's cool with this idea. I will try to get some more consensus from other release leads, but basically, yeah.
A
I want to move a bucket called kubernetes-release over to the same project that's hosting k8s.gcr.io, so that all of the project's artifact hosting costs can be billed to a single project, because it's just pretty simple that way. Any questions there? Okay, moving right along: audit job, question mark. I guess I'll ping hippie for this one.
C
Looking for the mute button. I've spent a couple days on this. At first, we went through and tried to get it up and running again, when we met some permissions issues due to new features or new services that have been added — and I don't think the job has been run by somebody only in the audit group in a while — and I think Aaron fixed some of that. There's a PR; it also adds a script to ensure that that's available.
C
So I think it's in a bash script right now, but maybe we can look at something like Crossplane or other automation for later, as far as the actual dump or the automation. This is just for the job part: I'm having some trouble with the pr-creator binary doing what I think it should do, and I wasn't able to find enough debug information or documentation around that particular piece of software.
C
So I put it on pause for a bit and went over to look at the GitHub binary, the gh CLI, and I didn't see a great workflow for only creating a PR if we need it — which might be just some more checks on: if the branch is already created, just update the branch. But I'm still working on it. I'll put some time in on that today and tomorrow.
A
Yeah, it's been kind of a crazy week for me and will continue to be crazy, but I would really like to find time to push this over the line with you, hippie, because we can certainly rope in others if pr-creator is still causing us issues. I'm not intimately familiar with that either.
C
It's a really small binary. You'd think I'd be able to figure it out. I mean, like, the lines of code — it's two files, it's like maybe ten functions, yeah.
A
The main comment I had — which I think I sort of posted on all the PRs you've opened — so I've seen that you sort of manually opened up a couple to try and chunk up the changes that your runs of the audit script have picked up. And I've had an outstanding PR open since, like, January that I assigned to Tim to get his blessing, I guess. And I poked, I think, Carlos from the release engineering team to just take a look at the release engineering stuff, and he said it looked good, but Tim's never gotten to it. I think I'm tired of waiting for Tim. But basically, a lot of those changes are already kind of annotated commit by commit. I'd rather see that land, and then we can rebase your stuff on top, because I want to use the PRs that either you open or that are created by the audit script to start trialing the practice of: hey, let's review the PR the bot opened and have discussions about, like, why is this here?
A
Can
we
link
to
an
issue
that
cost
it
or
whatever
it's
really
hard
to
do
that
if
it
opens
up
seven
months
worth
of
changes
in
a
single
pr,
but
I
will
I
will
work
to
unblock
you
on
this.
C
Thanks for that. I had been running that on a regular basis when I was attending these meetings a while back, and then I had other priorities come into play. But I'm pretty much trying to reallocate myself and at least one member of my team towards this work, so you should see a lot more fruit and time out of that, folks.
A
Okay, that works. I guess I'm going to try and share a window, so you all just don't see me talk all the time. So I think I'm sharing a window that shows the command I had to run to unblock hippie's audit scripts. I had to make sure we could use Secret Manager stuff; that led me to ask a bunch of other sort of questions related to auditing.
A
And it's great in that it is better than nothing, and it does seem structured in a way that I, as a human, can at least understand some of the patterns now. But I feel like we should consider — in the context of thinking about how we might want to use something other than bash to manage our infrastructure, such as Terraform or Crossplane, or maybe Python, or maybe whatever — how we could have the audit script dump things in a way such that it's much easier to automatically answer the question:
A
What
changes
are
live
that
aren't
in
source
or,
conversely,
what
changes
are
in
source
that
aren't
live
right
now,
because
both
of
these
can
happen
when
humans,
like
myself,
have
the
power
to
make
changes
on
the
fly
or
prs
get
approved
and
merged
without
somebody
actually
then
manually
running
the
scripts,
and
I
definitely
want
to
get
us
to
the
place
where,
like
merging
the
pr
causes
the
script,
to
actually
run,
I
feel
like.
I
need
to
work
a
little
bit
more
understanding.
A
What
would
make
us
comfortable
allowing
that
to
happen,
and
then
the
other
thing
was
like?
I
don't.
I
don't
know.
Maybe
this
is
a
question
for
anybody
with
gcp
expertise
here
like
should
we
be
using
cloud
asset
inventory
instead
of
our
own
script,
I
feel
like
that's
a
google
cloud
product.
I
have
seen
that
has
a
bunch
of
how-to
guides
on
like
how
to
analyze
all
of
your
aim
policies
how
to
dump
all
the
information
into
bigquery.
A
The
product
overview
looks
very
much
like
if
you
are
some
kind
of
big
enterprise,
and
you
want
to
get
information
about
all
of
your
infrastructure
and
then
make
sure
it
is
compliant
with
whatever
security
concerns
or
auditing
concerns
you
have.
This
is
definitely
the
tool
for
that
she's,
like
kind
of
what
we
want.
This
is
an
open
source
project,
but
nobody
here
has
experience
with
this
tool.
I'll
just
move
on.
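As a rough illustration of the Cloud Asset Inventory how-tos mentioned above — the project name and bucket are made-up placeholders, and the gcloud calls are shown commented out since they need real credentials and scope:

```shell
# Hypothetical scope; the real k8s-infra project would go here.
SCOPE="projects/example-k8s-infra-project"

# One documented how-to: search every IAM policy in the scope.
#   gcloud asset search-all-iam-policies --scope="$SCOPE"

# Another: snapshot all resources to a GCS object for later analysis,
# e.g. loading into BigQuery.
#   gcloud asset export --project="${SCOPE#projects/}" \
#     --content-type=resource \
#     --output-path="gs://example-bucket/assets.json"

echo "would audit $SCOPE"
```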
A
Okay, okay — stop sharing, meeting notes. Justin, I think you have the next action item, to talk about the AWS accounts PR.
F
I do. It's a no-update action item; it is on me. I think I am unblocked. We discussed last time that essentially we were going to get it in in some way after some minor adjustments, and then try to sort of pair or something on making it better.
A
Okay, that sounds good. Let's see, I put a few other things here. I renamed the branch from master to main for the k8s.io repo last time. I don't think I've actually gone back and confirmed all of the jobs are working.
A
So
I
have
an
open
issue
for
that.
If
somebody
wanted
to
help
me
out
and
check
these
that
be
super
cool,
dropping
a
comment
in
there,
otherwise,
like
I'll,
eventually
get
to
it.
Whenever
I
get
to
it,
there's
some
jobs
tickling
out.
A
I
have
made
no
progress
on
what
project
we
should
be
billing.
Our
data
sources
against
part
of
this
involves
me
poking
justin,
which
I
haven't
done.
Part
of
this
involves
one
of
my
team
members
getting
more
familiar
with
how
to
set
up
data
sources
in
data
studio
for
billing
purposes.
A
So
I
will
bump
an
ai
on
that
too
next
time,
because
I
anticipate
one
of
us
will
have
done
more
things
and
then
the
last
ai
I
have
is,
I
noticed
the
meeting
uploads
for
the
past
two
weeks
had
not
actually
happened.
It
appears
our
automation,
upload
from
zoom
to
youtube
is
busted.
A
I've
been
told
I
may
have
to
recreate
this
meeting,
so
I'd
have
to
send
out
a
new
calendar,
invite
and
all
that
stuff
to
maybe
get
the
automation
hooked
back
up,
feeling
that
we
might
just
go
back
to
a
world
of
me
having
to
post
the
youtube
the
videos
to
like
a
personal
youtube
account
and
then
put
them
on
the
playlist
in
the
kubernetes
channel,
but
that's
way
less
cool.
I
like
seeing
a
new
kubernetes
kids
infra
thing
show
up
in
my
youtube
feed.
G
I have a question about the jobs migration for the main branch. I know that we can actually create a PR and trigger some jobs manually, right? So is this something we can do right now — like, make a dumb PR just to trigger, like, a /test all, and see if everything is going okay, if everything is working?
A
I think that part's been done already. Like, generally, I flip the branch, and then either I go merge a PR myself or I just wait until somebody merges a PR, and that'll right away sort of tell me if the pre-submits are working and if the post-submits are working. It's more the periodics which, at the time I checked, not all of them had run yet — and then I kind of forgot about it.
A
So
you
just
need
to
click
through
on
a
couple
test,
grid
links
and
like
yeah,
they're
green,
it's
all
good
sort
of
more
generally,
I
could
find
an
ish
there's
an
issue
somewhere.
Maybe
I
can
find
it
about.
A
We
could
preemptively
change
most
of
our
pre-submits
and
post
submits
today
to
just
trigger
both
on
master
and
main,
so
that
there's
no
additional
changes
needed
for
flipping
your
branch,
the
the
periodics,
are
a
bit
tougher,
either,
depending
on
whether
you
use
bootstrap
to
clone
your
your
repo
or
pod
utils.
A
Both
of
those
need
changes
to
avoid
requiring
you
explicitly
put
the
branch
you
want
to
check
out,
because,
most
often,
if
you're
running
a
periodic
job,
you
probably
want
whatever
the
default
branch
is
for
the
repo,
not
a
specific
branch,
unless
you're
doing
like
release
branches,
I
can
post
links
to
those
issues
and
a
couple
more
that
would
help
out
with
renaming
stuff
in
general,.
A
And
I
think
we
talked
about
this
at
last
week's
sig
testing
meeting.
If
I
have
the
recording
for
that
posted.
A
Okay, I'm just totally gonna punt on the first item on the agenda — I don't even know why I put it there. There's a question about: should we set up a budget and alerts? We'll talk about that later. Okay, Ricardo, please regale us with — yeah.
E
Yeah, I think that — yeah, I can actually go to the end of the meeting, so you can go ahead, and we'll stick with the current agenda.
A
Would you like to go now?
E
I can stay until the end, so Ricardo can go ahead and go, since he was on there first. Sorry — yes, okay.
G
Do you want me to share my screen, or —

A
Sure, I can happily give you cohost.
G
Okay, so yeah, here we go. So the problem that we got was: certs can expire and we don't monitor them. It was from, I think, two or three meetings ago. So the idea was: how can we use cert-manager metrics to alert if some certificate is going to expire and we don't renew it?
G
So this is the first problem. The metrics that are exported by cert-manager — I don't actually think this is a problem, but — they are expressed as Unix time. So I'm not getting how much time we have until the certificates expire; I'm getting the Unix epoch time at which the certificates are going to expire.
G
So it's pretty hard to turn that into alerts in Google Cloud, or in Prometheus, and so on. We need to transform the timestamp into something alertable, right — like, we have X days or X seconds before this certificate is going to expire. So we need to alert on that and see if this is an important certificate, or if this is something that we can disregard.
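To make the problem concrete: turning the raw Unix-epoch expiry into "time remaining" is just a subtraction. A tiny sketch with made-up timestamps — in practice `now` would come from `date +%s` and the expiry value from the cert-manager metric:

```shell
expiry=1618900000   # example epoch value, as the metric would report it
now=1618000000      # in real use: now=$(date +%s)

# Seconds remaining, then whole days remaining (integer division).
seconds_left=$(( expiry - now ))
days_left=$(( seconds_left / 86400 ))

echo "${days_left} days until expiry"   # 900000s / 86400 = 10 days
```

An alert condition like "fewer than 5 days remaining" then becomes a simple threshold on `days_left` instead of a raw timestamp comparison.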
G
Or probably revoke this certificate. So the first attempt was Prometheus, plus the GCP sidecar, plus Stackdriver, because this is what's recommended by Google Cloud for exporting your metrics from Kubernetes to Stackdriver. The idea they use there is that you have a Prometheus running inside your namespace that can scrape cert-manager.
G
It
writes
the
magics
to
write
a
headlock
and
a
side
card
from
gcp.
It
can
read
the
the
right
ahead
logs
and
and
write
to
stackdriver.
So
this
was
the
the
steps
that
were
followed
in
starting
the
prometeos
server
and
we
don't
have
to
allocate
volumes
or
anything
else,
because,
as
this
is,
this
is
mostly
shared.
G
We can tell Prometheus to compute the expiration timestamp minus the time now, and say how many seconds or how many days we have, and then write that to Stackdriver. The con was that this needs admin privileges, per the GCP docs. So this first attempt, I failed — and if someone with some more expertise at GCP can help me, I think this is the best approach, because the sidecar can read the metrics from the WAL.
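The "expiration minus now" computation described here maps naturally onto a Prometheus recording rule. A hedged sketch: the source metric name is the one cert-manager exports, but the recorded series name and group name are our own invention, not an established k8s-infra config:

```shell
# Generate a minimal Prometheus recording-rule file; in the sidecar setup,
# Prometheus evaluates this rule and the GCP sidecar ships the result onward.
cat > cert-expiry-rules.yaml <<'EOF'
groups:
  - name: cert-expiry
    rules:
      - record: certmanager_certificate_seconds_until_expiration
        expr: certmanager_certificate_expiration_timestamp_seconds - time()
EOF

echo "rules written"
```

The recorded series counts down toward zero, so a threshold alert ("below 5 days", i.e. 432000 seconds) becomes straightforward in whatever backend consumes it.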
G
I can see it reading the metrics, but it cannot write into Stackdriver. So I don't know if this is, like, an old container, or some problem with Stackdriver, or something like that. So the second attempt was the GKE metrics agent, which is based on the OpenTelemetry collector. I saw that you have an OpenTelemetry collector in all the Google Cloud clusters that can scrape Prometheus metrics from the cluster and then write into Stackdriver.
G
So that's nice, because Google actually uses OpenTelemetry — but it's a slightly modified version, only supporting the Stackdriver exporter and the Prometheus scraper. And this is really easy to implement, but you need to use opentelemetry-collector-contrib, with other third-party things, or the GKE metrics agent — and none of them support Prometheus recording rules, because that's how the Prometheus server writes into the server, and not the way the collector can convert the metrics and expose them to Stackdriver.
G
So the third attempt was a scraper — like, developing something. I did this with the help of Lucas from Container Solutions. It can scrape the Kubernetes objects and write into Stackdriver, or Amazon CloudWatch, or Prometheus, or anything else, using the same idea as the OpenTelemetry collector but with the modifications that we would want. So it's based on OpenCensus / OpenTelemetry. This was something, like, hard to do, but it only needs permission to get certificate objects — and not the secrets — and permission to send the metrics to Stackdriver, and it can be combined with scenario two if we desire, and can also export to multiple exporters.
G
But
the
cons
is
that
this
needs
to
be
implemented.
That,
like
I,
am
not
that
wannabe.
So
I
I
made
a
lot
of
pretty
things
in
my
in
my
code
and
then
lucas
corrected
a
lot
of
them,
but
it's
yeah,
I'm
not
a
devil,
so
it
works.
It
works
fine.
So
I
could
make
like
those
metrics.
G
This
is
like
the
missing
seconds
until
the
certificate
expire
and
the
namespace
and
the
owner
that
I've
put
into
into
a
label
and
the
certificate
name,
and
I
could
export
this
directly
to
his
leg
without
any
any
any
any
sort
of
of
problem
other
than
this
is
not.
This
is
a
click
ops.
This
is
not
a
github,
so
you
have
to
you
have
to
sign
in
your
bot
or
some
or
your
gcp
cloud
into
your
slack.
G
You
cannot
do
this
automatically,
so
this
was
like
the
certificate,
and
I
have
like
this
alerting.
If
this
is
below
five
days,
then
alert
me,
then
I
got
those
alerts
in
into
his
life
and
this
work.
So
I
why
what
I
am
showing
you
here
is
that
we
need.
We
need
an
approach
decision
now
if
we
can
use
the
open,
telemetry
approach,
which
is
better
to
maintain,
because
it's
a
community
thing,
but
we
need
to
verify
how
to
turn
the
metrics
into
alerts
on
stackdriver,
because
the
unix
epoxy
is
not
alertable.
G
If
we
can
use
a
development
scraper
and
need
to
check
how
to
improve
this,
and
I
will
need
some
help
from
you.
Folks,
because
this
is
like
this-
this
needs
a
roadmap
we
can
check
also,
if
this
the
secrets,
the
the
the
public
search
and
the
dates
of
this
those
public
searches
instead
of
the
search
manager,
certificate,
object
and
about
the
gcp
sidecar.
G
And there is an AI for this: once we decide the approach, establish the timeline, and implement, we need to automate the generation of Stackdriver alerts — and this is the same issue that you've posted there. Also, I think I can try to take a look into how to implement that in GitOps, and define what should be alerted and within what grace period.
G
So
if
we
should
alert
also
if
no
metrics
were
received
in
in
an
amount
of
time
or
what's
the
certificate
or
what's
the
the
the
metric
that
we
want
to
monitor
from
the
certificate
like
this
is
going
to
expire
in
less
than
x
seconds
or
less
than
x
days.
So
this
is
all
that
was
done,
and
so
now
we
need
to
to
see
the
next
steps.
I
guessed
I
could
follow
the
five
minutes
right.
A
Ricardo, that was awesome. Thank you — yeah, seriously, thank you very much for clearly laying out the options and talking about the pros and cons and all that stuff. I don't know, I've been talking a lot. Does anybody else have any thoughts about next steps, or comments?
H
I just want to mention that the GKE metrics agent is already deployed in the GKE cluster, so we can basically use some specific annotation on the services to push the metrics to Stackdriver, and define the alerts with Terraform.
F
To me, the reason why I'm super excited about this working group, K8s Infra, is because we're actually trying to use Kubernetes — you know, to dogfood Kubernetes and find out what the gaps are. And so when we find things that are not easy, but maybe should be easy — like, how do I alert when an object is not in my expected state — I think we should be solving those problems in the Kubernetes project, and not adding in dependencies on my employer's proprietary services.
F
We can certainly have things like Stackdriver or PagerDuty or whatever as, like, sinks — as options, as it were — but I think, you know, writing our own code that puts it into our own control to alert on these things, and encouraging people to plug their own systems into it — if someone wants an email alert, that's great. That, to me, is why I'm excited about this group.
A
Awesome. I come from the perspective of: too many things, I have very little bandwidth. And so I would be interested in the least amount of code that I have to maintain. But I also kind of share Justin's perspective — like, I'm not here to shill for my employer. So, not necessarily — like, I think there's an alpha feature you could enroll in to automatically have your Prometheus metrics scraped, without even having to worry about managing the agent, but I'd rather we use something —
A
That's
I
mean
if,
if
that
works,
that
would
be
great.
I
think
that's
like
workload.
Identity
works
very
well
for
us
for
protecting
us
securely,
but
since
it's
alpha
I
don't
know.
A
So I kind of vote for that in terms of the least amount of code that we have to maintain and staff for. But I kind of agree with Justin that alerting when an object is not at the desired state is a pretty good general problem, and that would be a great gap to write some code to solve.
A
So
developing
your
own
scraper
sounds
cool
too.
I
I
just
come
from
the
perspective
of.
Are
you
sure
you
will
have
time
to
maintain
that?
But
I
think
that
would
be
great
and
awesome.
I
don't
mean
to
sound
like
the
the
killjoy
to
to
justin's
like
go
forth
and
be
awesome,
because
I.
G
I was just going to say that — yeah, I've just tried to put everything here to say: okay, we need to make a decision. But I like the idea of coding something also. I just can't say that my code is pretty enough to be something official — but I will try to at least put in some unit tests, Justin.
A
My big thing is, like — I don't want to impose a timeline on you. I'm more interested in you sort of guessing at the timeline that you can support, or that, if we could find another contributor, they could help you support. And I can try to find some time in parallel to noodle with you on the Stackdriver sidecar, to see if we can have, like, a no-code solution up a little bit sooner.
G
Sounds good, sounds great, yeah. I guess we can try to look at this Stackdriver sidecar. It's a good approach as a short path, because I think we still have to solve this problem, right? And we need a short path for that. And I can keep developing this and see how it can be improved.
A
Sure, that can be broken out separately. I kind of want to cut us off there in the interest of time. Sorry — I got really excited, so, sorry.
A
It's all good. Ernest, let's chat real briefly about donating the Azure subscription.
B
Yeah, so I was just wondering what the next step is. So, for everyone who doesn't know: right now on Azure, we run some pre-submit and post-submit jobs on Azure infrastructure, and the way we do it is that we mount a secret file to the prow clusters, and the prow job will read the secrets from that file.
B
And
the
bad
thing
is
that
when
we
try
to
rotate
secrets
within
the
file,
we
have
to
send
the
secret
file
in
plain
text,
to
the
current
test
info
on
call,
which
is
not
ideal.
So
I
was
just
so.
I
opened
this
issue
on
case.io
to
ask
about
how
we
can
donate
our
azure
subscription
and
you
know,
kind
of
want
to
make
azure,
maybe
like
first
class
citizens
in
terms
of
testing
kubernetes,
so
yeah
yeah
just
wondering
what
the
next
step
would
be.
A
I guess he's not here — yeah, he has a standing conflict with this meeting — and I could probably use some input from Dims as a steering committee member. But I feel like, if this is about donating the Azure subscription to the CNCF, getting Chris involved is probably the first step, and Ihor as well, to understand what the CNCF's requirements are in the context of this working group.
A
We
want
to
make
sure
that
we
are
using
funds
that
the
cncf
has
appropriated
or
approved
for
use
by
the
kubernetes
project.
So
we
know
for
a
fact
that
all
the
funds
that
are
used
to
pay
for
the
gcp
account
that
we're
using
are
definitely
for
kubernetes
and
kubernetes.
Only
it's
kind
of
less
clear
to
me
what
that
looks
like
for
azure
and
aws
like
I
know.
A
We
live
in
a
world
today
where
both
azure
and
aws
have
donated
like
subscriptions
or
accounts,
as
you
said,
they're
like
hooked
up
via
secrets
or
whatever,
but
I
don't
know
that
we
exactly.
A
I'm
not
sure.
It's
not
clear
to
me
how
many
people
right
here
right
now
can
like
log
into
those
accounts
and
troubleshoot
whatever
is
going
on,
so
it's
kind
of
about
like
opening
up
access
to
the
community.
A
It's
also
not
clear
to
me
how
much
money
is
being
spent
on
those
accounts,
like
the
public
report
that
we
go
look
at
is
the
spending
costs
for
gcp,
ideally,
we'd
have
either
something
similar
for
each
of
the
different
cloud
providers
that
we're
using
or
we'd
have
them
all
feed
into
one
central
report.
A
So
that's
sort
of
the
high
level
thing
I
think
conversa
like
getting
a
conversation,
kicked
out
poke
dimms
again
and
I
can
try
and
chat
with
dims
offline
and
we
can
figure
out
how
to
get
it.
Moving
on
a
more
official
stance
to
talk
to
your
specific
problem
about
having
to
rotate
secrets,
I
would
really
like
to
get
us
in
a
mode
where
we
use
google
secret
manager.
A
We
can
use
iam
policies
to
have
to
give
you
the
ability
to
update
the
contents
of
the
secret
in
secret
manager.
So
you're
not
sending
things
over
plain
text.
You're,
storing
them
in
a
very
secure
location
and
secret
manager
has
the
concept
of
versions
of
secrets
as
well.
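The rotation flow being proposed would look roughly like this. The secret name is hypothetical, and the gcloud calls are shown commented out because they need project credentials:

```shell
SECRET="k8s-infra-azure-credentials"   # hypothetical secret name

# Simulate producing the rotated credential locally.
printf 'fake-rotated-credential' > new-creds.json

# Rotation then becomes "add a new version" instead of pasting plain text:
#   gcloud secrets versions add "$SECRET" --data-file=new-creds.json
# and on-call (or a sync job) reads the latest version:
#   gcloud secrets versions access latest --secret="$SECRET"

echo "staged rotation for $SECRET"
```

Because Secret Manager keeps versions, rolling back a bad rotation is just reading an earlier version instead of re-sending anything.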
A
So
you
could
either
ping
test
in
for
on
call
to
like
get
it
from
there
or
maybe
we
could
even
ideally
set
up
something
that
syncs
the
secrets
from
secret
manager
into
kubernetes
secrets,
and
so
then
rotation
of
secrets
just
looks
like
you
put
a
new
value
in
the
secret
manager
thing
and
it
automatically
rotates
in
the
cluster
excellence.
So
is.
A
Can you open up an issue about secret rotation with Secret Manager? Can you open it up in k8s.io? We'll see if we can go from there. Because basically, right now, your jobs — all of the jobs that use Azure and stuff — are running in the google.com-owned build cluster, and so Google is still paying money to run those jobs; Azure is paying money to stand up the cluster that's exercised as part of those jobs. So, in order to make this process work, we'd be talking about getting your jobs to run not in the google.com cluster but in the kubernetes.io community-owned cluster, and then getting all the Azure secrets over from that cluster to the community cluster. Where — I don't have it documented; I need an issue to work against for using Secret Manager for all this stuff.
F
Yeah, one thing that might make you either more or less comfortable: I think, once the secret goes into Kubernetes, the on-calls have access to it anyway, effectively. And I think the more likely problem is going to be that a job accidentally logs that secret — that actually happens not irregularly.
F
If Azure supports it, one of the things I've been trying, or working towards, on AWS is supporting OIDC, or whatever that's called. So, in other words, there would be no explicit secret; rather, you would use, effectively, workload identity in the Kubernetes cluster to authenticate to Azure.
B
I don't think it's an option right now. Yeah, right now we just use a simple username and password and Azure subscription ID to authenticate the job to create a cluster on Azure. So, yeah, I'd like to get rid of this process of rotating secrets in plain text, but for now I'll open an issue on k8s.io and we can start from there.
A
Yeah, and the security boundary Justin described is very real. There are other things we can do to improve that situation, for sure, but we can't necessarily, at static-analysis time, introspect a bash script and know that it's going to try and list all secrets in the namespace or whatever.
A
Okay, do you feel like we've answered your questions, Ernest?
B
Yeah, I think there are definitely some follow-ups. Yeah, I think — yeah, I think —
A
You know where to start — okay, that sounds good. Yeah, I think solving the pain of secret rotation is probably the smaller thing that we can act on more quickly, and we can keep the higher-level conversation going of: hey, how do we make sure that Azure resources are available for K8s Infra?
C
Yeah, I just wanted to see if I could connect Ernest and Priyanka. I know that this one's focused on the K8s Infra, and then I know that there's stuff for the entirety of the CNCF — another donation program that they're working on. I just wanted to offer that.
A
That — possibly. I don't know if Ihor and Chris are better people to reach out to first, or if Priyanka will redirect to the appropriate people. I have no idea.
A
Okay, we have six minutes left. What I really want to say is: I know you've got this 401 image-pushing-job problem. I haven't had any time to actually troubleshoot it. I will get to it as soon as I am able to. Is there anything we can give you that would help you troubleshoot this yourself?
A
Okay, Daniel, you had your hand up — and also I want to give the rest of the time to you.
E
Cool. So I know we only have a couple minutes here. I will say, while we were going through a number of the other things, I was kind of sitting on my hands, because I kept being like: oh, we could fix this. But I am 100% here to shill for my open source project that I work on — but it is open source, it's part of CNCF, it's built on Kubernetes. So I know that some of y'all have seen the conversation around using Crossplane.
E
There is going to be a demo and kind of a presentation later today where we can go a little bit more in depth, but I wanted to give folks who aren't able to make that, just real quickly, a brief look at what kind of — maybe — some of the problems we could address with this are. So, Aaron, if I could share.
E
Thank you. All right, so, like I said, there is gonna be a demo this afternoon, and a presentation, where we'll go through all the different components of it. But just kind of speaking to some of the things that we talked about today: Crossplane allows you to manage infrastructure via the Kubernetes API, and that works exactly how you would expect.
E
We provide those CRDs via providers, so you're going to see a lot of parallels with something like Terraform, right — being able to plug in different cloud functionality, or kind of any API functionality.
E
You know, independent of the different providers that you're plugging in, you can do what's called composition. So this sounds a little bit like Terraform modules: basically, you can combine different resources from different providers.
E
The one I specifically wanted to mention, in relation to some of the things we've talked about today, is: we have providers for the different cloud providers, of course, which is the typical use case. They have varying levels of support; we typically add new resources as folks request them, and then we also try to have a steady increase as well. We're also working with the various cloud providers to, you know, have them support them as well.
E
We have kind of a large community that adds these different providers — so here's GCP, for example. But something you may be less familiar with, from a management perspective, is something like provider-helm. So this is kind of the galaxy-brain future we could have for some of this infrastructure, I guess you could say. We have resources like a Helm release, and you can compose these into higher-level abstractions.
E
So, in this case, we're doing things like creating a cluster abstraction that creates an EKS cluster — which, behind the scenes, has all these different types of resources in there that are required for spinning up an EKS cluster — and then we're also putting in there — well, once the cluster is ready, we're dropping, like, Prometheus into the cluster. But we're doing this all as the creation of a single object. And this is a sprint through all this functionality.
E
I
promise
we'll
cover
it
more
later,
but
essentially
you
can
get
kind
of
that
similar
abstraction
workflow,
and
I
want
to
see,
if
there's
an
example,
potentially
here
or
I'll,
hop
over
to
the
docs
of
basically
having
a
single.
In
this
case
it's
a
postgres
database
type,
which
is
kind
of
simple,
but
where
it's
backed
by
different
cloud
providers.
So
you
can
say
maybe
we
have
a
pro
cluster
abstraction
and
that's
backed
by
gcp,
gke
cluster
or
maybe
that's
backed
by
you
know
an
azure
aks
cluster
or
something
like
that.
E
The
interface
you
get,
though,
from
a
creation
perspective
is
the
exact
same,
and
you
can
provision
that
and
that
could
you
know,
put
other
things
like
prometheus
in
there
in
this
case
spin
up
a
cluster
and
install
prowl
into
it,
so
that
could
solve
some
of
the
different
multi-cloud
situations,
but
anyway
that's
kind
of
the
the
sprint
overview.
Maybe
that
will
give
you
a
little
bit
of
motivation.
I
hope
folks
are
able
to
make
it
later
today,
but
also
happy
to
chat
with
folks
offline
as
well
about
some
of
the
functionality
here.
A
Awesome. Okay, well, thank you all for your time today, and I am super excited to see Daniel and some of you later today for the Crossplane demo. So, happy Wednesday — take care.