From YouTube: Kubernetes WG K8s Infra - 2021-08-04
Description
A: Okay, hi everybody. Today is Wednesday, August 4th. This is the Kubernetes K8s Infra working group bi-weekly meeting. I am your host, Aaron of SIG Beard, Aaron Crickenberger, also known as spiffxp at all the places. We're all going to adhere to the Kubernetes code of conduct here by being our very best selves to each other, and you can watch this meeting on YouTube later, as it is being publicly recorded.
A: I tried to front-load the agenda with my talking points, but I ran out of time, and I know I sometimes have a tendency to say neat stuff, and if it's not written down I'll have forgotten that I said it. "I got your back." All right, thanks. I'll post the agenda once again; if anybody wants to put themselves down as an attendee, go ahead.
A: There we go. So, 237,000 over the last 28 days. Real quick back-of-the-napkin math: if I multiply that by 12, anybody know what that is? It's like 2.7 mil, I think.
A: Oh, 2.8 million. So right now, at our current spend rate, we're about to cross over the 3 million mark of our GCP credits, and so we need to potentially take a step back and understand whether what we're seeing is a new run rate, or whether it is money spent due to prototyping. Even just looking at this visually, right, you can see that this isn't even representative. If this is the new steady state, then it's more than 240k in 28 days.
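As an aside, the back-of-the-napkin math in that exchange checks out; here is a minimal sketch using the dollar figures quoted in the meeting:

```python
# Back-of-the-napkin run-rate math from the meeting: ~$237k of GCP
# credits spent over the last 28 days, extrapolated to a year.
spend_28_days = 237_000          # USD over the last 28 days
monthly_rate = spend_28_days     # treating 28 days as roughly one month

# "If I multiply that by 12" -- the quick estimate made in the meeting.
rough_annual = monthly_rate * 12
print(f"rough annual run rate: ${rough_annual:,}")   # $2,844,000, i.e. ~2.8 million

# A slightly more careful version that scales 28 days to a full 365-day year.
annual_365 = spend_28_days / 28 * 365
print(f"365-day run rate: ${annual_365:,.0f}")
```

Scaling to a true 365-day year lands even closer to the 3 million mark discussed, which is why the question of run rate versus one-off prototyping spend matters.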
A: Well, the number went down by about a third, and this looks more like the typical artifact stuff that I'm expecting to see.
A: So I don't entirely want to put you on the spot here, but I will have to admit I have not been keeping careful track of what's been going on with this project and what we're running. I was wondering if you could maybe talk through what's been happening there a little bit.
A: The number of jobs is surprising to me, and this is probably an artifact of me not having talked to scalability as much as you have recently. Yeah, my history is probably two years old at this point, where I remembered that we had a 5,000-node job that we ran correctness tests against, a 5,000-node job that we ran density tests against, and that was it; and then we had other, smaller 1,000- and 2,000-node jobs that ran a little more frequently.
D: Let's see... okay. When you see everything with the prefix gce-master, that's basically most of the periodic jobs running on that project.
D: Okay, so I did some investigation, and what's happening is we have had, since July, a constant cluster with 5k nodes, and that was basically created when I added a pre-submit with 5k nodes. So I did some investigation today, and I'm going to destroy that cluster, probably tomorrow.
A: None of this works. Okay.
D: So I'm going to take that down, just tomorrow or Friday.
D: If you can take it down, I'm all for it. All right. You just have to basically delete all the instance groups.
D: Let's take a look at it, okay. But my question is: what's the baseline for this project? Because we still don't know what the baseline for this project is once we finish migrating all the scalability jobs. Yeah, and are we supposed to have the same number, knowing they want to increase the number of nodes also at the pre-submit?
A: So the jobs that currently run on community infrastructure are things that have real clear, bright lines: they're the jobs that are either... I love how I'm still scrolling here... the jobs that are either release-blocking or merge-blocking, the things that the vast majority of contributors use today. It's these jobs. Oh, darn it, I thought for a second it might actually all be green for once.
A: That was amazing. So all of these run on the infrastructure, and then if we were to open it up from that, it's like, well, let's look at all the jobs that are performing, and this is less of a green picture.
A: And for grins, I'll even go and take the cluster that they run on, and I will select "default", because I have "default" as the google.com build cluster. This is the cluster we were having so many problems with back in the day, which is why we moved the release-blocking and merge-blocking jobs to one that auto-scaled.
A: Every single one of these jobs: are they worth the money? I don't know. It's also really, really difficult for me to tell you how much money each of these jobs costs, because of the way the cloud works. If I think about the resources used, it's the compute time.
A: Running a given pod, and then, if that pod happens to go spin up or touch other resources, it's all the resources that that pod uses. Because we use Boskos to manage pools of GCP projects, and it'll just check out a random GCP project from a library, we don't really have the historical data to connect which job used which project at which time, and then spun up which things at which point.
A: If their actual policy is supposed to be public, but it's not... Likely I'm going to be there in person, but I am happy to have this discussion; it doesn't even have to be attached to anything. I think all I'm suggesting is that this is a sign that it's time for us to start having that discussion, because I think this is one of the gnarlier policy questions that the community as a whole is going to have to wrestle with. It kind of even gets knotted up in the... well, sure.
A: So, if there's any way I can more firmly emphasize it: this is not Arnaud's fault. This is the fruits of our own labor, and it's awesome. This is cool. I'm just so excited to have a gnarly policy discussion. Okay.
D: I think one option was not to migrate the failing jobs. Like, if we identify one job as having been failing for, like, a year, we don't have to migrate it.
A: If I can... I don't know if that's so much of a K8s Infra thing or not; it could be. I was going to try and draft sort of a CI policy version 2, son of CI policy, sort of a proposal that described ways that we could set policy.
A: And I still think, even if it's a human being looking at jobs and saying, "well, these are perma-failing, I'm going to delete them", I am okay with that. I really just need, you know, sufficient time to say: "hey, this has SIG Node's name on it. I'm gonna delete these, SIG Node; please speak now or forever hold your peace."
A: I think that's still a fair approach to take, at least initially, just to help us, as a community (not us the working group, but us as a community), sort of prioritize what should be moved over and what the community should pay for. And then there is the other question: Google shouldn't necessarily be footing the bill for all this stuff.
A: Now, do we need reserved or committed instances instead of on-demand instances, or not?
D: Not really. This basically applies... basically, we calculate, or try to define, the baseline, then try to see if we can commit to, basically, a year of resource consumption as the first phase, and later commit to three years of resource consumption, and that's good.
D: That's gonna be applied to the 5k project, the aaa cluster, and basically some of the scale projects managed by Boskos. But the one thing I want to do is basically try to apply the vision we have for the last milestone in January of next year: we basically define the baseline, and we ask for a commitment.
A: I think that's a completely valid thing we should do. I'm also just really, genuinely curious whether there are some levers we can exercise internally to say...
A: That is a solid idea, and we should make sure we loop Tim in on that. My understanding is, what's happened with the credits...
A: This whole program started about now in August, and so the credit renewal deadline came up in August, which is just kind of a weird, squirrelly thing in terms of lining up with financial stuff. And so we've renewed for enough credits to get us to the end of the calendar year, and that way we can have a discussion about renewal of credits at a time that makes more financial sense for everybody.
A: Yeah, we are doing this better, and I agree with your idea; being able to do capacity planning is super important for this. Like I said, I just sort of always assumed that we were...
A: We were going to have to do capacity planning at some point, but the work of just pitching stuff over was the more important work to do, until capacity planning became important, and this is the signal that it's time to start thinking about it.
A: The next things I'm going to blow through kind of quickly, just as a heads-up. If you're wondering why this doc suddenly got a lot shorter: it's because I was trying to type up the agenda this morning, and each character took like three seconds to land on the page. My browser was not happy, because the document was over 120 pages long.
A: So I did the thing that we've done for documents that grow this long, like the community docs and some other sync docs: I split it out into meeting notes by year. So all the previous meeting notes are still around; they're just linked at the top of the doc, in documents that are titled per year.
A: Yeah, okay, essentially. So I will spend a lot of time... but yeah, I agree, it seemed like the natural time to start this conversation; I don't think anybody here wildly objects to it. So now it's the process of seeing what the rest of the community thinks about this, and whether it should move forward. I will see how many things...
A: I may not get them all, but I'm making sure that the Zoom link's not going to change, the calendar links aren't going to change, all that stuff. So I'll just be communicative in Slack and on the mailing list if it looks like we do need to change things, and let you all know accordingly. And for sure, if you have any opinions or questions or comments...
D: About the other arcs, I think it's better to finish deploying the kitten instance before we move forward with that.

A: I completely agree.
A: Okay. Unfortunately, I don't think anybody who works on the auditor is here at this meeting, but I feel like the auditor has gotten significantly less noisy with its alerts in the k8s-infra channel, so I'm really tempted to close this. But it seems, based on Tyler's latest comment, that they're still working on this, so I'm going to move it to the 1.23 milestone, since there's still more to do.
A: This issue is deprecating the google.com locations where Kubernetes CI builds are hosted. The only reason this is still open is because I need to send out this email, and I was just a couple of sentences away from finishing it before this meeting started. So I will send this out soon, and I will close this as part of v1.22. But I did want to show this cool graph. It's in bezos units, so... there used to be a ton of traffic.
A: The time span here is six weeks. There used to be a ton of traffic to kubernetes-release-dev, like six to four weeks ago, and then we sort of made this concerted push to start moving most of the jobs over, which got rid of most of the traffic. And then it was about cleaning up a bunch of the other repos, like kops and the Cluster API repos and things like that, which got us down to this level of traffic.
A: I think I turned on GCS logs for this, but I'm not going to bother doing the analysis for now, because I'm assuming the traffic that remains is from older release branches that are publishing to these buckets. You know, our deprecation policy is that older supported releases of Kubernetes keep pushing to these places, but as the versions fall out of support, so too will the builds disappear from these buckets over time, and eventually this will drop to nothing and we'll remove it.
A: The kubernetes-ci-images GCR repo also had basically no traffic, since Lubomir and a bunch of the Cluster API folks already worked to switch the Cluster API projects to get their images another way a while ago.
A: Consider using Google-managed SSL certs: for aaa, the only thing that remains is the self-signed cert for the k8s.io canary. Is that correct? Is there any reason we can't make that a managed cert?
D: We can make that a managed certificate. I think my real issue is how we apply that to the canary deployments and also to the production domain, because we basically run the tests on both sets of domains.
D: So we have two options. We can basically shut down cert-manager, create a self-signed certificate manually, and basically apply that to the canary ingress; it can be a 10-year certificate. The problem with that is, every time we add a new domain to the canary deployment, we need to regenerate the certificate again.
D: For this particular case, I don't know, because, like I said, the managed-certificate controller is very...
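For reference, a GKE-managed certificate is declared as a small manifest and attached to an Ingress by annotation. This is only a sketch under assumptions: the domain, certificate, Ingress, and service names below are placeholders for illustration, not the project's real canary configuration.

```yaml
# Sketch of a GKE ManagedCertificate for a canary domain (names are placeholders).
apiVersion: networking.gke.io/v1
kind: ManagedCertificate
metadata:
  name: canary-cert
spec:
  domains:
    - canary.example.org   # adding a domain later means updating this list
---
# The certificate is attached to the Ingress via an annotation.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: canary-ingress
  annotations:
    networking.gke.io/managed-certificates: canary-cert
spec:
  defaultBackend:
    service:
      name: canary-svc
      port:
        number: 80
```

The trade-off discussed above follows from this shape: a manually created self-signed cert avoids the controller but must be regenerated whenever the domain list changes, while the managed certificate updates with the manifest.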
A: Migrate away from k8s-federated-conformance: I need to follow up with Ben. I basically created this checklist of all of the Google... brief context: this is a project that hosted GCS buckets that people could request write access to, so that they could publish their results there on an ongoing basis.
A: Some are not Google-owned, and so we're just kind of picking through and finding people who are responsive or not responsive. We set a deadline a while back of four weeks, and it's been six weeks, so I think we're real close. But since we passed the 1.22 release deadline, I'm going to kick it into 1.23; I think we're real close to moving a full project off of our migration list.
A: This one: we've had a couple of PRs that break image promotion. Linus and Tyler, who both worked on the container image promoter, are aware of this issue and have begun working on it. So hopefully I won't have to look really super-duper carefully at image promotion PRs when they come in.
A: Usually the individual SIGs just smash that approve button, and then jobs break because they never saw the PR's failed tests.
A: Last thing: migrating the k8s-artifacts GCS logs bucket to its own project. I feel like Arnaud and Rhianne have been collaborating on this, but to restate what we're trying to do:
A: There was concern that having a bucket that contains a lot of personally identifying information inside of the k8s-artifacts project was going to allow people who had access to the k8s-artifacts project, but were not cleared for PII, to view the contents of that bucket. So what we decided to do was create a separate project whose sole purpose is to contain PII, and then we set up artifact logging (like GCS access logs) for most of our buckets to go to that single special-purpose project.
A: I guess I'm kind of wondering if this is something we can move on in the next week or two.
A: I think this is a question kind of directed at Hippie. And, yes...
E: Yeah, I think that's fine; take a swing at it here. Clearly, you want us to move the data that's currently there into the new bucket, which currently exists, verify the permissions are the same, and go ahead. At that point, let us flip the switch, or ask you to flip the switch, so that the new data goes there as well. We need to coordinate that flip.
A: I think what I'm proposing is that, at some point, this does not need your involvement. I'm one of the few people who has the permissions to flip the switch to have all new logging data land in the new project.
E: Okay, Rhianne, that's kind of everything that you need here. You've got the reporting, so we need to copy that data over, update where your buckets are pointing, and then give Aaron the go-ahead to pull the switch. I think that would take... yeah, we'll do that in the next seven days.
A: Yeah, because the worst that would happen is: I flip the switch and rsync all the historical stuff over, but you're not pointed at the new stuff, so your results are gonna... you know. You're right, okay.
A: I also just wanted to take the time real quick to celebrate some of our successes. I'm gonna kind of go through these real quick and see what jumps out. I know we had Carlos manage to use k8s-infra stuff in order to publish GCP VM images, which are now usable by other projects, for Cluster API GCP.
A: Arnaud, Tim Hockin, and others migrated us away from cert-manager (except that one cert) to GKE-managed certificates, and I've never had to look at an email about our certs expiring and flip out; it's really been great. I bent a lot of bash so that we have an audit job that opens up pull requests automatically. I bet there's one open right now, and it's become a lot less noisy than it used to be.
A: You know, this started from a script that the ii team wrote, and we've since sort of improved it: it's a little more selective about when it bumps a PR, it only updates the PR when there are changes instead of every time, and we've improved a little bit how it scrapes GCS buckets, so we're now getting feedback every two hours instead of every six hours, and it's pretty easy to see. We've added more information.
A: So now I know that the prow-build-trusted cluster just upgraded to Kubernetes 1.20. That's awesome, and you know why that happened? It's because we moved to release channels, which is totally something that Arnaud pushed; I've never had to think about upgrading a cluster ever again. And we've managed to catch all the deprecated and action-required type stuff that hits these release channels, because Arnaud helped me get conftest up and running and then started writing Open Policy Agent Rego policies to sort of catch the deprecated resources that we needed to change.
A: Let's see... you know, some of the bash I bent allowed us to add custom roles, like the audit-viewer role, so that the ii team is capable of looking at whatever they need, wherever they need it, and we are much more comfortable adding contributors to this.
A: We made a ton of progress on using CI images, but we didn't quite get there. Thanks to Arnaud's and Jim Doga's help, we managed to reorganize all of the apps that run on Kubernetes infrastructure under a single apps directory, and then, thanks to that, I was able to set up a job so that if any one of these directories gets hit by a PR, there's a job that automatically deploys them, because they all have a deploy.sh script.
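In prow terms, the auto-deploy setup described there is a postsubmit gated on the apps directory. The following is a hypothetical sketch, with the job name, image, and script path invented for illustration, not the actual job definition:

```yaml
postsubmits:
  kubernetes/k8s.io:
    - name: post-k8sio-deploy-app        # hypothetical job name
      decorate: true
      # Only trigger when a merged PR touches something under apps/.
      run_if_changed: '^apps/'
      branches:
        - ^main$
      spec:
        containers:
          - image: gcr.io/example-project/deploy-tools:latest   # placeholder image
            command:
              - ./apps/deploy.sh          # each app directory ships a deploy.sh
```

The key piece is `run_if_changed`: the job fires only for merges that modify the matched paths, which is what makes per-directory auto-deployment practical.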
A: With a couple of exceptions: publishing-bot is still a manually deployed thing. But it is so amazing that, for k8s.io, you can make a change to an nginx config and it'll just magically deploy; and if you make a typo that causes that nginx config to break nginx, it doesn't hit prod and break prod right away, which is also cool.
A: And, I don't know, I think I'm gonna stop there, but I just wanted to take a moment to celebrate and thank everybody for all of the work that we've done during the last release cycle, because it was a lot. Oh yeah, the last one...
A: We got our Terraform to Terraform 1.0, and we've moved from using Terraform just to manage a few GKE clusters here and there, to standing up service accounts, provisioning GCS buckets, and using it to help the ii team stand up some of their stuff, or stand up the PII project. I'm really excited about potentially leaning harder into that.
A: So that people understand: here is everything that we have done, and if this sort of thing interests you, come help us out. I'm gonna skip doing this for 1.23, because we're almost out of time already, and I'd rather talk through some of the other things that folks have on their agenda so far. Arnaud, you had something about container image hosting for Elekto.
D: Before I start, can we skip your item? And, by the way, Triage Party works great, thank you! Okay, so my request is kind of simple. The next election for the community is supposed to happen this year. A CNCF internship was used to build a platform that's gonna be used for the election: it's called Elekto, and they plan to use it to basically do the election this year.
D: Josh Berkus was the mentor for this internship, and he asked me if it's possible that we basically host the container image for this project in the community registry.
A: There are other things... the only exception that we've made thus far is images that are packaged up as part of the Kubernetes release, where we require the utmost build-provenance capabilities, to know for sure, for sure, that this is the version of etcd or CoreDNS or whatever. But other stuff: we don't host Triage Party, and people are actively depending on Triage Party to run their portion of the community.
E: We don't host APISnoop either, and that's because it's the CNCF's job to help the things that our community needs. And I'd love to find a way to further that conversation, so that for the things that need to go beyond the container of Kubernetes, that serve a broader purpose, that are a dependency of Kubernetes, we can work with the CNCF to find a way for that to work.
E: I know that I'm actively working with Priyanka on a program to allow our projects to use some credits, and I think for something like this I'd love to connect with Josh and see what avenues are there. It may not fit into our current thing, but I'll definitely make sure we find a way for this to work: to support the efforts of the Kubernetes community without necessarily shoving it into the box that is Kubernetes. Yeah.
A: Like, I definitely have a temptation to copy-paste everything we're doing here and use it as a much larger thing that the CNCF actually did. One thing at a time: we still have a lot of infrastructure to migrate just for Kubernetes, so this is where our focus is. I've had to give this answer to other things that are related; CRI-O was the most recent one, where it's like, yes, I think it's...
A: The suggested path forward would be, I think: GitHub has a container registry that is available for public repos at no cost, so you could publish the image there. The Kubernetes community definitely runs software that is not built and produced by the Kubernetes community in order to run itself; again, I point to Triage Party. So I have zero problems standing up a deployment of Elekto in k8s infra. That makes total sense.
A: That's a conversation... yeah, I would say open it up. My answer: I am but one of, you know, N people involved in this. I think opening up an issue to have the discussion would be good. It's yet another reminder that we need to kind of put together our case law, our historical precedents, for what is in scope and out of scope for this. Okay.
D: Oh, okay, last question. It's kind of a simple question, but also a complex question: where are we running that infrastructure? Do we put everything in kubernetes-public, or basically in an identical project? It's not costly, as there is nothing to basically run: just a deployment plus the database. "I would say kubernetes-public, that's fine." All right, so we don't care about security? Like, we have the list of products; we publish everything against the database. Who's gonna have access to that election system?
E: I just wanted to make sure that, if possible, we could get the YouTube of this up today, because I'm about to help run a weekly three-to-seven-pm thing locally, and this would be lovely material to say: this is what it looks like to run the cloud together.

A: It's going to depend on how quickly Zoom finishes transcoding.
A: Thank you for your time; thanks so much, everybody. I super appreciate everybody's time. Thank you for all you've done to make this such a successful release cycle and all this stuff, and I look forward to seeing you all again in two weeks, and online between now and then. Thank you. Thank you.