From YouTube: Kubernetes SIG K8s Infra - 20230621
A
Let's start. Do we have a note taker? Oh yeah — we have no note taker, but — sorry, thank you, Jiffy, you're the best. Got you, okay.
Before we start, do we have anyone new to the call? We have one person — obviously feel free to do an intro; it's not required, but you're free to.
D
So I'm Maureen. It's my first time joining, just to check what's going on and get a sense of what k8s infra is about.
B
Yeah, happy to introduce myself. Hi everyone, I am Hannah Aubry from Fastly, representing Fast Forward, our open source program and open internet initiative. Super happy to support Kubernetes and this SIG. Thank you.
A
Okay, let's jump to the first thing: billing and costs. I don't see anything.
E
One thing that Rhian pointed out — I want to say like two weeks ago — was that there was a spike in GCP cost. Not a lot, but it had to do with scale testing, and then it went back down, which surprised no one.
A
We got that spike of cost, so I think we are aware of that. So that's it — cool.
A
Okay, thank you. Oh — on the billing reports, what do we prefer, daily or monthly? I would go monthly, so I don't have to check daily.
F
Stability — job stability — and the cost side. The other thing is that we need to expect a bump over time in the AWS bill, because we are going to start pushing packages there. This is a topic that I added for open discussion, about OBS, but it is something to expect to see here in the upcoming weeks: there's probably going to be some increase, because we are going to start building out the infra for that as well.
A
I think it's fine. Hopefully they bring that conversation up in Slack and we see what's happening. Yeah, the question will be: is it worth doing another report? That's the main question. Also, GMC is not here, so I don't think we can have this conversation — that's fine, we can talk about it.
G
We should be able to get the same report — the same stuff — in two weeks. It probably won't take two weeks to load, but yeah, it should be the same report.
A
Okay — anything else?
B
Hello everyone. Thank you so much for the post you recently put out and the kind words about dl.k8s.io adopting a CDN. We were super excited — and wow, awesome to hear that y'all are just gonna do a straight cutover of the traffic, especially at the volume y'all are operating. So we were kind of interested — it says "live stream" here — but after some consideration, and presuming y'all will be focused on making sure everything goes smoothly, maybe not a live stream; but we'd love to possibly capture some content. For example, if you all would be coordinating on a Zoom call like this one.
B
Perhaps recording that and, of course, cherry-picking the super exciting moments. In addition, any sort of observability tools — in particular our suite of observability tools — recording those at the moment, to see the traffic graphs go up, etc.
B
And then, of course, to the point about a blog post: as you shift the traffic onto our network and notice any performance improvements, we'd love to work with you on a post detailing those.
A
So nothing much except that. The rest is basically: do monitoring and ensure we don't deal with any major breakage of the infrastructure, because that's gonna be it. If we feel like the rollout is too breaking for the users, we just roll back. So it's a matter of doing the rollout for 10 minutes and being in monitoring mode for two to three hours, or maybe the entire day/night, depending on how I do that. And that's it.
E
I definitely think: record cutting the PR against k8s.io — record cutting it over — then watch the monitoring of the traffic essentially getting cut over and moving to Fastly, and then maybe five minutes of us watching that, and then we can wrap up.
B
Right. And something we'd like to do from our side: we have a team called Mission Control, and they do something called live event monitoring. So from our side we'll also have a team of SREs monitoring your traffic to make sure it goes smoothly, and we could even — if you want — have some of them join your meeting. And of course we'll be monitoring to make sure nothing falls down on our side.
A
Kind of nine to eight, yeah — that should be fine for me. All right.
B
Yes, yeah — well, I was asking because I've just learned there is an API to some parts of our observability tools. I'm not sure exactly which, but they could plug in to your existing dashboards, and so maybe you could see, like, these graphs go down, these graphs go up.
A
I think right now we are so focused on the rollout that observability is something we want to do post-rollout, because we just leverage the Fastly dashboard currently. To be more specific: I know for sure that in the Fastly console you have some dashboards to see what's happening — the traffic distribution per country, those kinds of things. We can just use that for the recording, and use that for a few days and later, because we want to be open about this.
A
I don't see a big blocker in this, because Fastly has an exporter for Prometheus metrics, so it's kind of straightforward to do this.
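For context on that point: Fastly publishes an open-source fastly-exporter that turns its real-time stats API into Prometheus metrics. A minimal sketch of scraping it might look like the fragment below — the job name, service address, and port are illustrative assumptions, not the SIG's actual configuration.

```yaml
# Hypothetical Prometheus scrape config for Fastly's fastly-exporter.
# Assumes the exporter is reachable in-cluster at fastly-exporter:8080
# and was started with a Fastly API token for the relevant service.
scrape_configs:
  - job_name: "fastly"
    static_configs:
      - targets: ["fastly-exporter:8080"]
```

With something like this in place, the Fastly traffic graphs could land in existing Grafana dashboards rather than only in the Fastly console.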
A
Yeah — because the origin is also public, the idea is to be able to capture at least two petabytes of traffic during the rollout. I think that's my hope, because currently people can bypass Fastly and hit the origin directly. So I think the question is: right now we roll out Fastly and we use the GCS bucket, which is public, as the origin, and later we change the bucket to something private that we only serve to Fastly. That's an ongoing discussion, but we haven't decided.
A
We are not clear about this, but I think the first thing — because we want to implement some changes in how we handle that origin / GCS bucket — is to first put Fastly in front of it, and later do whatever change we want without breaking users. One benefit of Fastly is that it's a caching system, so we can have things in cache and break the origin, and it will be fine for the end user, and we can fix whatever things we broke in the backend.
B
My last question is: after we capture the screen recordings and such, we'd love to have someone from this group possibly do some voiceover — are any of you super excited about doing such a thing?
B
Sweet.
A
I think there's — sorry. My question is for Jiffy: how do we want to handle communication post-rollout? Do we want to do CNCF first, or community first?
E
For a lot of this stuff, I would say the community handles the comms and then the CNCF will just boost it.
E
I think tentatively, leading up to Chicago, we're going to do kind of a rolling-thunder approach about all of the k8s infra donations, up until Chicago, and then it'll be a big thing. That's my thing — or that's my guess.
E
You're making this kind of difficult for me. Realistically, what we've done so far with the other donations is: the community has posted blog posts and we have amplified them. In the background, the CNCF has been working with the donors on marketing materials, because usually they want them leading up to larger events, as additional ammunition so to speak. Hannah, I kind of defer to you.
E
If
you
want
this
to
be,
you
know
coinciding
with
the
cncf
I,
can
connect
you
with
the
correct
folks
and
they
can
handle
that,
because
I
am
definitely
not
a
marketing
person,
but
yeah.
F
Okay, so let me get through the normal OBS update quickly. I think that from SIG Release's side the blocking parts until now are done: we have the integration, we are publishing packages. But now we are coming to the infra part of it — how do we want to host packages, and how does it work on the OBS platform itself. I left a link to the Slack thread that has some information and a lot of discussion, mostly between me and Dims from our side and the OBS team, so I recommend checking it out.
F
That's if you want to see some in-depth details about how everything is going, but I would like to do a quick recap for this meeting as well, and then eventually try to gather some feedback. The first thing is that I would like to briefly present how OBS works when it comes to network traffic and what they can offer us. They basically have a network of mirrors — let's call it their own CDN — and basically that is provided by the community. And speaking of community:
F
What I have been told by folks is that they can handle something like one petabyte per month with all mirrors, but they don't use all mirrors for all packages, right? So in reality it is maybe 30 gigabytes per second, or something like that. That's probably not going to be enough for us — because we actually don't have any idea how much traffic we'd use; we don't have insight into it.
F
It was described earlier by Ben that the Google repo is used for gcloud and many other things, and we can't just filter out Kubernetes packages and figure out the traffic. So, based on what we have with dl.k8s.io and with the registry and everything, I think it is very safe to assume that packages are a significant amount of traffic, and that we need to prepare for at least what we have with dl.k8s.io.
F
If it is less, that's great; it's going to be a problem if it is more. But let's be prepared for something similar, because packages are — let's call it — the most popular way to install Kubernetes. If you go to the Kubernetes documentation, essentially everything via packages is the recommended way. So I think assuming that the packages are very popular is a safe bet.
F
So, given the constraints that we have — that we don't know exactly how much bandwidth we need, and that they can provide about 30 gigabytes — I think we need to add our own mirrors. That's going to take part of the traffic, and that's also a good thing, because we are providing some guarantee: we have our own mirrors that we maintain and that we know work, so that if some mirror decides "we don't want Kubernetes packages, because that's way too much traffic" or something —
F
— then we still have stability, and the package system is safe. That's how it works. But the problem is that this is nice in theory; in practice it's a problem, because the way mirrors work in OBS is that they expect you to have a server — a VM instance or whatever — where you run all the infra, and they use rsync to access it, to read packages and eventually to push packages. And that's not going to work with S3 buckets, because S3 doesn't support that.
F
But there are two ideas, right?
F
The first one is that we have one small server in AWS that acts like a proxy used for communication with OBS. That server has every bucket mounted, OBS pushes via rsync to that server, and effectively they are writing to a bucket, because the bucket is mounted on that server. So that's one idea right now. And another possibility, which we are going to research, is that they use rclone to push to S3 directly.
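To make the rclone idea concrete: a sketch of the remote definition it would need is below. The remote name, region, and bucket are invented for illustration; none of these values come from the meeting.

```ini
# Hypothetical rclone remote for an S3 bucket receiving OBS-published packages.
# Credentials come from the environment (env_auth), not from this file.
[obs-s3]
type = s3
provider = AWS
region = us-east-2
env_auth = true
```

OBS would then publish with something like `rclone sync ./repo obs-s3:example-packages-bucket/repo`, instead of rsync-ing to a mounted filesystem on a dedicated server.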
F
Then we don't need a server, or its scope gets much smaller, and we basically simplify the infrastructure but still use S3 buckets. S3 buckets are nice because we can easily replicate them in many regions, reduce the cost, and things like that. There are also some ideas from my side to do something similar to registry.k8s.io, where we have this redirector thingy.
F
I don't want to rush it — a lot of the OBS folks are on vacation. I'm going to try to follow up as much as possible with them to try to move this forward. But this is the situation right now, how it is going to look, and that's why I said the bill is going to go up: because we are going to start pushing stuff into AWS. I am probably going to request a new account for that in a couple of days — maybe even tomorrow — and then we are going to start rolling out infrastructure.
F
That's one part of the problem. The second part of the problem is the CDN solution, because ideally — since we have many buckets and everything — we should probably put some CDN in front of it. I had some discussions with Arno today, and one option is to use CloudFront, the AWS solution, but I am not sure I'm a huge fan of it, because CloudFront wouldn't be a permanent solution: it is expensive, and there may be potentially other problems.
F
I would be more in favor — I know this is a super unpopular opinion — of using Fastly, because this is the way we want to go, and if Fastly is providing all the tooling and everything, I think that is maybe the best idea. And I think we have some space there.
F
We could host packages there, at least in the beginning. The OBS project itself is still alpha with 1.28, so it is not going to be that heavily used, and we'd set everything up in place with Fastly — with the solution we want to use long term — and we eventually hope to be able to increase the traffic if that's needed for packages and artifacts.
E
I'll just say, first and foremost: good work diving into this rabbit hole and coming out somewhat unscathed.
A
They have a network of mirrors; we can just set up one and try to be part of the CDN / network mesh. We would pay for serving packages — or all the OBS packages would be open — but in a single region, a single location, because they have their own logic for how they redirect user requests to the closest server for any user.
F
Yeah, but the thing is that you're eventually going to get users downloading from that server, right? When we talk about mirrors — mirrors are used by users to download stuff. It is not just about the storage; the storage itself is not the problem. The problem is that users are going to download stuff from it.
F
Yeah, the problem with that might be that I don't know if the servers they have in Europe are going to be enough to handle our packages. The problem is the traffic that we are going to generate with packages, right? Let's say we put a server in the US: it is only going to be used by users in the US. Then we have a problem with users in Europe — they are going to use the European servers, but are those servers going to be able to handle that load?
F
Yeah, that's not the case — they have multiple servers for sure, as you can see in the mirrors report. But I think that, because a lot of the servers are coming from universities and places like that, I doubt that they have a lot of bandwidth and that they can handle the Kubernetes project. I really doubt that.
F
That part is okay, but the point I wanted to make is: even if we use CloudFront or any other CDN, we are still going to have part of the traffic going to OBS mirrors and part of the traffic going to our mirrors. Even if we use CloudFront or anything else — because for now I wouldn't really opt out of the OBS mirrors at all.
F
Being part of the OBS network mesh — that's not a question. It is just: do we want to have any CDN in front of those S3 buckets, or do we just want to provide direct access to the S3 buckets and let their CDN, the way it works, handle the traffic? Because, now that I think about it: if we just have ten buckets in different regions that provide direct access — so no CDN — then their mirror system is going to take care of balancing the traffic between regions, between their different OBS mirrors. So maybe we don't even need a CDN at all.
F
I think they can access that stuff via HTTPS. Okay.
B
That's a lot — it's a lot of traffic, but it's not so much for us. Let me speak to my team and see what we can do. I'm fairly confident that we can get that, yeah.
A
Okay — next, the update on job migration.
F
Yes, that's me again. So the first thing that I wanted to highlight is the general progress, which is quite good — I think we are managing to do pretty well, and the community is doing an amazing job. I would also like to say thanks to everyone who was involved with this. I think we have two folks on the call: this is if-tech — sorry if I mispronounced your name — and Ricky. Thank you so much for helping us with this, and thanks to everyone else in the community who decided to help us with this.
F
I don't have complete numbers for today, but let's hope that maybe for next time we can come up with something. That's about the general progress — let's continue with this and see how we can keep it moving forward. Another topic that I had:
F
Folks doing job migrations might need to pay attention to things like the resource requests and limits, issues with cron jobs, GOMAXPROCS, some problems the kubeadm team had, and so on. We should try to have this documented in a central place — outside the issue, in a markdown document that can be easily linked, so folks can find it. For example, I think folks are not really aware of the monitoring dashboard that we have, or of how to properly handle the resources.
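As a rough illustration of the settings being discussed — the job name, image, and sizes below are placeholders, not values from any real SIG K8s Infra job — a Prow periodic with explicit requests/limits and GOMAXPROCS pinned to the CPU limit might look like:

```yaml
# Hypothetical Prow periodic showing explicit resource settings.
periodics:
  - name: example-migrated-periodic
    interval: 6h
    spec:
      containers:
        - image: golang:1.21
          command: ["make", "test"]
          env:
            # Stop the Go runtime from sizing itself to the node's full CPU count.
            - name: GOMAXPROCS
              value: "4"
          resources:
            requests:
              cpu: "2"
              memory: 4Gi
            limits:
              cpu: "4"
              memory: 4Gi
```

Setting these before migrating, rather than after the first throttling or OOM incident, is the "day zero" point made later in the meeting.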
F
So I think we should still have a document that goes in depth about this. We should have a summary — a TL;DR of "just do this" — but if you want to do it in a more proper way, then you read the document to learn about the issues you might encounter, and so on. That is something I would propose, that I could work on, that we could hopefully come up with soon and provide as a resource to the community.
F
Another thing is about Grafana: the dashboard that we have for monitoring jobs turned out to be very useful, and I think it was again Ricky who brought up that we might consider moving that dashboard to the default cluster as well. I know that the default cluster is often considered no-touch — like, we don't touch it — but I would still create a PR to propose it. I think it'd be just the Grafana dashboard, because it should be mostly portable.
F
The Grafana dashboard, maybe some additional ServiceMonitor — it should be easy to port it to the default cluster. Then someone could take a look at the jobs they have running and determine the resource requests and limits beforehand, before they even start with the migration, and then have proper requests and limits set from day zero and avoid the stress with their jobs. So that's also something that I have planned. I have one more topic, but before that: do we have any questions or any comments?
A
Are you okay with opening a pull request to start the migration guide in the test-infra repo? Yes? Okay — so that Ricky and the other folks involved in that migration can give feedback, and we basically keep improving the guide to the point where it's easy for folks to do the migration. So yeah.
F
Okay, another topic that I have: Arno and I had a discussion today about accounts, and we are running into some problems. We have Jiffy on the call as well, so maybe you can help us with that too. Okay, so in the default cluster we have access to some resources that were not coordinated with the CNCF in any way — they were given to the Kubernetes community for some subproject. The way that has worked — and I actually had a similar experience doing this for the Cluster API provider for DigitalOcean:
F
When DigitalOcean donated some credits back then, it was basically that you coordinated something with the cloud provider — with someone who is going to donate resources — and they give you access. Some of the subproject maintainers get the keys; for everything else, they email the test-infra oncall and they add those secrets to the default cluster. So that was how this was working.
F
In my opinion, that might be too much: if someone already donated something to the Kubernetes project and it has worked for many years, I don't think we need to add the extra overhead of having them go through processes, because they already did. They got the sponsorship for their projects, approached it for the Kubernetes project, and got things sorted out. I think we have many great examples, like DigitalOcean.
F
DigitalOcean is one of them, and OBS is a great example: OBS doesn't have anything official with the CNCF; it was basically done by SIG Release on their own. There are many cases where the SIGs go and directly negotiate something with someone who is going to donate; they get the access and they put it in the cluster. I think we should keep the same way of doing this and, eventually, if possible, try to get access and try to get contacts.
E
I think generally you're right. We don't stop projects or SIGs from reaching out to different companies and getting donations — that is your prerogative; please continue doing that.
E
With my CNCF hat on — and also caring about making sure this stuff doesn't just fall into a black hole — as long as these credentials, secrets, whatever, wind up in a place where more than a single person can use them, then yes, that's fine. And the minute these donors want to get some marketing support or work with the CNCF — yeah, please engage me. I'm essentially saying you're doing the right thing, Marco!
E
Please do that. Just keep in mind: if these places want to get marketing support from the CNCF — whether it be a blog post, an interview, whatever — make sure you point them our way, and then we'll take it from there.
F
Okay, yeah, that's something we can do. That way we can also maybe document it: if someone wants to reach out to donate resources, in a structured way, they can check out the link that Arno shared in one of the Slack threads — I don't have it handy right now — and we try to mention that in the guidelines. And if folks want to migrate, we migrate the secrets and we can work with them on that.
A
I think the one thing we need — the thing I want to be clear about — is community ownership, and currently that's not the case. In many cases those accounts / environments are company-owned, so moving those credentials outside of, say, Google is a potential security risk. That's why we want to rotate them: those are controlled by companies, and it's kind of difficult to reach out to people inside companies trying to get those secrets.
A
We have credentials for many platforms, but most of them are owned the same way. Take 1Password, for example: we reached out to the company, it gave us access, but we informed the CNCF that we use it. Even with Fastly that's the case: we reached out to Fastly, but we informed the CNCF that we use it. The main point here is community ownership.
A
Currently — I don't want to say rush, but there's no urgency to finish the Prow job migration. The fact that we just started the migration is a good thing; over the last year we didn't do anything, and right now it's just about moving forward. The email Dims sent last time was specific about one thing: currently we focus on jobs not relying on cloud assets.
A
So currently, if a Prow job requires no asset, no secret, nothing tied to something outside of the community infrastructure, we should do the migration, and we can revisit the remaining Prow-specific configuration later. Right now we move the things that are easy to move, then we do another iteration, revisit what's remaining, and make a decision based on that.
A
Okay, okay — we are out of time. Anything at the last minute?