From YouTube: wg-k8s-infra biweekly meeting 20200527
B
A
B
B
D
Bart, maybe you want to share the report here. Looking first at the books, I've, like, shared the screen, so green is the logging cost. We assumed it was node logging from clusters that were stood up for end-to-end tests, so I set up something to default logging to off for clusters that are stood up by our jobs, and that seems to have gotten rid of most of the logging noise. I'm not sure what the stuff that is still there is, but I think we're looking at all projects here.
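As a rough illustration of that default (not necessarily the exact change that was made, since the e2e jobs have their own bring-up scripts), standing up a test cluster with Cloud Logging off might look like the sketch below; the project and cluster names are hypothetical, and the flag shown is the legacy gcloud one for disabling logging at creation time.

```python
import subprocess

# Hypothetical e2e cluster bring-up: create the GKE cluster with Cloud
# Logging disabled, so short-lived test clusters don't generate log spend.
subprocess.run(
    [
        "gcloud", "container", "clusters", "create", "e2e-test-cluster",
        "--project", "k8s-infra-e2e-boskos-001",  # assumed Boskos-pool project
        "--no-enable-cloud-logging",               # legacy flag: logging off
    ],
    check=True,
)
```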
B
B
A
I've played with monitoring, so we have the service account created, with which I connected Stackdriver to the Grafana instance that we have for tests right now, and I just started digging in, playing a little bit with Stackdriver. Actually, I don't have a lot of experience with it and it's kind of tricky, so if you have any suggestions or advice I'm open to it. It's connected, it's working, so I'm just trying to create some dashboards that are helpful for us, and that's the update about the monitoring.
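For anyone wanting to reproduce that hookup, a sketch of adding a Stackdriver data source to Grafana through its HTTP API, using the service account mentioned above; the Grafana URL and API key are hypothetical, and the exact jsonData field names depend on the plugin version.

```python
import json
import requests

GRAFANA = "https://monitoring.example.k8s.io"  # hypothetical Grafana endpoint
API_KEY = "REDACTED"                           # a Grafana admin API key

# The GCP service-account key created for the monitoring hookup.
sa = json.load(open("stackdriver-sa-key.json"))

datasource = {
    "name": "Stackdriver",
    "type": "stackdriver",  # the Google Cloud (Stackdriver) plugin type
    "access": "proxy",
    "jsonData": {
        "authenticationType": "jwt",
        "defaultProject": sa["project_id"],
        "clientEmail": sa["client_email"],
        "tokenUri": sa["token_uri"],
    },
    "secureJsonData": {"privateKey": sa["private_key"]},
}

resp = requests.post(
    f"{GRAFANA}/api/datasources",
    headers={"Authorization": f"Bearer {API_KEY}"},
    json=datasource,
)
resp.raise_for_status()
```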
A
A
There is, like, not much progress to see. I mean, I'm not very confident about this yet, so I'm just playing. The next topic is the subdomain migration: as we saw last time, there was a person who did it, and we already have it deployed in the cluster. It's working, but I have a problem contacting this person, and probably the team could help, because everything is working and I need an answer on whether we can fully switch the subdomain, and this person is not responding.
A
B
A
D
D
Attempts to escalate internally got attention as well; either way, what we got back in return was a hundred projects, which will serve us for now. However, if this is the latency we can expect when we ask for projects again (because we will need more projects), maybe we should just go ahead and make a bigger ask now. The ballpark estimate I had for what we would need in total was about 325 to 350.
D
That was assuming that each of our, like, 160-ish repos asks for a staging project as a proxy sub-project. So that's 160, and then I ballparked by looking at all of the projects that are used by Boskos, guessed the max number there, and that led me to assume we probably need 325 max. So then, if we want some more breathing room, maybe we want like 400 in total, and since we now have a quota of 200, that would mean asking for 200 more.
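As a sanity check, the arithmetic as stated, with the Boskos component inferred from the difference rather than stated explicitly:

```python
# Ballpark from the discussion: ~160 repos each asking for a staging
# project, plus the e2e projects managed by Boskos.
staging_projects = 160
boskos_projects = 325 - staging_projects   # ~165, inferred from the 325 total
total_estimate = staging_projects + boskos_projects  # 325 ("325 to 350" max)
target_with_headroom = 400
current_quota = 200
print(target_with_headroom - current_quota)  # 200 more projects to ask for
```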
F
Yeah, I wanted to mention that I've requested. Sorry, Aaron, I noticed your GitHub comment on the issue after I requested, and I made a request for 350 projects. It's fine; I sort of checked the docs, the support docs. They mention that, like, if you are not satisfied with your current quota, you request again the same way, so I made the request and would like to wait for the response from them.
F
B
I have an open bug internally; I just don't think the process is working for this. I understand that we're a special case, in that the amount of projects and the spend that we're looking at actually probably should warrant having a sales contact, but because we're special, we don't. I'm trying to figure out if we can patch the crack that is apparent in the system.
D
And for what it's worth, it looked like this was blocking for staging projects, which I have since merged the PRs for and moved on. Although one of them was etcdadm, and it was actually unclear to me whether that was an ask for a GCS bucket or if it was asking for a staging project and GCR repo as well. I haven't had time to look at the issue to see if somebody's commented this morning.
G
Hey guys, I just wanted to give a quick update on the status. Just to recap: the flip happened a while ago, but it was reverted, or rolled back. Since then, we've been looking at various dependencies inside Google, working with teams to get it done again a second time. We are, like, in the final stretch right now; I have a meeting later today with another Google engineer to discuss an internal bug that was fixed.
G
Hopefully, with that meeting, I can give a clearer picture on when we can attempt this next. I was hoping to do this by next Monday, but it might be that the bug that was fixed won't get, like, fully rolled out until the end of this week, which is what the engineer was hinting at last time we spoke. But I will update the community again after today's meeting.
A
A
D
A
H
B
H
H
E
We run Triage Party on Kubernetes for cert-manager at the minute. I think, yeah, definitely starting with just one would make sense. The only other thing I can think of that's maybe problematic is that we need to have a GitHub token for it to be able to do things, unless something's changed recently.
E
E
Think of it like it has that kind of state, which is kind of what they call the cache file. So I think in some of the deployment examples they bake that into a Docker image, or you could theoretically mount that in, which effectively allows it to almost fast-forward to that point in terms of state. Even on cert-manager it takes about 10 to 15 seconds to start up, and a repository the size of Kubernetes is probably about 10, 20 times bigger, so...
H
About the database, you can start with the Postgres deployment in a Cloud SQL instance, and about the token you need: in GitHub it can be read-access only, so you're right. It may take a while; I tried on my personal GKE cluster and it took me like six minutes to pull everything, everything in the k/k repo.
D
D
D
D
Exactly how to move the Prow control plane, since I feel like moving the jobs is enough of a heavy lift. Excuse me, I've scoped out some of the follow-on work from the initial stuff that created a build cluster. So I have a PR out to address setting up Greenhouse; Greenhouse is our Bazel cache. With Greenhouse in place, we can run a bunch of jobs related to kind and a bunch of presubmits.
D
Basically, what I'm trying to do right now is whatever it takes to get each of the jobs on the release-master-blocking dashboard to run on our Prow build cluster. I have 11 out of 18 done. The tricky issue now is that some of the jobs assume they have write access to a Google Cloud Storage bucket called kubernetes-release-dev. This is something we're going to have to work on in concert with the Release Engineering team, who own kubernetes-release-dev.
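For reference, granting a build cluster's service account write access to such a bucket could look something like the sketch below; the service-account email is hypothetical, and the exact role would be Release Engineering's call.

```python
import subprocess

# Hypothetical: grant a build-cluster service account object-admin access
# on the kubernetes-release-dev GCS bucket via gsutil's IAM support.
service_account = "prow-build@k8s-infra-prow-build.iam.gserviceaccount.com"  # assumed
subprocess.run(
    [
        "gsutil", "iam", "ch",
        f"serviceAccount:{service_account}:roles/storage.objectAdmin",
        "gs://kubernetes-release-dev",
    ],
    check=True,
)
```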
D
A
D
A
It's tricky, because it is of course possible, but it is, like, kind of not easy to maintain the code right now, so I decided to do this refactoring, even if it's taking a little bit more lines, or making more changes than it should have. I feel like it's much easier right now to at least understand the code.
A
But I'm not, you know, like, emotionally attached to it, and if we decide it's unnecessary to have a refactor, I will just do, you know, some hacks to change the script itself, because I am also not that much confident anymore about this refactoring I'm talking about, about the code and about the way of doing it; I'm not, kind of, confident it's needed.
D
A
B
I'm happy to take a look. I haven't had a chance in the last week or two to get back to this topic, but I'm happy to take a look. If you think refactoring it will help in terms of maintainability, like, I'm all for it. It's been a while since I touched that, for all bash scripts go through this life cycle, right, where they start off small and then they get big, and then somebody goes: oh my god, what are we doing?
B
A
It was that the Docker dependency was hard-coded and it was tricky to use the automation, so I abstracted it away a little bit and actually moved the Docker part to the make commands for our local use. So from that perspective it definitely is much easier to maintain, but, as I said, feel free to give your opinions. I'm like, it's great, cool.
A
D
Alright, this was: do you need the keys that dims already created and handed off? Do you need that from him?
A
We really don't, but if something changes, the flow changes, or anything changes, it will be easier to have it consistent, especially when there are only three of those already created; at least that's my opinion, but I can proceed either way. That's not a blocker or anything; I just think for consistency purposes it's good to have them there.
A
D
D
Okay, on this one: I went through and did it manually for a pool of five projects just to test it out, and it's a small enough quota bump that it got approved automatically. It took me a while, but if I look at the capacity that we have over in the google.com Boskos instance, we have a pool of 40 projects set up to handle presubmit testing, which will be a fun day of copy-pasting and clicking if that's what we've got to do.
D
It also sounds like the scalability team is concerned that they will lose troubleshooting and debugging visibility into the set of scalability projects that they currently use. So I'm working with them to establish, like, what IAM role they think they need attached to this account and what privileges it should have. My default was going to be: sure, give me the names of the people from SIG Scalability who support these tests, and we'll make a Google group for them and give it project viewer access.
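A minimal sketch of that default, assuming a hypothetical group name and project list:

```python
import subprocess

# Hypothetical group and project names for illustration.
group = "k8s-infra-sig-scalability@kubernetes.io"
projects = ["k8s-infra-e2e-scale-project-01"]  # the scalability e2e projects

# Grant the group read-only (project viewer) access on each project.
for project in projects:
    subprocess.run(
        [
            "gcloud", "projects", "add-iam-policy-binding", project,
            f"--member=group:{group}",
            "--role=roles/viewer",
        ],
        check=True,
    )
```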
D
I already walked through the master-blocking jobs. Setting up Greenhouse: it's up. We have the nodes provisioned, I have the pull request out, rebased since the other one landed. I just need to double-check that everything's working correctly here, so I hope to have this closed up within the week.
D
A
E
D
D
B
D
A
D
A
Built as an image: the thing which is coupled is the image building, which right now lives inside the testing infra. So this can just be decoupled, you know, to just build this as an image; but as far as I see, the tool is not coupled to work only on the test-infra repository.
D
D
D
F
A
D
A
I think it was related to me not understanding how we should move the project, how we use the DNS and everything. Right now I know how to do it and how many steps need to be done to move the project: we use the DNS to create the ingress, to create a certificate for services, and this, I think, definitely needs to be written down.
D
E
Just catching back up on this issue. Yeah, just basically making it easier to self-service. I think what it comes down to is just setting that annotation on the ingress resource that they themselves control, to point it to the name of, like, their own ingress resource that they're editing, or adding the edit-in-place annotation. Actually, if they're managing it manually, then we don't need ingress-shim.
E
So it's a case of basically specifying, in an annotation on the certificate, the name of the ingress resource that should be edited to insert the rules. And it is a particularly weird area of the cert-manager API right now, because you define an issuer with a name on the, like... sorry, you define the name of the ingress to edit on the issuer resource, which is effectively a default for those that don't override it explicitly with another. But I don't think that's really a problem.
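For anyone following along, a rough sketch of the edit-in-place pattern being described; all names below are hypothetical, and the two annotation keys are cert-manager's, but whether this is exactly the setup needed here is an open question.

```python
# Sketch: an Ingress that cert-manager's ingress-shim watches. With the
# edit-in-place annotation, the ACME HTTP01 solver rules are inserted
# into this same Ingress instead of a temporary one.
ingress = {
    "apiVersion": "networking.k8s.io/v1beta1",  # current API at the time
    "kind": "Ingress",
    "metadata": {
        "name": "my-app",       # hypothetical
        "namespace": "my-app",  # hypothetical
        "annotations": {
            # Which issuer ingress-shim should use (hypothetical issuer name).
            "cert-manager.io/cluster-issuer": "letsencrypt-prod",
            # Edit this Ingress in place for the HTTP01 challenge.
            "acme.cert-manager.io/http01-edit-in-place": "true",
        },
    },
    "spec": {
        "tls": [{"hosts": ["my-app.example.org"], "secretName": "my-app-tls"}],
        "rules": [],  # the owner's own routing rules stay under their control
    },
}
```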
E
E
A
D
A
D
G
D
D
D
B
Sorry, this is about the vulnerability scanning of container images and how to publish the vulnerability findings. It's unfortunately highly linked, like, it requires people being in a service account, and I was unhappy with that and I want to dig into it further, but I have not had a chance to do this. But, like, I imagine all of the hosted providers will want that info stream, so they can know, you know, if they're pulling anything from upstream repositories, whether there are known vulnerabilities.
D
B
The staging admin group, right; so if you own any staging repo, you should be able to see the vulnerability reports. Basically, we want to turn vulnerability scanning on on the main repository, not on the staging repositories, because vulnerability scanning is a paid service, so we don't want to scan the same images twice just because
B
we copy them to a different repository. I think so; if we just give everybody access to the vulnerability findings on the main repository, for now at least, the staging owners can say: oh no, I have a vulnerability in my thing. But ultimately we want this to be a more public thing, I think, but we need to actually talk with the project folks, okay.
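A rough sketch of what "scan only the main repository and open up the findings" might look like; the project and group names are hypothetical placeholders.

```python
import subprocess

# Hypothetical project/group names for illustration.
prod_project = "k8s-artifacts-prod"
viewers = "group:k8s-infra-staging-owners@kubernetes.io"

# Enable Container Scanning (a paid service) on the main registry project only.
subprocess.run(
    ["gcloud", "services", "enable", "containerscanning.googleapis.com",
     f"--project={prod_project}"],
    check=True,
)

# Let staging owners read the vulnerability findings (occurrences).
subprocess.run(
    ["gcloud", "projects", "add-iam-policy-binding", prod_project,
     f"--member={viewers}",
     "--role=roles/containeranalysis.occurrences.viewer"],
    check=True,
)
```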
B
The vast majority of the vulnerabilities are in base images, not in leaf images, so it's already public information; like, the database that we're getting this information from isn't secret. Okay, so again, this is where I would defer to the security committee: what is secret information and what isn't. My assumption, and it may be bad, was that this isn't secret information.
D
D
D
D
Setting up ghproxy: ghproxy might help with Triage Party if token usage is a real, crazy problem, but setting it up is not something I plan on juggling this week. So I'd say we see how we get along with Triage Party as is, but if we find that token usage is still a problem, ghproxy may be useful.
D
D
A Prow job such that, like, when we change one of the resources, they just automatically get deployed to the cluster in question. This can be done now with Prow jobs that run on our trusted cluster; we just need to do it. It would save me pain: every time I make a change, I have to do it manually right now, so maybe I will be the person motivated to do this. There's prior art for how something like this would work in the test-infra repo, if anybody is interested.
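A minimal sketch of the core step such a postsubmit would run, assuming a hypothetical manifest directory and kubeconfig context; the actual prior art lives in the test-infra repo as mentioned.

```python
import subprocess

# Hypothetical: a postsubmit triggered when files under resources/ change;
# it re-applies the manifests to the cluster in question.
MANIFEST_DIR = "resources/"              # assumed repo layout
KUBE_CONTEXT = "trusted-build-cluster"   # assumed kubeconfig context

subprocess.run(
    ["kubectl", "--context", KUBE_CONTEXT,
     "apply", "-f", MANIFEST_DIR, "--recursive"],
    check=True,
)
```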
D
I opened an issue a while ago to propose that we stop using git-crypt entirely and use Secret Manager instead. I did a proof of concept using Secret Manager just for our groups reconciliation thing. Okay, dims left a comment that I used to touch up the README. Bart, I feel like there are some secrets related to, you know, slack-infra and stuff that are still living in git-crypt. Is that correct?
D
A
A
A
D
D
E
E
So yeah, this one came up because quay.io went out, so now everyone's asking questions about how we do stability of images. From the cert-manager side, it will pretty much cost us too much money to put things onto GCR ourselves. I think Cluster API obviously have a preference to have everything for Cluster API shipped through GCR, but otherwise we were going to investigate some kind of way to mirror, like, from quay.io to Docker Hub or some other free service.
E
B
I just, well, we were discussing, and I was poking into the billing stuff. I noticed that the staging CSI repo had a significantly higher bill, like 30 times higher than the next closest staging repository, and so I just dropped them a quick note as to why, and the answer was this quay problem. You know, the Americans say "kway", right, James?
B
B
E
Basically, I agree with your sentiment entirely. It's not something that I'd have thought to ask before, for exactly that reason: because it's a slippery slope, and, putting on the k8s-infra hat, like, hosting random other projects, it's very generous, yeah. It's not really our business. But yeah, I'm not too sure about the Cluster API side of things.
B
B
I think something like cert-manager, like, there is some principle; abstractly, I don't know how to express it, but there's some principle that says cert-manager is important and we should probably make sure that it doesn't go away. But I don't know what that principle is, so I don't know what other cases would fall into it. And very concretely, we do host CoreDNS, so, like, we were actually already on the slippery slope. Well, CoreDNS is CNCF, so there's some shadow of a principle there. Yep, yep.
E
I mean, I suppose there's a stop-gap: we can do things like pushing to Docker Hub or anything, that's not too difficult from our point of view, but it doesn't solve the ultimate form of it, you know, it actually being managed and controlled by k8s-infra. What do you think the best way to go about defining a criteria could be? Because it doesn't seem like an easy question.
D
This is just really tricky, because, and you can correct me if I'm wrong, I think the reason, like, etcd and CoreDNS can ask for this is because there isn't a Kubernetes without those; like, a certified conformant Kubernetes cluster kind of needs to have what Kubernetes comes with out of the box, right? That's etcd and CoreDNS; Kubernetes without DNS is not really a functional Kubernetes. I can't necessarily say the same thing about cert-manager.
E
B
And, you know, etcd and CoreDNS are both CNCF, so we can at least sort of close one eye to it and say, well, they're in the family, whereas cert-manager isn't. Not to say that it's less open; I don't wanna get into the politics of what is open and not. But, just, the billing account is allocated to the CNCF.
B
E
I don't think I can answer that accurately, because it would depend on whether this would be, more specifically, Cluster API's usage. If you're talking about the overall usage, you don't want to be picking up that tab, because, according to the quay image statistics, it's something like a million image pulls a day, which doesn't go down well. I think that's people setting up things like Harbor to actually mirror things across.
E
G
B
And unfortunately, GCS and GCR are fairly poor in their ability to deliver statistics like that. So I can tell you, on a cost breakdown, how much each staging project costs, but most people aren't serving out of their staging project; they're serving out of the main repository, which is currently mixed in with a whole bunch of other stuff. And once we finish the vanity flip, we'll just see a huge spike in the cost, and we still won't have really good data.
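For the per-project cost breakdown part, a sketch of the kind of query involved, assuming billing data is exported to BigQuery; the project, dataset, and table names here are illustrative, not the real ones.

```python
from google.cloud import bigquery

# Hypothetical: the billing account's export lands in a BigQuery dataset.
client = bigquery.Client(project="k8s-infra-billing")  # assumed project

query = """
SELECT project.id AS project_id, SUM(cost) AS total_cost
FROM `k8s-infra-billing.billing_export.gcp_billing_export_v1`  -- assumed table
WHERE invoice.month = '202005'
GROUP BY project_id
ORDER BY total_cost DESC
"""

# Per-project spend for the month, e.g. to spot a staging repo billing
# 30x more than the next closest one.
for row in client.query(query):
    print(row.project_id, row.total_cost)
```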