From YouTube: Kubernetes SIG Network meeting 20210218
B: Howdy. So I did some initial triage and pinged a few of the issues that hadn't really been updated since two weeks ago, and I have a whopping five for us to look at today. I also have a tab open with a pull request query that has, let's say, somewhat more than five for us to look at if we have extra time today. Starting with the most recent: an IPVS issue, something about terminating endpoints and black holes while pods are being recycled through the backends. The user says it works great with iptables and totally bombs with IPVS.
B: This was a fun one to read. They claim that they have two or three containers within a pod that talk to each other on localhost, and when they install a network policy their localhost connections fail. Which plugin they're using, they did not say, so that was my question. I thought I would just bring this up because it was a really fun bug report.
B: It's one of those that can't possibly happen, but it probably is, and we'll see what happens by next time.
B: We have a flake that has been open for a while. Catching up with the end of it, it looks like it was a reference to externalTrafficPolicy: Local. Jay, are you here?
C: Yeah, this is, I was working with white teach, and we found that a lot of tests were waiting only for pods to be Running, when they need to wait for Running and Ready, and we were changing a lot of those conditions. So assign it to me; I'm already working on it.
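For reference, a minimal sketch of the Running-versus-Ready distinction being described, using an illustrative helper name rather than the actual e2e framework code:

```go
package example

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

// waitForPodRunningAndReady polls until the pod is both phase Running and has
// the Ready condition set to True. A test that only checks for phase Running
// can observe a pod whose containers are not actually serving yet.
func waitForPodRunningAndReady(ctx context.Context, c kubernetes.Interface, ns, name string) error {
	return wait.PollImmediate(2*time.Second, 5*time.Minute, func() (bool, error) {
		pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		if pod.Status.Phase != corev1.PodRunning {
			return false, nil
		}
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady {
				return cond.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	})
}
```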
B: On you, okay. And then this one is really old, but I left it in the queue anyway because I thought it was maybe worth discussing today if we had time. This is the old "replace/update does not work for Services" issue, because of the immutable fields and clusterIP. It's an old bug report that we've heard many, many times, though there's a reasonable argument that maybe we should fix it.
B: And it's kind of a long one to read, so I won't read it all here. Go ahead.
B: No, I mean, this particular issue is, I guess, that Helm is using PUT instead of POST, and when they PUT twice it fails, because the way we do allocations, we don't backfill the IP address into a subsequent PUT. That's the issue. We could basically turn PUT into something like a small patch by saying this was an allocated field, so we'll bring it forward and apply it, but we don't currently do that.
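As a rough, purely illustrative sketch of that "backfill" idea (not the actual apiserver registry code), the update path could carry allocated fields forward when a PUT omits them:

```go
package example

import corev1 "k8s.io/api/core/v1"

// backfillAllocatedServiceFields carries server-allocated, immutable fields
// forward from the existing object when a PUT omits them, so the update
// behaves like a small patch instead of failing validation on clusterIP.
func backfillAllocatedServiceFields(newSvc, oldSvc *corev1.Service) {
	if newSvc.Spec.ClusterIP == "" {
		newSvc.Spec.ClusterIP = oldSvc.Spec.ClusterIP
	}
	// The same idea could apply to other allocated fields, e.g. NodePorts.
	oldNodePorts := map[int32]int32{}
	for _, p := range oldSvc.Spec.Ports {
		oldNodePorts[p.Port] = p.NodePort
	}
	for i, p := range newSvc.Spec.Ports {
		if p.NodePort == 0 {
			newSvc.Spec.Ports[i].NodePort = oldNodePorts[p.Port]
		}
	}
}
```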
B: Okay, I'm happy to talk more about it, but that's it for issue triage. There were not very many issues filed in the last cycle, so let's do whatever is on the agenda and then, if we have time left, we can come back to these or to PR triage.
A: All right, we do have some agenda items now, so Lachie, you're up.
D: Oh okay, yeah. I just did a quick stock-take because I was looking at the 1.21 release, and there are 10 tracked enhancements coming out of SIG Network, which is fantastic. And, as anybody who's worked on enhancements knows, they've changed the process a little bit in 1.21 and made the production readiness review mandatory now, which adds a little bit more work.
D: So thank you to everybody who reviewed things and got that in. That's 15 percent of all enhancements, second only to Node, which always has a big barrage of them. I was just flabbergasted by the amount of work happening in SIG Network, so congratulations; that's head and shoulders above most other SIGs, I think, and it's across all areas: load balancers,
D: EndpointSlice going to stable, dual-stack going to beta, there's a whole bunch of things in there, which is great progress. Kudos to SIG Network; this is a lovely, well-oiled SIG. I attend a lot of SIGs, so I'm glad.
B: Thank you. I'm surprised to hear you say that we're lovely and well-oiled, because I spent a bunch of time today trying to figure out how to get what feels like sand out of the machinery. It feels like we could be a lot more well-oiled than we are. So first, I want to echo that: yes, tons of awesome, great work happened.
B: I was very busy reading KEPs in the last few weeks, so thanks to everybody who was bombarding me with them; they all look really good and I'm very happy with them. I should put on the agenda that I'd like to talk about what we want to do as a SIG for procedural stuff: whether we want to use project boards, and what hoops we're willing to set up for ourselves to jump through in order to get the well-oiled machine to be more well-oiled.
F: Yeah, this is not particularly new; this just seemed like a good time to actually cover it and see if there were ideas out there. If you've used a Kubernetes cluster with the EndpointSlice controller enabled and you have a non-trivial number of endpoints, you have probably seen an event on Services that says "failed to update endpoint slices". This event is annoying, and there's an open issue related to it. It does not necessarily mean anything broke, but I would love to find a better solution.
F: The EndpointSlice controller works very similarly to the Endpoints controller, in that there's a syncService function, basically a loop that runs any time anything around a Service changes and updates any endpoint slices it thinks need to change. The problem is that all of those updates are based on the informer cache, and the informer cache can get out of date. So what we're seeing when those events come through is that the EndpointSlice controller tried to update stale endpoint slices based on its copy of the informer cache.
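A simplified, illustrative sketch of the failure mode being described here; the names are placeholders rather than the real controller code:

```go
package example

import (
	"context"

	discoveryv1 "k8s.io/api/discovery/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/labels"
	"k8s.io/client-go/kubernetes"
	discoverylisters "k8s.io/client-go/listers/discovery/v1"
)

// syncService recomputes the slices for one Service using the informer
// cache. If that cached copy is stale (for example, it does not yet reflect
// our own previous write), the Update is rejected with a conflict, which is
// what surfaces as the "failed to update endpoint slices" event.
func syncService(ctx context.Context, client kubernetes.Interface,
	sliceLister discoverylisters.EndpointSliceLister, namespace, name string) error {
	selector := labels.Set{discoveryv1.LabelServiceName: name}.AsSelector()
	// Listers serve from the local informer cache, which lags the API server.
	slices, err := sliceLister.EndpointSlices(namespace).List(selector)
	if err != nil {
		return err
	}
	for _, cached := range slices {
		desired := cached.DeepCopy()
		// ... recompute desired endpoints from the pod cache ...
		_, err := client.DiscoveryV1().EndpointSlices(namespace).
			Update(ctx, desired, metav1.UpdateOptions{})
		if apierrors.IsConflict(err) {
			// The cached copy was older than the server's copy.
			return err
		}
		if err != nil {
			return err
		}
	}
	return nil
}
```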
F: I linked to a PR where I tried to kind of delay those things until we thought we had an up-to-date version. You basically watch for EndpointSlice updates to come in through the informer cache that match what you previously tried to do, but sometimes those events can get merged together, and so you don't always get all the events you expected to get.
F: I've talked to various people, API Machinery, etc. It doesn't seem like there's an amazing answer here, but unfortunately that's left us in a place where, since there's no amazing answer, we have no answer, and it feels like we should try to do something; I just don't know what it is. The event is annoying, but as far as I can tell nothing is actually broken here.
F: It's just that the EndpointSlice controller is doing more work than it needs to, because it's trying to update before it has a complete picture, and, in addition to that, users are seeing these events that seem problematic and our answer is "no, just don't worry about them", which is not great. So any ideas at all would be welcome; I'd love to get some kind of fix in for the 1.21 cycle, but I've been struggling to find a good one.
F: I think what that would mean is you'd often be doing one sync based on the cache and then one sync based on an API call, so you'd basically be bypassing the cache a lot of the time, depending on how frequently this happens. Which is better than surfacing an error every time, but I don't know; I need to think about that more.
F: Yes, yeah. We respond to lots of different events: we trigger a syncService any time an EndpointSlice changes in a way we didn't expect, or a Service changes, or, more specifically, a pod changes. So what's likely happening is that a pod backing a service triggers a syncService call, which basically throws that service onto the queue, and that does not mean we have an up-to-date version of the endpoint slices in the cache.
F: I think it's just that it has not been reflected in the cache yet. So one option is just to limit the frequency with which we can sync a specific service. But what could happen is you update all the endpoint slices for a service and, as that's happening, a pod update happens, so right away that service gets queued again and those endpoint slice updates have not made it to your copy of the cache.
D: Yeah, I think at scale you're just going to have problems putting watches everywhere and going around the informer; basically it doesn't scale, and it puts more load on the API server. Yeah.
F: And there's this "do it again", right, exactly, and so this eventually resolves. I have yet to see a case where it's constantly in an error state, but it's relatively frequent, especially in big rolling updates where lots of endpoint slices are changing and you're calling that syncService a lot; you see this event over and over again. It eventually solves itself, but it's not pretty.
D: Yeah, if endpoints are consistently changing, and I've seen it in ML workloads, massive ones where you just scale out and back and you're going like this over and over. I guess if you lock-step intervals and caches, maybe your controller with the informer could at least snapshot at a given interval where they're in lock step, because I know in the big services they started doing that.
B: Probably. In this case it's not even as bad as that, though, right? There aren't two controllers, there's only one controller; it's just that we're not writing back through our own cache. So we write, and then the response to that write, the event, comes into our watch queue, but in the meantime we're processing other events which could be in conflict with the thing we just wrote. If we were able to write through the cache, then we would at least see our own updates. That's potentially a problem.
F: Let me back up and say that the fix I initially proposed for this is ugly, but what it was: we already have a tracker that looks at the EndpointSlice resourceVersion that we think we should have and compares it to events coming back. So when we see an EndpointSlice update, does this match what we already wrote? If it does, we're good. So basically I waited until we had received events for every EndpointSlice resourceVersion we expected for a service, and then triggered the sync.
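A rough sketch of that tracker idea, with illustrative names; this is not the code from the actual PR:

```go
package example

import "sync"

// expectedSlices records the resourceVersion we expect for each EndpointSlice
// we wrote, per service, and tells us when the informer has caught up.
type expectedSlices struct {
	mu      sync.Mutex
	pending map[string]map[string]string // service key -> slice name -> resourceVersion
}

// ExpectUpdate is called after a successful write to the API server.
func (t *expectedSlices) ExpectUpdate(serviceKey, sliceName, resourceVersion string) {
	t.mu.Lock()
	defer t.mu.Unlock()
	if t.pending == nil {
		t.pending = map[string]map[string]string{}
	}
	if t.pending[serviceKey] == nil {
		t.pending[serviceKey] = map[string]string{}
	}
	t.pending[serviceKey][sliceName] = resourceVersion
}

// Observe is called from the EndpointSlice informer handler; it clears the
// expectation once our own write shows up in the cache.
func (t *expectedSlices) Observe(serviceKey, sliceName, resourceVersion string) {
	t.mu.Lock()
	defer t.mu.Unlock()
	if t.pending[serviceKey][sliceName] == resourceVersion {
		delete(t.pending[serviceKey], sliceName)
	}
}

// Satisfied reports whether it looks safe to sync the service again. As noted
// above, watch events can be merged, so an expectation may never be observed
// exactly and in practice this would need a timeout or fallback.
func (t *expectedSlices) Satisfied(serviceKey string) bool {
	t.mu.Lock()
	defer t.mu.Unlock()
	return len(t.pending[serviceKey]) == 0
}
```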
G: If there's a big question mark around writing back to the informer cache, could you keep the informer cache intact but have a cache of just the content you've written? So instead of having to hit the API server, if you see a conflict from the informer cache, look in the cache of content you've written; at that point, I think, if you still have a conflict, it does mean that something else has changed it.
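A sketch of that suggestion, with illustrative names and the caveat noted in the comments (resourceVersions are opaque, so comparing them numerically is itself an assumption):

```go
package example

import (
	"strconv"
	"sync"

	discoveryv1 "k8s.io/api/discovery/v1"
)

// writtenSlices keeps the last object returned by our own writes, leaving the
// informer cache untouched.
type writtenSlices struct {
	mu     sync.Mutex
	byName map[string]*discoveryv1.EndpointSlice
}

func (w *writtenSlices) Record(slice *discoveryv1.EndpointSlice) {
	w.mu.Lock()
	defer w.mu.Unlock()
	if w.byName == nil {
		w.byName = map[string]*discoveryv1.EndpointSlice{}
	}
	w.byName[slice.Namespace+"/"+slice.Name] = slice.DeepCopy()
}

// Freshest returns whichever copy appears newer: the informer's copy or the
// one we wrote ourselves, so we do not act on a copy that is stale only
// because it predates our own update.
func (w *writtenSlices) Freshest(fromInformer *discoveryv1.EndpointSlice) *discoveryv1.EndpointSlice {
	w.mu.Lock()
	defer w.mu.Unlock()
	written, ok := w.byName[fromInformer.Namespace+"/"+fromInformer.Name]
	if !ok {
		return fromInformer
	}
	a, _ := strconv.ParseUint(written.ResourceVersion, 10, 64)
	b, _ := strconv.ParseUint(fromInformer.ResourceVersion, 10, 64)
	if a > b {
		return written
	}
	return fromInformer
}
```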
F: I've spent a while looking at this error, obviously, and it happens in most controllers; it just happens in the EndpointSlice controller way more frequently than in other controllers, because there are so many things that can trigger it and so many resources it's managing. But yeah, it does happen everywhere.
B: So we should probably take this to the mailing list or to an issue. Rob, do you want to explore that last idea? It's not terrible. I wonder what Daniel and the API Machinery folks would do to us if we proposed that.
B: I imagine what we'd ideally want is Daniel's approval that this isn't a terrible idea, and then we'd go do it, and if it turns out it was even better than not terrible, then we could generalize it. Aim high.
D: So what's the net effect here error-wise? Because I think that's the main user-facing thing: are they going to end up with two errors? "Hey, I looked at my cache and it was different from your cache, so I'm going to update with my cache." What are you going to throw back to users? Because it sounds like that error message isn't that useful at the moment.
F: Yeah, I think that's a really good point. I think we still want to error, or bubble up some kind of event, when we're unable to update an endpoint slice; we just want to dramatically reduce how frequently that happens. Hopefully this concept would almost, maybe entirely, eliminate the issue.
E: So Cal made an update to the API testing recommendations that I thought was really interesting and that we should definitely point out, given how many enhancements are coming out of the SIG right now. Which is to say: when you're, say, going from alpha to beta, don't make any assumptions about your gate being on or off. Be explicit, because being explicit is how you avoid unpleasant surprises.
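A minimal sketch of what "being explicit" can look like in a test, using the component-base feature-gate test helper and one of the SIG's gates from that era as an example; the test body itself is elided:

```go
package example

import (
	"fmt"
	"testing"

	utilfeature "k8s.io/apiserver/pkg/util/feature"
	featuregatetesting "k8s.io/component-base/featuregate/testing"
	"k8s.io/kubernetes/pkg/features"
)

func TestBehaviorWithGateExplicit(t *testing.T) {
	for _, enabled := range []bool{true, false} {
		t.Run(fmt.Sprintf("gate=%v", enabled), func(t *testing.T) {
			// Explicitly set the gate for this run and restore it afterwards,
			// rather than relying on the alpha/beta default for this release.
			defer featuregatetesting.SetFeatureGateDuringTest(
				t, utilfeature.DefaultFeatureGate,
				features.EndpointSliceTerminatingCondition, enabled)()

			// ... exercise the API and assert the behavior expected for this
			// explicit gate setting ...
			_ = enabled
		})
	}
}
```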
E: I don't know about you, but production systems I've run in the past definitely had unpleasant surprises based on my unstated assumptions being different from someone else's unstated assumptions. So anyway, I thought that was really interesting. It's just a very small update to the SIG Architecture documentation, but something we should all think about for the features we're pushing forward: exactly what assumptions the feature flag makes.
B: And as we review each other's changes, we should make sure that the flag is explicit as part of the test, and when the flags are active. Absolutely.
B: We have a very loose procedure in terms of what we do in triage. We do this every couple of weeks, and in between several people look at it and update their issues, but we don't have anything really tracking this. If somebody wanted to come in and understand how many open issues we have, what states they're in, and whether they're being worked on, we don't really have a way to answer that. Likewise for pull requests.
B: We don't have a clear state of: are we waiting for feedback, are we waiting for changes, whose court is the ball in? Nor do we really have a process for KEPs: which KEPs are attached to SIG Network, and which states are they in right now? KEPs are probably the best of all of these, because at least there's some tooling.
B: There's tooling that has been worked on, kepctl, to help us query these things, but there isn't a dashboard that shows us all the things that are in flight, which this past cycle really hurt me as I was trying to keep track of everything going on in the last week of the KEP freeze and trying really hard not to drop the ball on anybody. So I talked a little bit with the enhancements folks about the number of hoops that need to be jumped through for enhancements, and we're going to try to figure out if we can reduce the number of hoops there, and also about what other SIGs are doing for these same sorts of problem areas. They pointed me to some of the things other SIGs are doing, and again project boards came up.
B: So I spent some time today pulling up project boards and really trying to understand how they work with GitHub and what the limitations are. It's better than I thought it was, but it's still not completely automatic. So I guess the question I wanted to open here is: how much pain do other people feel from the lack of these sorts of dashboards, versus what sort of procedural hurdles are we willing to undertake? For example, we could change our triage process
B: ...so that the first thing we do is import all of the new issues into a project board, and then triage from the project board, right?
B: Being a lazy and forgetful git, I will often forget to do those state changes, and then we have things in the wrong states, which maybe isn't the end of the world. Anyway, I'm talking a lot. I wanted to open the floor for what people think about this and what they've done in the past, and honestly, if anybody's got deep experience with project boards, I would love to hear the stories.
I: Yeah, since other groups are using it, isn't there some automation that's going to be constructed around these?
B: I don't know. I was surprised to find that there isn't a way in GitHub to say "automatically add all issues with a label to a project board"; there just isn't a way to do it. There are third-party GitHub Actions that you can install and run, but there's nothing baked into GitHub for that.
B: I looked at this more today, and I don't know GitHub Actions very well, so I didn't fully process it, but the one that I looked at used some label information to move things into a project board.
B: It wasn't clear to me whether it runs periodically, or only on first creation, or on every delta. I just don't know GitHub Actions well enough to know what the lifecycle would be.
A: If people forget to do things, even if things are in the wrong state, most of the time somebody would catch that within a week or two. It's not that hard to also do a quick run over the issues and make sure the state's updated, and if people remember to update the state as they go through, we'll also gradually train people to do that over time.
B: Yeah, I saw some limitations that were baked in, like 2,500 issues in a single column; God help us if we hit that. But looking at what SIG Node set up, having more than 50 cards in a column is untenable; you can't really do anything with it.
B: Maybe we don't spend enough time on this stuff, and anything we did that forced us to spend more time on it would help, and this could be one of those things. But is there a dumber thing we can do that would let us spend more time on it? I don't fundamentally have a problem with the way we do issue triage now; it's not so bad, and as long as we do it regularly it doesn't get too backlogged. So I guess the question I would ask is: what questions are we trying to answer via a change here?
B: I have the general question of whether someone owns the overall direction of all the stuff being done for kube-proxy, because there are lots of issues and KEPs and they kind of cross streams, and I'm not really sure; that's a concrete question I had. Maybe this would help with that, maybe it wouldn't. Would it make sense to have, I mean, we can have as many project boards as we need, right?
B: Maybe it makes sense to have a separate project board that is just kube-proxy, with one column for KEPs, one column for issues, and one column for PRs, and then for that one component you can get a view of everything that's in flight.
B: Yeah, for SIG Windows we had this problem, sort of. I've been doing some stuff with them lately, and what we started doing is meeting up 30 minutes before the meeting for people who want to help manage that stuff, so there's kind of a meta-SIG meeting where you hang out for 15 minutes and just do that stuff. It just gives us extra time to commit to it. But yeah, sub-maintainers would probably solve it too.
B: There are probably a lot of ways. I mean, we don't have separate meetings for kube-proxy; we don't have a proxy working group meeting or something, right? Do we need one? Yes, that would be a solution. I was just asking Andrew this week what he thought about creating a subproject for kube-proxy. I don't know if anyone else has thoughts, but we both kind of thought it might not be a bad idea. We just, I don't know.
K: Yeah, adding to that: we're seeing a lot of CNIs in the ecosystem implementing their own service proxy, and so Jay had some ideas about whether we can have a consolidated or unified API that all the proxy people implement, sort of like conformance, where you can define what you need to implement from a data perspective to do the proxy properly. So I don't know.
K: Maybe that's one thing we could do if we kicked off a working group, but I don't know how important people feel that is.
C: We are missing something important: nobody is paying attention to the tests. I mean, I'm practically managing all the monitoring and all the jobs, and I have Jay and other people chiming in, but, for example, IPVS is the job where the quality is very poor. We have hacks, we have things failing. The Windows proxy, I mean, they have something hardcoded that is for UDP.
C: So we need to stabilize and have control of what we are running, and know that we are healthy, and then we can start to break down into different groups. But from other projects' experience, that's what is going to happen: every group is going to do their own testing, they all just do their own thing, and that's a recipe for disaster.
C: I mean, for this stability there is only one solution: looking at results and asking. The people who look at the results, they learn a lot; that is how we need to learn, and it is the best way to learn Kubernetes when you start. I didn't know anything about any of this two years ago, and now at least I can pretend that I know what people are talking about. So we should work on that before moving to full development.
B: Okay, we've got just a few minutes left today, so we should probably see if there's anything else that people want to go over. I'm going to keep thinking about how, or whether, to use project boards or some other mechanism.
B: We can do it across any repo within the org, I think. I was reading the limitations this morning; I think you can set up 250 repos in a single project board. All right, yeah.
B: All right, well, I'm not sure really what we should do here. There are 120 open PRs, 119 open, is it? We're obviously not going to read through them all. There are many that do not have an assignee, so maybe it's worth projecting them and just seeing if anybody says "oh, that should be assigned to me" or something.
B: All right.
B: All right, so these are in forward order or backwards, or rather most recent first. I'm just looking for ones that have no assignee. So, SCTP support to beta, that's probably just docs updates. Anybody want to volunteer? Holler, and we'll see if there's anything egregious as we go.
B: All right, what number is that? 99189. Oh, we're going to hit 100,000 this week.
A: What touches us? Is it just tests? At least it's tests; it refactors agnhost image pod usage in tests. That's the one, that third commit in there.
B: Come on, close... wait, another one from Justin; it's also tests.
C: I had to talk with him about it, because I modified the different functions; I think that's all right.
B: Oh, it's approved, there we go. Okay, I should just filter for approved next time. Delete... forget it. Logging migration: modified dockershim and networking parts. So what should we do?
B: They are, piece by piece, trying to convert to a structured logging format, structured rather than formatted logs, and so they're just asking people who own various sub-packages to read and review and make sure that the new log structure makes sense.
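A small illustration of the kind of change these PRs make (the exact call sites are the PRs' own, not reproduced here):

```go
package example

import "k8s.io/klog/v2"

// Before (formatted):
//   klog.Infof("Deleting endpoint slices for service %s/%s", namespace, name)
//
// After (structured): a constant message plus key/value pairs, so the log
// line can be parsed without regexes.
func logSliceDeletion(namespace, name string) {
	klog.InfoS("Deleting endpoint slices for service",
		"service", klog.KRef(namespace, name))
}
```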
B: So these sorts of changes are generally pretty easy to review; there are just so many of them.
B: The ultimate goal is that once everything is converted, we simply make the switch, and then we can talk about whether we replace klog with something else. But first we've got to get all of the call sites switched over.
B: Oh, it's assigned to you two, okay, cool. So we have a lot of open PRs, although not so many recently, so I guess that implies there are a lot of really old ones. Maybe it's worth coming back through from the other end next time and seeing how many of these old PRs we should just be closing out. But they're not actually that old; the oldest we've got is only a year and a half, two years.
B: Okay, we're at time. Anybody have anything else they want to throw out?