From YouTube: 20210210 SIG Arch Prod Readiness
B
Hello, everybody. This is the Kubernetes Production Readiness subproject meeting for February 10th, 2021. Let's get started; be nice to each other. So, feature freeze was yesterday and we had a lot of reviews. It looks like we divided them almost exactly equally between the three of us, basically 22 each, which is pretty good. Some of mine, I have to admit, were non-issues because they were tooling KEPs or something, but hey.
B
I had some hard ones too, and so I figured the only thing I had on the agenda today was a retrospective: how did it go for folks, and what can we do better? Maybe there hasn't been enough time to think about it yet, but here we are. One thought I had was... I put it on the agenda. Sorry, I have too many tabs open, which applies to all of us.
B
Yes, okay, so.
B
Yeah, the tooling KEPs, that's a good one! Okay, let's just take them in order. Oh, I should just display this now... I lost my window.
B
All right, there we go. So this one has been out there for a while, and I just thought folks here should look at it. We've known that there would be KEPs that are in a different sort of category, and so I'm suggesting process, feature, and tooling types for KEPs.
B
It definitely is. I just thought the folks on this call should know about it, and so I'm putting it out there; that's all.
B
I agree, yeah. Process ones, probably the freeze doesn't apply to, and sometimes the tooling ones too; the freeze doesn't really apply if it's not production code. It's not as critical that it follow the schedule the same way, but you also don't want disruptive tooling changes to disrupt the schedule, yeah.
A
I mean, if there's a big CI change, for example, that should probably be synced with the release schedule. So I guess it's hard to say; maybe that one's a maybe. Yeah, exactly.
E
You know, I will say I only had one where this was questionable. There was one that came in where it was, "hey, we want to add a verify test," and it referenced something that was a real code change; that part went through as a separate KEP, but the split made it real easy to approve. Do we have enough of them to make it an issue, though? That'd be like five percent for me; all the others were definitely worth looking at.
B
I think there were three or four on my list, some of which didn't go through yet, but of those three or four, there were two that were just purely tooling changes.
B
There's one that's about the KEP process itself. There's not a ton of them, agreed, there's not a ton of them, but you know, there's just some percentage. I don't know if it's a bigger question, but...
A
Thank you. It's been strongly suggested to me by a number of people, not just on this call, that I might be a good fit for the team, and so I asked my day job: I'm already doing a lot of KEP review, but this is a lot of extra work, do you want me to do it? And they said sure. So if y'all will accept me, I would be happy to step up. I mean, one of my concerns when we rolled this out for this release was: is the team too small?
A
It's going to block things, it's going to be miserable for that team, and at least, I guess, having me join hopefully addresses some of that. I don't want to point out problems and not be willing to help fix them. I will say I think it would be great if we could expand the team even more, but I'm not sure what that would look like.
B
Yeah, we have one person from VMware who's interested, but I haven't seen him around lately, so I'm not sure what's happening. But yes, we definitely want to expand the team, and your feedback was super helpful, actually, because we went and better documented it, and we still got... that's something on the...
B
If we're going to introduce additional changes in the process, we need to do a better job of advertising it, but your feedback there did get us to document it better and provide example PRs. I think it made people's lives a lot easier, and made our lives easier too.
F
So I think that, regarding documentation, there are still people who didn't know about things like the KEP template change and so on. I think there will always be people who don't really follow what is happening, so the question is rather how many such people there were or not.
E
I'll say one thing I took as good news: I have not gotten a flood of "you didn't get to my PR in time and now I need you to help me file an exception." I have actually gotten zero of those. I don't know...
B
So I would call that success, yeah, exactly. I think that's a good thing: we were not the bottleneck that Elana worried about, but it was, you know, a lot of...
A
I mean, it looked painful. I was very appreciative that you were not a bottleneck, and I know we were looking at turnaround times and whatnot, and Derek was saying that for SIG Node, PRR has like a five-day turnaround right now, so you've got to request it early. I think there are also tooling improvements that could be made there as well.
E
Yeah, we also had an issue where we required valid YAML, which seems very reasonable (surely you can describe this YAML correctly), but John found like five PRs that he was going through, like, "hey David, this one doesn't have anything, and I'm kind of busy; what are you doing?"
B
Yeah, so, exactly: the tooling didn't fail CI for invalid YAML, and on top of that the PR query relies on...
F
I think the new KEP metadata process will help a lot, because we will have much stronger validation of what is going into a particular release. You don't have to cross-check with some spreadsheets and so on. So I think that will hopefully change a lot here, yeah.
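As a minimal sketch of the kind of metadata check being discussed here, the idea is that CI fails fast when a KEP's metadata is missing the fields PRR review depends on, instead of silently accepting invalid input. The struct fields and messages below are illustrative only, not the actual kep.yaml schema:

```go
package main

import "fmt"

// KEPMeta is a hypothetical, simplified slice of KEP metadata.
// Field names are illustrative, not the real kep.yaml schema.
type KEPMeta struct {
	Title             string
	Stage             string // alpha, beta, stable
	LatestMilestone   string // release the KEP targets
	ProdReadinessFile string // path to the PRR approval file
}

// validate returns a list of problems; an empty list means the
// metadata has everything a PRR reviewer would need to get started.
func validate(k KEPMeta) []string {
	var problems []string
	if k.Title == "" {
		problems = append(problems, "missing title")
	}
	if k.Stage == "" {
		problems = append(problems, "missing stage")
	}
	if k.LatestMilestone == "" {
		problems = append(problems, "missing latest-milestone")
	}
	if k.ProdReadinessFile == "" {
		problems = append(problems, "missing prod-readiness approval file")
	}
	return problems
}

func main() {
	kep := KEPMeta{Title: "Example Feature", Stage: "alpha"}
	// In a CI job, any problems would fail the run instead of just printing.
	for _, p := range validate(kep) {
		fmt.Println("FAIL:", p)
	}
}
```

The point of running this in CI, rather than relying on reviewers, is exactly what is raised above: a reviewer's time is not spent discovering that the file has nothing in it.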
B
Yeah, what I ended up doing is running the tool and having it spit things out. I think both of you did too, David as well: spitting it out, putting it in a spreadsheet, and then iterating over that spreadsheet, because the tool takes kind of a long time to run and I can't put notes in the tool. So that worked better for me, but if there was a way to...
A
Yeah, mine's a mess too, but it's certainly not that bad. That's how I tend to filter these things, at least: I'll look at all of the open PRs with the label for that SIG, and what's assigned to me versus what's not, because I will remove myself as an assignee if it's not relevant.
F
Yeah, I think that once we have, or we already have, a validation that the PRR approval file for the target release is created. So if we were assigning PRs based on that, I think that would be helpful; the page with PRs assigned to me is definitely manageable for me, as opposed to GitHub notifications. Yeah, right.
B
Awesome. So, is that it? What's your...
E
That was my biggest thing from looking through and trying to figure out what people most have trouble with. There were occasional contributors who didn't really think about how the new APIs they were introducing might fail. So there's some friction with the "no, really, we need a metric for this." Yes, a little bit of friction with...
E
"No, really, you need a feature gate," right? Even though you have a flag, no one knows that flag is alpha. If you just put the flag there, they have no way of knowing, and having the feature gate there helps people know that. But the last one was that I would get answers back about how a cluster admin can make sure the feature was installed.
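The feature-gate point above can be sketched as a small toy in Go. A named gate carries its maturity stage and a default, so an operator (or the cluster admin mentioned above) can tell at a glance that a capability is alpha, which a bare command-line flag does not convey. This mirrors the shape of Kubernetes feature gates but is a standalone illustration, not the real k8s.io/component-base/featuregate API, and the gate names are made up:

```go
package main

import "fmt"

// FeatureSpec records what a bare flag cannot: the feature's maturity
// and whether it is on by default.
type FeatureSpec struct {
	Default    bool
	PreRelease string // "ALPHA", "BETA", "GA"
}

// Gates maps feature names to their specs.
type Gates map[string]FeatureSpec

// Enabled reports whether a named feature is on; unknown gates are off.
func (g Gates) Enabled(name string) bool {
	spec, ok := g[name]
	return ok && spec.Default
}

func main() {
	gates := Gates{
		// Hypothetical gate names, following the convention that
		// alpha gates default off and beta gates default on.
		"MyAlphaThing": {Default: false, PreRelease: "ALPHA"},
		"MyBetaThing":  {Default: true, PreRelease: "BETA"},
	}
	fmt.Println(gates.Enabled("MyAlphaThing")) // false: alpha, off by default
	fmt.Println(gates.Enabled("MyBetaThing"))  // true: beta, on by default
}
```

The design point is that the maturity label travels with the feature itself, so enablement, documentation, and removal can all key off one registry rather than scattered flags.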
E
But if you look at something like, say, a Windows privileged container, the person who actually creates that pod needs to know whether it worked or not too, and there were a couple of enhancements that had difficulties around "was the user's intent fulfilled," right? There were a couple with storage that came back and had difficulty, and the Windows containers one. I think I saw it on at least maybe three or four out of about 25.
A
I also definitely saw a lot of "but it's a command-line flag, we don't need a feature flag, because you'll know if you specify the flag." A thing that I ran into a little bit of difficulty with when I saw that sort of thing is that I didn't have documentation to point to saying: this is how you generally need to lifecycle adding features, like it has to have a feature flag, and then this happens when it goes to beta, and this happens when it graduates.
E
I think a doc, if we wrote it, would end up having to be nuanced. I can be convinced that, say, a new flag in kubectl could have an alpha feature gate: a gate on a flag to kubectl that controls a query that it sends out, right? Think about how we did dry run in the past, where everyone came in... it didn't need to have a feature gate in kubectl. So there needs to be a line drawn.
B
...but you shouldn't be doing metrics for user error, in my...
A
That's another thing that we really don't have a lot of documentation for. I mean, there exists a SIG Instrumentation guide for this, but it's not really being constantly maintained, and there's certainly no one enforcing it, if that's going to start happening through PRR review. I have been trying to find more people to work on this sort of thing, but I don't think there's anybody evangelizing best practices for what you do with metrics, what you do with events, what you do with verbosity logging, that kind of thing.
A
I mean, there's a thing in the kubernetes/community repo; the problem is that it's not necessarily a hundred percent up to date.
A
...that your metric is useful, or something like that, which is a big issue that I've seen with a lot of the incoming metrics: they're kind of useless. So it's like, oh great, you ticked the box that you have an SLI, but it doesn't tell me anything; I'm never going to use that as a cluster operator. And now we're saying they have to add metrics, but if nobody's going to look at it, it's just another metric.
A
It's just, you know, yet another time series adding load on your Prometheus for questionable benefit.
A
I'm looking it up, seeing if I can pull up the link for instrumentation.
F
Yeah, from my PRs, I think the metrics part was the biggest thing; it was problematic in like half of the PRs, or something, that I was doing, yeah. The second thing was basically about how the rollout can fail, where most people, probably even more than half, were initially saying "it can't fail." Yeah, which is... and most of... like I, for example, don't...
B
The metrics thing, yeah, and thinking through rolling out, disabling, and then re-enabling in a large cluster that's full of active workloads. I think that was a struggle for a lot of folks.
E
That's where a lot of my "is the intent working or not" came from. It would be a case of, well, it was on and then you start turning it off, or it's half on because we're rolling out and the nodes are coming on, and sometimes when you get scheduled it works, and sometimes when you get scheduled it doesn't. That was a challenge.
E
I do grant some slack for that, because it doesn't happen very often, right? You're not going to be in that state very long, you hope, and as long as you can recognize it with an event or your actual create failing, it's okay. You know... Alana, or Elaine, Elena? I feel bad; I interviewed you, Elana, so...
E
You work with SIG Node, right? One of the things that came up a few times was that there's no capability checking for "does this node support the feature that's trying to be used," so I'm not sure how a scheduler can make a proper decision in a...
A
...capable... I have no idea if that's... I mean, I guess that would be a SIG Node thing, yeah, if there's anything like that in the cards. SIG Node has many problems, of which I guess that could also be one.
A
I would see people coming up with metrics that were really easy to measure and then sort of trying to pigeonhole them into SLOs after. I would like to see, for maybe future questionnaire updates, putting the SLO stuff first, forcing them to think about how they actually want their feature to be used, and then trying to figure out what metrics we would use to determine that it was actually working correctly or that it was healthy. Because I was definitely seeing a lot of, well...
A
"I added this metric because it was really easy to measure," but it's not necessarily a useful metric. I think that maybe, if we switch the order, it might help prime people's thinking to reduce that a little.
E
I'll have a stab at that, since it was one that I observed. Okay.
B
Do we need... and maybe some of these documents exist, but I wonder if it would be useful for people to have either links out to, or some reference where we describe, when we talk about enablement, disablement, and upgrade, what the upgrade process looks like in a typical large cluster. You know, I'm sure we've got that documented somewhere, but I don't want people to read three pages; I want them to read one paragraph, and maybe we can summarize some of that so that it's like...
B
...the mindset. Imagine that it's Friday and you've got the pager and you want to go home: are you going to go and enable this feature? What are you going to look at to make sure that you know you can leave, kind of thing? I just think people aren't used to thinking that way, and a little blurb writing it up might help them think it through: okay, you know, we're going to upgrade the masters (I should say the control plane nodes), and then we're going to...
E
I know that a lot of people who do deployments like to have plus-or-minus-one version skew and to be able to do that in a structured way, but I don't know if everybody does. Maybe the minimum bar would be: think about how this works in an HA environment where half your kube-apiservers are updated and half...
B
The DNS begins returning the secondary address family immediately, but you don't have endpoints and you don't have iptables wired up on that address family. So all of a sudden you've got 10,000 clients, and potentially half of them, just because of the way DNS works and how frequently they look it up, might start going to the new family and absolutely crush the one little endpoint out there that's actually been wired up, right? So it's a transition period.
E
Yeah, I guess it's worth... I guess you write five sentences and we write five sentences, and we'll see if our five-sentence upgrade stories are close enough. Yeah, I think an HA control plane is probably the minimum threshold, yeah, that we can probably all agree on.
E
And then basically, I guess, you know, we see a certain number of nodes drained; with time, as each one is drained, they get upgraded, roll out, and continue on. There's sort of a canary stage, and then a bulk stage, and then a cleanup.
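A rough sketch of the staged node-upgrade shape just described (canary, then bulk, then cleanup), as a toy model. The function names, log format, and stage sizes are all illustrative assumptions, not any real upgrade tool:

```go
package main

import "fmt"

// upgrade models the flow described above: drain and upgrade a single
// canary node first, then roll through the remainder in bulk, then clean up.
// The one-node canary is an illustrative choice; real rollouts may watch
// the canary for some time before proceeding.
func upgrade(nodes []string) []string {
	var log []string
	stage := func(name string, batch []string) {
		for _, n := range batch {
			log = append(log, fmt.Sprintf("%s: drain+upgrade %s", name, n))
		}
	}
	if len(nodes) == 0 {
		return log
	}
	stage("canary", nodes[:1]) // upgrade one node first and watch it
	stage("bulk", nodes[1:])   // then roll through the rest
	log = append(log, "cleanup: remove old artifacts")
	return log
}

func main() {
	for _, line := range upgrade([]string{"node-a", "node-b", "node-c"}) {
		fmt.Println(line)
	}
}
```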
E
That's all I had. I'm glad feature freeze, or KEP freeze, was yesterday.
G
Yeah, so I'm currently working at AWS on the EKS team, and I'm just curious; I never joined this meeting before, but I just got curious about SIG Architecture. So I'm here, just trying to listen.
G
I've worked with Envoy a bit at Lyft before, for the last two or three years, and now I'm at AWS, trying to contribute to Kubernetes, so yeah. That's great, awesome. Welcome.
A
The official, like, I think the wider SIG Architecture meeting is tomorrow, and I don't know what time zone you're on, but if you are on Pacific time, like me, yeah...
B
This release was the first one where we really made people answer these questions in their features, and then we go through and, you know, try to make sure that they've answered them well, and make changes if they don't. So, you know, yeah, welcome. Tomorrow, like Elana said, is the bigger SIG Architecture community meeting, which you can find the agenda and everything for.
A
So I had a quick question about recruiting. I don't know; I haven't been looking at basically any KEPs that were not in either Node or Instrumentation.
A
Are there other people who have been doing some of the PRR review that we could reach out to, to ask about interest in joining the team? Because I think it would be ideal if every SIG had somebody on the PRR team that they sort of designated, as in, "yes, this person is good for network PRR review," or so. I wouldn't necessarily be able to do PRR for a network issue very well, so I don't know.
E
If you've experienced running clusters, even if you don't have a networking-specific background, wouldn't you want to have confidence that they managed to explain it in a way where you can understand: okay, I know what it does, I know how to measure whether it's working or not, and I know how to turn it off if it doesn't work anymore?
F
But I think actually that there is value in someone from outside the SIG doing the PRR for that SIG, because if you are part of the SIG, you are taking part in all the discussions about the particular feature, so you are already kind of biased towards some thinking. Having someone from completely outside actually has benefits, in my opinion.
B
So we've tried. I think it's just some evangelism we need to do over the next... when I'm in a meeting...
A
I can try to evangelize, if you will. I think there's a #sre channel in the Slack that might be a good place to advertise (absolutely, yeah), potentially externally. So my other question is: as, I guess, an SRE or something like that, one thing that I also consider part of production readiness is security considerations.
A
That's currently not part of this review process. Is that something we want to think about in the future, and who would we get involved, if there's anybody we'd need to poke about that sort of thing?
E
I guess this is more, from my point of view, like a summary from the KEP author or, you know, the SIG, to say: hey, this is how we suggest you debug it.
A
I mean, maybe this is not right for this group, but right now, at the enhancement level, we don't really have a "well, if you add this for alpha, is this introducing a big ugly security hole?" Because certainly there's been some discussion in SIG Node meetings about, you know, the kubelet taking a user-specified input of an arbitrary binary and then executing it, and there are certainly security implications there.
A
But you know, it wasn't a large part of the review process, and there was a lot of "oh, but it's just alpha." Well, even if it's just alpha, if it's turned on, you can potentially use it. So...
E
I think we generally have trusted the SIGs to engage the right people, and they've done a pretty good job, though the ones that I've seen get away from people have been mostly around SIG Scheduling and denial-of-service concerns, where you can do all sorts of weird platform-denial stuff because of the way that scheduling has affinity to nodes and less affinity to namespace sorts of things.
E
I don't know right now where I would come down on a stand for that, because of the way someone is opinionated around "well, we deployed like this and we don't care," or "if you deploy like that, it doesn't matter," and those considerations are real.
A
Yeah, I just added a note saying I think there's agreement that this is maybe not the right forum for it, but hopefully that it is valuable. I don't know if maybe that's something that SIG Security can have input on, but I feel like it's a bit of a gap, right?
B
Yeah, we did fold scalability into this process, you know, but I'm kind of right about where David is: I'm not sure it's the right place, and I'm not sure if it's a good idea, yeah.
F
Yeah, we definitely would also need another set of people; I don't feel competent to review any security-related stuff. I actually had two cases where I directly reached out to Jordan for his thoughts, and in one case it resulted in significant changes to the KEP, but...
E
Yeah, I imagine that would probably be a SIG Arch question, and I guess I'd have to prepare an opinion. I know for sure John would have to prepare an opinion.
B
All right, any last items?
E
Let's assume we want a résumé, sort of like we do for the contributor letter, and let's see if I can pull the number out of somewhere.
E
Oh, how do you feel about, like, six, John?