From YouTube: 20210714 SIG Arch Prod Readiness
A
All right, hello everybody. This is the Kubernetes SIG Architecture production readiness subproject meeting, July 14th, 2021. Let's get started. We have the first item on the agenda: Mr. David Eads.
B
Yeah, so I had a look at what it was going to take to... well, my first thought was: can I just revert the changes back to where we were? The answer is no. There are many changes, and they're stacked on top of each other. The code is going to require some...
B

B
Can you speak up, please? Oh sorry. It had not occurred to me to do that. I had thought that this tool was ours.
A
Oh no, this tool started life as something else, and then I shoehorned in the query thing, which I think is fine, but the way the query thing is built is kind of awkward. It was built around searching the directories, which isn't really our main thing. We want to search the PRs more than the directories, and so it's kind of backwards, the way that...
A
The way it iterates through everything. So anyway, that's the... rather, I think the query was in there for something else, and then I added the PR part in. So we may just want to step back and either rework the whole query function, or have a specific one for us that just does the functions we need. It doesn't even need to be a specific tool, just a specific command.
A
That might be helpful, but I have to say, just having a spreadsheet works, since we're already tracking things. The idea, I believe, ultimately, for the enhancements team was that they would have tooling that could replace the spreadsheet eventually, but I don't know how much progress is being made on that.
A
I haven't made it to that meeting in a little while. So anyway, that's some context for you to think about how much effort you want to put in here. I found the spreadsheet to be super useful, even more so than grinding through the tool, so yeah.
B
I am... I'm willing to admit that this is probably a back-burner project for me, and we will plan to use the spreadsheet next release and see how I like it. Just because...

B

A
Yeah, what might be better to do is feed requirements to that enhancements team, if we think that the tool can work better than the spreadsheet, or if the tool eventually replaces the need for a spreadsheet because of process changes. My thinking is, those would be the two reasons. That's the approach to take for now. Okay.
B
I'll go ahead and write up a few bullet points on what I found when I was looking through the code, what I see as some of the current shortcomings, and what I was going to try to fix, and I'll just leave that in the notes here.
A
Okay, sounds good, thank you. So if there's nothing more on that, we'll go to the second item, the PR survey. A little while ago we had like 20 responses, which was nowhere near enough, so I prevailed on some people with larger Twitter followings to spread the word. And now we are at, I'm happy to say...
A
153 responses, which is actually double what we had for our first survey, so I'm pretty happy about that. I've not done an analysis of the results yet. It is annoying that people filled things in here, because this is one of the ways... I didn't realize we put a "you can fill it out" option in.
A
Looks like, you know, mostly small numbers of clusters, and obviously, not surprisingly, most people with relatively small numbers of nodes, but you know, not bad: up to a thousand. And look at this one. I mean, that's eight with 10,000 or more nodes. That's way more than last time.
A
I hadn't really prepared to... I hadn't done any analysis on this yet, like I said, but if we want to go through this here, we can. Looks like 119...
A

A
...is where most people are. So we asked this last time. And we didn't ask... that's, you know: 28 had to roll back a minor version in production. That's 28 of the people who answered, not 28 of the population, so that's actually only 11 of 155.
A

B

B

A
What's it called... BigQuery, and then I run a bunch of... there are some data analysis tools that we have that I run on it, and then I can slice it that way.
A
Yeah, you're right, probably eight of them are... the ones match either a lot of clusters or a lot of nodes. Of those who wrote back specific failing components, by far the reason and...
A

A
That's this... okay, I'm recorded here, so...
A

B
I did, but I mean, I like to think of myself as slightly more knowledgeable in a feature like that than most of the people running clusters.
B

B
It was like one of those "David personally puts his reputation on the line to enable this" things, because he personally approved the PRs to put this in, and I think there's on the order of five people in the world who could do that for that feature, right? So...
A

B

A
Can someone take a note of that in the agenda? If any questions come up as we do this, take a note of them; that way we can go back and look them up when we get to look at the raw data.
C

A

B
That one's interesting too. I wonder if that number will drop for beta features after 1.22, when the great beta cliff comes.
A
Well, I also wonder if we've moved a lot of things to GA that were sitting in beta for a long time over the last couple of releases, and if those are now available in GA and they were things people needed, they might be able to tighten their policies.
B
That's definitely what I'm hoping, and that, with time limits, things won't linger as long. So yeah, this number I'm hoping will drop, hopefully to the point where no one uses beta in production anymore.
A
I think, realistically... you'll see that when we get to the free-form responses: a lot of people say, if it's enabled by default, they're allowed to use it. So there are still going to be a lot of organizations that don't take the time to create that kind of policy and enforce that kind of policy.
A

C

A
You have to do something in your client request to actively say, "I want to be able to use beta APIs; I want to be able to use alpha APIs." It only works for APIs, but that might make the decision more explicit than just copying a manifest off the internet that happens to have "beta" in the version, which I think is what happens a lot these days.
B
There's actually a way to disable all beta APIs in bulk when launching the kube-apiserver. It is in bulk, I'll grant you; it's not a selective lens. But I think a lot will adjust once the realization sets in that beta actually means beta; it will go away. It hasn't...
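The bulk disable mentioned here corresponds to the kube-apiserver `--runtime-config` flag, which accepts a special `api/beta` key; treat this as a hedged sketch, since exact flag handling can vary by distribution and version.

```shell
# Sketch: disabling all beta APIs in bulk when launching the kube-apiserver.
# --runtime-config accepts the special keys api/all, api/ga, api/beta, api/alpha;
# api/beta=false turns off every v[0-9]+beta[0-9]+ API version at once.
# Individual group/versions can still be re-enabled alongside the bulk switch:
kube-apiserver --runtime-config=api/beta=false,batch/v1beta1=true
```

With beta versions off, requests to those group/versions fail outright, which makes lingering dependencies visible immediately.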
B
It's not that it can't; it will. When it starts, it has an end date, a lifespan that ends six releases after it gets introduced.
A
Okay, okay. So, the troubleshooting section.
A
So still, like... I think we saw last time that logs was...
A
We saw a similar pattern last time, where people still rely pretty heavily on logs, but I think this is probably dictated by the number of clusters they have under management more than anything else.
A

B
I want to compare to the previous numbers. I think doing that as a deep dive in a future meeting would be very interesting to me. But my memory is that Kubernetes Dashboard used to have a higher number on it and Prometheus used to have a lower percentage on it, and this is obviously a huge standout, yeah. And I like that progression, if my memory is correct. It has been like a year.
A
Yeah, it's okay; we're just going over the survey results. I was saying we got over 150 responses this time, which is awesome, double what we got last time. And I hate Twitter, I'm sorry, but I do, and so I don't usually use it, but this time I tweeted it, and then I asked people who are more active on Twitter, like Tim and a few other folks, to retweet it, and that seemed to make a huge difference.
C

A
All right, what reasons to not adopt? We got 103 responses to this, which is pretty good.
A

A
All right, I'm going to move a little faster through these, because there are a lot of them. Like I said, I haven't done any analysis on any of this yet. What I did last year was take the raw data and put it into BigQuery, and then we can run some things on it, where we can slice it by, like, number of nodes under management or number of clusters under management, which tend to show pretty different patterns.
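As a sketch of that kind of slicing (the dataset name and columns below are hypothetical, invented for illustration; the real survey schema is not described in the meeting), a BigQuery query from the CLI might look like:

```shell
# Hypothetical sketch: `prr_survey.responses_2021` and its column names are
# invented, standing in for the real survey table loaded into BigQuery.
bq query --use_legacy_sql=false '
SELECT node_count_bucket,
       COUNT(*)                           AS respondents,
       COUNTIF(rolled_back_minor_version) AS rollbacks
FROM `prr_survey.responses_2021`
GROUP BY node_count_bucket
ORDER BY respondents DESC'
```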
A
So the question this raises, with it coming up over and over again, is: what can we do to make that easier? I know, David, you guys in API Machinery have already done a number of things that I think will hopefully make 1.22 smoother, so there are, I think, better metrics around it, and then there are the warnings.
B

B
There are warnings for every individual client that is receiving it. Those will have existed for a year at the point in time at which we release, at least three releases. The majority of the APIs we are removing went GA, had GA versions available, in 1.16, so the GA replacement, instead of only being available for a short period of time, has been available for nearly 18 months.
B
And in fact, there are entire features that don't work using the beta APIs. So we are hoping that people have tried to keep up within the past 18 months to get themselves up to date at this point, the theory being that this first one might be bumpy.
A
I remember, I know you've sent out, I think, a couple of emails already, right, about 1.22? Yep. But I'm trying to remember the content. Do we have something in there that gives people a playbook of "here's how you identify", you know, "here's where those metrics are, here's how you figure out what resources are still using the old APIs", anything like that? Something that goes beyond warning people that this is coming and tells people: this is what you need to do?
B
There are variances per provider for determining the exact client actually making use of these old APIs, right. The only place where that personally identifying information is available is not in something like metrics; it's in the audit log. And so there are audit log scrapers; there are, you know, per cloud provider, different tools to use. You can use CloudWatch to parse through your logs, for instance, and OpenShift has an API for being able to retrieve it. So there's some variance between providers.
B
The metrics that were available were described when we introduced warnings. I guess I don't know that I have reiterated the metrics beyond linking to "here are the metrics" from the announcement emails. The announcement email was re-linked from Twitter, it was mentioned in the community call, and it has been sent out to all the lists that we can think of.
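For reference, the metric those announcements link to can be read straight off the apiserver; `apiserver_requested_deprecated_apis` has shipped since 1.19, though, as noted above, identifying the specific client still requires the audit log:

```shell
# Sketch: list which deprecated group/version/resources are still being requested.
# The gauge is labeled by group, version, resource, and removed_release;
# any series present means some client has hit that deprecated API.
kubectl get --raw /metrics | grep '^apiserver_requested_deprecated_apis'
```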
A
Oh, I'm sure it will. Okay, no... I mean, I know, right: each of the cloud providers is going to send out emails to their customers too, but it'll still come as a surprise to people. There's too much inbound traffic for people, and they miss these kinds of things. Okay, that's great. I just... I know it's totally on your radar, but let's just make sure we keep the pressure up. You could talk to...
A

B

B
But yes, any other ways you have to expose it would be great, and I would be appreciative, but...
A
Okay, okay. So, some kind of analysis here, maybe categorizing these into a few buckets of, like...
A

B

C

A
That, usually. Okay, so that's kind of the broad overview; let's leave it at that. I don't know when I'm going to have time to analyze this; I'm kind of crammed for the next several weeks at least. So if there's anybody who has spare time, I can spend some time and show you. I assume that the stuff I have I can share, like you can share docs.
A
I don't know BigQuery that well, but I've got some queries and some stored procedures from last time that take the data and munge it up a little bit. Although the data's changed a little bit, so they're not going to be completely reusable, but we'd want to try and do similar comparisons. I don't know offhand when I will have time to do that.
A
I'd love to put together a similar deck.
A
I'd want to go back... I wonder if I have a link to it somewhere... a similar analysis to what we did last time that cooked some of this down, and then a comparison. It's a fair bit of work. I can also see if some folks might be willing, excuse me, to help out with it here.
A
If there are no volunteers, I'm on the call right now because I'd like to put that together, and then we should present that in SIG Arch in general. We never actually presented the analysis last year, I think because of my negligence, so it would be nice to present it this year and be able to present the year-over-year comparisons.
B
Personally, it would be a little easier for me... I'm still ruled by the school year, so it would be a little easier for me if we could try to slice it after school starts in mid-August.
A
Oh, for sure, because the earliest I could get to this, if I'm going to do it myself, would be mid-August, based on just the demands of other parts of my job. So yeah, that's absolutely fine. I think if we shoot for...
A
I think it's too late to do a maintainer-track thing on it. I guess I don't know if maybe you guys aren't going to keep going anyway, but if we shoot for, yeah, around September-ish for being able to do a readout on it, that would be great, I think.
B
Okay, I think in mid-August I will be back with hopefully a little bit more time, and okay, I'm interested in trying to learn this new technology called BigQuery.
A
Have you ever used this not-so-new technology called SQL?
A
I certainly, at the very least, need to sit with you for a session, and I need to figure out how to share access. I have it, I think, in a particular Google Cloud project, and I would have to somehow share access to that project and figure out how to do that, so that you can get access to those data sets and run queries. And then there's, on top of BigQuery, which just does the SQL, this data analysis tool.
A

A
Okay, cool. Thank you. How much time do we have left? Okay, next item: what is this SLA? I don't know... do the rest of you?
C
Let me remind you, because I think you weren't here for the meeting where we discussed this. So when...
A

C
At the retro, one of the things we talked about was: how do we ensure that people are actually getting their PRR stuff filled out and done in a reasonable, timely fashion, and what SLAs do we want to give them? Because when we analyzed the data, we ended up doing most of the PRRs way close to the deadline, and that kind of sucks for everyone involved; it would be better to space it out.
C
So the idea was, you know, we would send a k-dev email and possibly, you know, some channel pings or whatever, saying: you need to have this filled out; we will take X amount of time to actually get to it, so if you don't have it filled out until X amount of time before the deadline, we can't guarantee it's going to get done, right?
A

C
Sounds good. Because right now there was, like, a large group of people who basically didn't fill it out until 48 hours before the deadline, which did not leave us very much time to actually look at the questionnaires.
A
Well, it doesn't leave time for back-and-forth, in particular. So, okay, it sounds like you're volunteering.
B

C
Yeah, we need to actually decide on what the SLA would be; I'm fine with doing that. If you scroll down to last meeting's notes, we have some notes about this, and the release team is going to make sure that they're going to be checking... if you scroll down a little bit further, there are some action items, yeah.
C
So the release team is going to make sure that they're actually checking the enhancements; they have a little bulleted list, and they're going to make sure that the right things are filled out for the right release, and they're going to poke people about it. But we also need to figure out what our SLA will be and then how to communicate that, because, like...
C
Forty-eight hours is unreasonable. So should that be a week? What do we want to set that to be?
C

A

C
Like, how long after their thing is initially up... how much notice do we expect them to give us, such that we will guarantee it will get reviewed in that time? Yeah, because 48 hours is not enough; 48 hours is unfair and uncomfortable, and it just, you know, leads to crunch and other bad behaviors. It would be great if... you know, the thing is, it's not that it's going to take us a week to do each and every single one, for example, but if we say, right, that...
A

C

C

A

C
Cool, so I can take an action, then, to work on templates.
C
Well, I think it's similar in that sense to the docs review, right: the final docs deadline is blah date, but the PR is expected to be open by the initial date, and then they're expected to have a draft ready for review by the next date. But the final deadline is the hard deadline, and I think it's going to be similar to that.
A
But, like, you know, you have to realize, if you're putting in that review, that you're asking for time from some particularly busy people, and for particularly complicated reviews a lot of the time, and if you do that right at the last minute, you have to expect it's not going to get in. That's just basic, to me; I guess it's just basic open-source development knowledge, and we're being a little more explicit about our expectations.
A

A

B

A
Yeah, I ran into this one too, where it was sort of like... we talked about... I'm trying to swap it back in. We talked about there being a difference between the operator knowing that a feature is working and the user knowing that their feature is working.
A
From a metrics standpoint, for example, we don't care about user error. Like, I don't want a metric that tells me that these pods can't come up because some user put something stupid in the config; you need an event for that, potentially, depending on what it is. I want metrics around things where it's system errors, not user errors.
A

C

B
That came up there, because we already added the question about: how does the user know versus how does an operator...
C
...know. There were many.
B
...where it was essentially figured out, and then there were some where it said, like, yeah, you asked the question, and I know and figured it out, but there are agents to do it. It's like, well, okay, which one do you use? Can you demonstrate to me one that actually does this, as opposed to it being theoretically possible that one could exist? And is there a level, or a bar, we think is reasonable?
B
For instance, do we think... is it cAdvisor that the node uses, or that we use in OpenShift to monitor nodes? I think so. So, you know, is that a thing where someone comes in and says, well, give us an example like cAdvisor, or cAdvisor itself, and that'll be good enough? Or do we just want to say, okay, SSH?
C

B

C
But, like, theoretically, if the only way you can find out if this thing is enabled is, you know, to SSH to the node... it's not like something that's just inherent to the version or something like that. Like, we know "oh, it's GA, and therefore it must be on in this version"; that's a reasonable way to know if it's there. But if the only way at runtime you can tell this thing is on is, like, SSH into the node, that's not really a good story.
B
Yeah, one that came up for me was XFS quota remaining, and I don't actually know if that's reported by cAdvisor somewhere. I...
C
...don't think it is. But, like, so, for example... as far as agents go, the node is not actually the solution there, but they could say, well, if you run, say, node-exporter, then maybe that exports this as such-and-such metric; that would be very helpful. Just, you know... I don't know... how are people verifying that these things they're working on are working?
A
Some agent that's reporting a metric or something is great. So I think there's two: there's enabled, which is what you were mentioning, and then there's, like, whether it's working. So, like, if I run a pod with some special feature...
A
I don't know, like, you know, user namespaces or something... like, how do I know that's working? Yeah, I mean...
C
This specifically came up with the huge pages storage medium size one, because that one did not go through PRR for anything other than GA, and then, you know, they really struggled: how do I know if this is working? Or, I mean, I guess it's GA, so it's always enabled, but how do I know if it's working? And then suddenly...
C
That was, like, a big question mark. It wasn't something that had previously been thought about as something that needed to be exposed, and so I ended up having to work with the author of that KEP to help them with the questions, because they could not... like, a lot of these people are developers; they're not thinking about how somebody is going to actually monitor and use this at scale. And I wonder if that's something where maybe we need more help available for people.
A
Well, I guess... so, like, let's take something like that, or let's say something where we're setting some kernel parameters or something, right: did they take effect? Because you can try to set kernel parameters and they don't take effect, for whatever reason, right, sometimes. And is that something we want people to be able to detect through metrics, or is that something that's low-level enough that we're like, well, you just, you know, you've got to go, like...
A
Eventually there's going to be some application failure associated with it. Like, sure, it would be nice to know that you're trying to set the UDP buffer size or something and it's not taking effect, which I've had happen, right. But the only way to do that would be for whatever code calls to set it to then check: did it actually get set? Because sometimes it doesn't even return an error, and then you're like, is that...?
A

B

B

C
Specifically for this one... this is the example that I mentioned. So, like, you know, they didn't do PRR in the first round, so they had to go back and fill a bunch of stuff in, and, you know: how can an operator determine if it's in use? Well, in this case, you know, it's something that you specify in a pod, so you can look at the pod spec and see. Like, it's GA, so there's no more on/off flag, right?
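A minimal sketch of what "look at the pod spec" means for the huge pages feature; the pod name and sizes here are illustrative, and the node must have 2Mi huge pages preallocated for this to schedule:

```shell
# Sketch: a pod that opts into huge pages. An operator can detect usage by
# scanning pod specs for hugepages-<size> resource limits or an emptyDir
# volume with medium: HugePages; no feature gate needs to be inspected.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: hugepages-demo   # illustrative name
spec:
  containers:
  - name: app
    image: registry.k8s.io/pause:3.9
    resources:
      limits:
        hugepages-2Mi: 100Mi
        memory: 100Mi
    volumeMounts:
    - mountPath: /hugepages
      name: hugepage
  volumes:
  - name: hugepage
    emptyDir:
      medium: HugePages
EOF
```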
C
What are the, like, SLIs? So specifically, I went, and we said: okay, well, let's go and look at what kube-state-metrics gives us. Like, let's not go and add a bunch of new metrics; let's go see what existing metrics we can use and then call those out, so that people can come back and look at this and be like, "I see," without having to dig through a bucket of, like, 30 billion metrics.
C
I know that this is what the author intended. And then, similarly, you know, how do you set SLOs, and what's an example of this? So specifically we were like: okay, I don't know anything about huge pages, but here is a guide for how to tune huge pages.
C
It's very Linux-kernel specific, it's not Kubernetes specific, but at least here's some documentation that talks about how it should be used and what is considered performant, because otherwise nobody had any context until we included that. And so on and so forth. And then I think we also talked a little bit about, like, you know, you...
B

C
...measure workload performance using things like node-exporter. And I think the only thing we talked about that wasn't, like, an N/A here is... let's see... yeah, so, like: how does one, like, tune this, or, like, troubleshoot if, you know, it's not meeting SLO? It's like, well, here is how you could potentially tune this. So, like, trying to make sure that all that stuff is filled in.
C
I don't think that people... especially for some of these things that have already gone to beta and are going to GA now, and they didn't go and do any of that PRR stuff, I think it's going to be kind of painful for them to fill these sorts of things out, because they didn't go back and, like, have that sort of stuff in mind on the first round. So just maybe a thing to be aware of; like, I think it's something that we can help them with.
C
I didn't want to be, like, an unnecessary blocker, but you know, this is not something that a lot of people have really done before, I think.
A
Okay, I guess for now this sounds good. We start with this and let's keep an eye on it, see what new cases come up and whether I agree that, like...
A

C

A
Yeah, I think so too. I think we point to example PRs. I haven't had a problem with people doing it wrong as much anymore, since we have example PRs for it. I'm not sure changing the directory would do any good; people would still put it in the wrong place.
A
Okay, we just have two minutes left, and I have a one o'clock. So, is there anything else?
A
Okay, well, thank you all very much, and see you in a couple of weeks.