From YouTube: 20210701 SIG Architecture Community Meeting
A: All right, well, welcome everyone to the July 1st SIG Architecture bi-weekly meeting. We've got a relatively light agenda today; there was one item proposed around the klog deprecation.

C: It says in the notes they can only attend the second half, so we may have to just punt it.

A: Yeah.
C: I have a topic I'll throw out: my KEP on revising the meaning of status, which it seems like most people are agreeing to at this point.
G: Just for context, for the recording and the meeting minutes, which KEP is this?
A: So, Tim, I feel like this came out of some of the in-place resource resizing discussions around the kubelet and checkpointing. The one thing I guess I saw was some discussion today in the Slack SIG API Machinery channel. There was a period of time where I remember speculating with Clayton, maybe Daniel, a few folks, on the benefit of a caching read-through proxy that could run co-located with kubelets, so that nodes that were disconnected, or maybe had been rebooted, could have restarted their workloads, because they would have had at least the local, previously checkpointed state that the kubelet would have seen from that co-located API server.
A: My only concern on this stuff right now is whether there would be any unintended consequence of a kubelet needing to write in order to start something in the future. But I'm not that strong on it.
D: So when we did the resource scheduling / resource metrics KEP, it clarified some of this. It's basically going back to pod safety: we've never really fully defined the guarantees we offer for pods. A couple of KEPs have come close. We went through a little bit of a review saying the kubelet does not require that status be updated before starting a container, which is a summarization of our existing state.

D: I'd be happy to go and firm that up in another KEP, or in a broader one. The KEP as it stands lays out the resource model and kind of defines the naming a little bit, but I think we could take it to another KEP, maybe combine it with the volume one, if we can get agreement that we don't require it today.
D: No matter what, there was a discussion last year on workloads that wanted to... it was basically another variation on fencing: if I lose the lease from the node, I don't want that workload to be running, which is a different kind of guarantee than what we provide today at the API server level. We didn't actually formalize that or get it into the KEP, but it's basically another form of fencing: if I haven't been able to report status, if I can't hold the lease, stop me, and then start me again when you reacquire the lease. That would be a feature request down the road, but it would be opt-in; there's no way we could make that required, because that would completely change the workload profile of every cluster in the world.
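For reference, the node lease being discussed here is the per-node heartbeat object in the kube-node-lease namespace; a quick way to look at one (the node name is a placeholder):

    # each node maintains a coordination.k8s.io Lease named after itself
    kubectl get lease -n kube-node-lease <node-name> -o yaml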
D: On the workload side, you would have to say that you tolerate this much API server connectivity loss from the node. However we do it, it's a mechanism, but it's a workload-specific thing: this workload is different from this node. You can't blanket-change the behavior of workloads in kube without breaking behavior.
A: Sorry, Jordan, a concrete example that came up in SIG Node a few months ago was, I think, a particular SDN. If it's not Calico I'm sorry, but I think Calico is what came to mind, where there was some need to be able to read something from pod status during a pre-start or post-start hook that the kubelet did not make any guarantees about having written back yet. So it's those types of intersection points. I was a little concerned that if we have to start making guarantees that kubelets will make write-backs somewhere in the normal pod startup lifecycle, that would be problematic. But I'm not unhappy with the resource resizing design and the loosening of the concept here; it was just a thing we have to be careful about for other areas.
C: When you were saying, Derek, writing before starting, the main place I had seen that was on node status, as a fencing thing. If some central controller thinks this node has lost its mind, and has put a change or a condition or something on it that says "all right, all your workloads are dead to me, I'm going to go reschedule all your workloads," then, thinking forward, the node on startup, before it starts re-running API-sourced pods, does something to its own status to say "hey, I'm alive," to make sure that it either gets a conflict or observes, "oh, the central thing thought I was dead."
C: Some sort of node-level fencing on status, right. I think we may already do something like that: the kubelet already waits until it observes its own API labels and taints and so on before it replays API pods.
A: Say I had a power loss, and I expect the function to still keep working in that local site when there was a network disruption; that's a very unclear subtlety to a lot of people. What that has led to is folks potentially looking at running more single-node, or small, or remote edge clusters, where they don't have those network gaps. But I wouldn't be surprised if the broader world is unaware of the fact that they would potentially be vulnerable to this type of thing. And by the way, it's fine as it is now; I'm just making you aware that others have been confused, Eric.
A: Yeah, so we had a race condition, Jordan, a while back, where the node hadn't reconciled its status object, and so the locally understood labels or tolerations were not understood. So we had to make sure that on kubelet restart the node informer has synced at least once when it's expecting to re-establish running the local workloads. Does that make sense?
C: We start running pods from the API; we want to admit them based on node info that's also from the API. Yeah, I figured that's what it was, I just wanted to check. Tim, looking at your KEP, it looks like there are two main points. The one about status being durable, and dropping the "could we reproduce it from observation" requirement, I think that's totally in bounds; we already do that all over the place, and we rely on it heavily in many, many places. That bit, I think, is totally fine. The subdividing of authentication, or subdividing access to status, I think is a good question and something we should give guidance on. To me that doesn't seem exactly the same; I don't know if it belongs in the same one, if there's more content around it.
C: ...that using status, because of RBAC, gets into authorization examples, and I just wasn't sure. Sorry, are you looking at the KEP or are you looking at the... Yeah, I'm looking at your, like, 2527, the PR.
D: The paragraph up above, on line 71, to me seemed pretty clear: it's a description of what you can do, and so it's not a normative thing, it's an advisory thing.
C: The point of paragraph two was simply to say, when I write the doc... maybe it makes more sense to actually look at the document PR, which is this one. Too many windows; where'd you guys go... there you are.
C: So at lines 16-24 I wrote a little bit about when it makes sense to use different patterns, and what's been done in the past. It is not intended to be normative; it's just, these are your options, and these are some of the trade-offs. And apparently there's a typo in "appropriate" I'll have to fix.
C: So if you're concerned about it, I'm happy to split it up. They felt like part of the same conversation to me. When I have the conversation with people about "don't use status for this" (and yes, Derek, it didn't start with the in-place updates), it was like, well, we can't use status for that because the rules say so. Well, now the rules don't say so; here's when it's appropriate and when you might want to think about doing it differently. Yeah, okay, I don't really care how we update it.
C: I don't care about breaking this apart, I just... it sounded like, David, maybe you had an objection; I don't know if it was to that bit or to what I was saying. No, mine were things that were not contentious, but David's objection, I think, was mostly that it's really nice when status is purely from observation, and I think we all agree that it is nice; it's just not always possible or reasonable to require it. A good way to frame that might be: if it is not a one-way thing from observation...
C: ...you have to define what happens when the recorded status and the observation differ. If it's no longer purely flowing from observation, now you've got reconciliation of status and the real world, and you have to define that, and it gets into a per-API thing. The kubelet has to deal with this today with pod status and container status, right? It has sort of a monotonic state-machine guarantee, like "I never transition from this state back to that state," and so it has to deal with situations like, well...
D: Yeah, and the original goal of spec/status was to separate the primary author's intent from the realized outcome. There's nothing about "realized outcome" that means it doesn't have inertia, and there's an infinite number of use cases. Spec and status are how we divide the world into two parts, where spec is what you're mostly dealing with, and status is telling you how the outcome went.
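As a concrete illustration of that split, the same object carries both halves; a minimal sketch with kubectl (the pod name is hypothetical):

    # .spec holds the author's intent, .status records the observed outcome
    kubectl get pod example-pod -o jsonpath='{.spec.nodeName}{"\n"}{.status.phase}{"\n"}'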
C: Cool, let's stop on this. If people want to jump on the words in the KEP or in the PR, I'm happy to adapt, or if everybody's happy with it, Jordan will eventually get around to approving it. But in the meantime I'll start adjusting my guidance to APIs.
B: No, we have conformance people who are here, so Riaan can go right now.
F: Thanks, Kirsten. We had very light attendance at the conformance meeting this week, so if there's anybody that can give us some eyes on the three PRs that I've got up for promotion, those will still get in for this release, which will take us to 17 new endpoints this release. They've done their two weeks on the testgrid and they're ready to roll, and the nice thing is that will take us to 88 percent coverage of the apps endpoints, with only seven apps endpoints remaining, which we will be targeting in the next release.
F: ...to review those after the meeting. Also, thanks, Leighton, I really appreciate it, and that is it for conformance. Just a side note: there is one extra test with two endpoints for API Machinery which we dropped in the channel. It won't make it for test freeze this release, so we would appreciate some eyes on it at some stage to get it merged, so we can get the testgrid data going and it'll be an early win for the next release.
B: Sure. We have Elana up next, if you'd like to go.
G: Yeah, I just figured since the conformance folks were on the call I might raise this one, because it's something we've been talking about in SIG Node. So SIG Node has some tests that are labeled "node conformance," and I asked the question: how is a node conformance test different from a conformance test? They are very different, and the node conformance tests, from Dims' explanation, were attempting to provide sort of a conformance-esque test for the CRI. But a lot of people in node have been trying to add conformance tests or graduate various features, and then they come back and ask, well, is it supposed to be a node conformance? Is it a conformance? Which is it? So I think that we should maybe rename node conformance to CRI conformance and add some docs, and I just wanted to get people's takes on that. I see Hippie is giving me a thumbs up.
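For context, both labels show up as bracketed tags embedded in e2e test names, so suites are selected by Ginkgo focus expressions; a rough sketch of how the two are filtered (invocation details vary by job):

    # the Kubernetes conformance suite selects on the [Conformance] tag
    e2e.test --ginkgo.focus='\[Conformance\]'
    # the node conformance tests carry a [NodeConformance] tag instead
    e2e.test --ginkgo.focus='\[NodeConformance\]'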
A: Some huge-page testing would go in there. The label doesn't have any particular meaning at this time, but it's definitely more expansive than the CRI. Okay.
G: I know that we've been wanting to at least try to clarify this, so I don't know if anyone else has thoughts.
E: I'd say once a release at least, if not twice, we'll have someone popping in about node conformance and jumping into the conformance group. It would help to have at least a really clear sign along the road when people are working on CRI or node things, because just having the word "conformance" in there has such a strong connotation with the overall Kubernetes conformance, even though it's not a tag, it hasn't got the brackets around it.
G: This is helpful context, because a lot of the folks who have been trying to figure out what we should do with the node conformance tests don't have any of the history, so we're like: I don't know what this is for. Can we get rid of it? Can we rename it so it's more accurate? Should we write some docs for it? So this is all history that, for example, that original issue, I don't think anybody in the CI subgroup is familiar with. So, all right.
A: I was an archivist there, just like Jordan, so I want us to be able to keep the label. The one area where I do see some of this stuff being useful is, I know we had an issue about tests that we lack the infrastructure to test but that are still useful. Like the one I just approved this week, the multiple huge page sizes support: you're not going to get one-gig huge pages in our test infrastructure, but it is something that we... If you look in the Google doc, I think, when this was first written, there was some concept of a special feature label that might be infrastructure-specific or hardware-specific, and if anything, maybe we could go and vet the existing tests that have that label and see if we should think about re-classifying them.
C: I've advocated in the past, and I'd like to continue to advocate, that directionally it would be super useful to get a distinct suite of tests that allows us to test things like one-gig huge pages without turning on an entire cluster. It should be possible to have a node-oriented e2e suite that just brings up a VM, runs the kubelet on it, feeds it some static pods, and verifies that it does in fact program one-gig huge pages correctly. Ditto for things like kube-proxy.
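A node-only suite along these lines already exists in-tree as the node e2e job; a minimal sketch of invoking it from a kubernetes/kubernetes checkout (the focus string is just an example, and whether one-gig huge pages work depends on the host it runs against):

    # runs the kubelet plus node e2e tests against a single machine, no full cluster
    make test-e2e-node FOCUS="HugePages"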
C: I can do all sorts of manipulations of kube-proxy that really don't make sense in the... I don't need a whole cluster to turn them up for, and they're disruptive, et cetera, but we don't have that. So I would like, over time, for us to end up with our own batteries of tests, because I think the kubelet and kube-proxy are just intensive enough that that's useful. Yeah.
D: I mean, node e2e is still a little opinionated. It gets tied into a lot of... we never really quite clarified the assumptions it makes about the environment it runs in, so it works, with a little bit of hacking, on a fairly wide range of distros and environments, and we'd have to make some opinionated choices there. Kube-proxy, I think, is a little bit easier, but I'm thinking about the various problems with IPVS and iptables over the years from different versions. I think I agree, Tim; I've felt the frustration of not being able to get node e2e working in more environments, and it needs some attention for sure.
C: Yeah, I mean, just picking on kube-proxy, you touched on one of the main points: we don't really exercise all the different kube-proxy modes. We sort of exercise the one that is the default, and we assume that the rest mostly work and that people are going to complain if they don't, right? It would be nice if it was easier for me to say, just bring up a machine.
G: I don't know if anybody has anything else to add. Definitely this will all be useful to bring back to the node CI subgroup. Oh, and Marek just joined, so I can wrap up my item. Thanks, everyone.
E: Yes, thank you. I just noticed, as I was reading the Kubernetes community annual report, that stuff from SIG Architecture was mentioned a few times, in particular in the community milestones. There's a list of them, and the first three are some really cool numbers: a hundred thousand issues and PRs, fifty thousand contributors, and, of importance to the work my team works on, 75 percent of API endpoints. We finally got there, and I actually took the time to take a picture and draw a line. That is some hard-earned work that made the news, and, Riaan, we should open your favorite beverage and have a celebration. Thank you for your support on that long-term, important initiative for the community. Yay.
B: Thanks for the update, and thanks for being here so early; 6am is very early.
H: Yeah, yeah. I hope that I didn't cause too much of a problem; apparently open source committees like to meet at the exact same time every month, yeah. So let me go into the topic, because I wanted to bring up one of the, let's say, smaller proposals. It's a proposal that started really small, but I think it's growing and will possibly need help.
H: I wanted to get overall feedback on the idea of introducing, let's say, major changes to the flags and logging features that are provided on the Kubernetes components' side. Here I mean the core components, like the API server, scheduler and controller manager. Overall, what I'm trying to solve is that over time we have aggregated a lot of features in klog, the Kubernetes logging library, and those features have, over time, degraded in their quality.
H: Maybe a lack of investment resulted in some features even conflicting, or in flags being really confusing for users, and currently the problem is that the current state of klog flags really blocks further development of logging, of structured logging, and especially of introducing alternative logging formats, here JSON.
H: So I wrote down a proposal to basically unblock the development of JSON and other future logging formats if needed, to improve the quality of logging, and to take the flags that are implemented by klog and make them standard flags in Kubernetes, as all the remaining flags in Kubernetes were migrated to the new standard defined by the component standard working group.
H: Currently the proposal, or what I propose, is to remove most of those flags. But I think the most impactful change is removing the possibility for Kubernetes components to write logs to files. I think this is a pretty big change that requires a discussion in this bigger group, and so I wanted to get your feedback.
H: The current reasoning is that the surface of this feature, klog writing logs to files, and the number of flags that are required to do it successfully, flexibly and customizably for the user, is pretty big; we have something like eight flags just dedicated to writing logs to files. This abundance of flags and features resulted in basically the klog maintainers giving up, and in deciding that we cannot support or use these flags in Kubernetes scale tests, where we basically wrote a wrapper as an easier way to maintain it.
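For orientation, the file-output knobs in question are of roughly this shape (flag names are approximate and exact spellings and defaults vary by component and release; this is an illustration, not an exhaustive list):

    # today a component can be pointed at files instead of, or in addition to, stderr:
    kube-scheduler --logtostderr=false --log-file=/var/log/kube-scheduler.log --log-file-max-size=100
    # plus related klog flags such as --log-dir, --alsologtostderr,
    # --stderrthreshold and --add-dir-header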
H: I don't want to take more time. I just wanted to get your feedback and maybe start the discussion: basically, to learn whether it should be a KEP, how big a discussion we want, how to verify that we will not break users too much, and how to make this visible. Because this is a big change, we want to make sure that people know about it and have a chance to give their opinion.
G: I would maybe go further and say: not only are a lot of people using klog, but a lot of people are using those flags, at least from what I can tell scrubbing bugs last week for node. A lot of people are filing bugs about the various logging-to-file things and how they expect them to behave versus how they actually behave, and whatnot. I think they're used pretty widely in production right now for all the various components, at least for, you know, the text format.
H: Yeah, so for deprecating, what I propose is not to make any differentiation and to go through the full deprecation process of at least three, maybe four releases to remove them, and, yeah, let people prepare, as for any flags. I agree with that feedback, and I've also seen issues saying that using klog flags is really confusing, because to maintain backward compatibility we broke, or we didn't...
G: So, one of the things: I think Dims had filed an issue talking about possibly doing a v3 of klog. Is that something that we could schedule to coincide with this? Like, okay, you know, v2, whatever, we're moving to v3, we're not going to support these flags anymore because we don't care about them. Do we have something open for that?
C: Yeah, the move to v2 was actually really hard. Even moving to v2 meant going upstream to a lot of dependencies and helping them convert to v2, and then simultaneously picking up all of those updates and switching ourselves to v2, and at the time we kind of said, this was terrible, never again. I can only imagine it is more pervasive and worse now than it was then; I can't imagine usage has shrunk.
C: Let's also be clear, there are two separate issues. There's "should we remove these options from klog," which affects all the people who are using klog for things other than Kubernetes itself; and there's "should we change the flags that you specify to kube-apiserver and kube-scheduler to remove those options." Those can be two distinct conversations.
D: I don't have anything productive; it's more that we're growing the scope of our logging library, and we're not a project dedicated to maintaining a logging library. So it might be good, as part of this, when we tighten whatever decision we make, to put a few bounds on what klog is. The fact that our dependencies use it requires us to effectively move an API through a large number of dependencies, and that means the API needs to be pretty stable.
D: ...klog, right. And then there's the "is it compatible" question: I was thinking about this from the perspective of, is it compatible with our statement, given I know there are people in the ecosystem using it? Is it compatible with the goals that we have for logging in kube and our dependencies? Is it responsible for us to couple that to a general-purpose logging library? That is, again, something that doesn't have to be coupled with this, but it does...
H: Yeah, so for a v3 migration: the proposal definitely tries to work around the issue of touching klog and rolling out a v3, because I expect that at the point we want to make that decision, I would want to be more sure that we are migrating to logr instead, and let the community say, oh, we are compatible and here's a ready implementation, you can already plug logr into v2 and it works, instead of defining a new API and scoping it.
C: Is it useful to start the process, for the binaries that we produce, not for the library itself but for the binaries that we produce which use the library, of restricting which flags we're willing to accept? It doesn't even need to be the entire set of flags that kube-apiserver supports today. So could we start by saying we're going to remove the also-log-to-stderr flag, and that behavior is just going to become the default, and here's the migration plan for how to do that?
H: So when I tried removing them like that, I basically got back to writing to files: log-to-stderr and also-log-to-stderr are basically a workaround for when you want to write to both files and stderr, and then all the other flags are generated from that, like, you want to write to this file, but you also want to write different priorities or different verbosities to different files.
H: This is all complicated. We could basically try to untangle that mess, but I don't know if that's... I think we will get, I don't know, much less pushback if we just remove all of them and say that there is a tool like that instead.
H: The proposal mentions go-runner, which is a proxy that Dims implemented to basically read the API server's logs, or other components', and write them to a file for our kube-up tests. We could just say, here, you can configure all those features in this one binary in the same way, instead of having part of the API in the components and part of the API in the...
C: But it does help us get control back of our binaries. Yeah, maybe, but it's not...
C: I think that's what the proposal was saying: what are the logging flags we want on the core Kubernetes components, starting basically from nothing and saying what we need. So, verbosity, and then the bit that lets you turn verbosity up or down on particular files or components; those two. It proposes, we need these, and then for everything else it's, do we want any of these? For me, the big divide is mostly between whether you give the ability to write to a file or not. I can buy that the API server distinguishing between standard out and standard error is not particularly useful, like, having a single output stream from the API server.
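A minimal sketch of the direction being described, keeping only verbosity control and a structured-output switch on the component side (assumed flag set; the final list was still open at this point):

    # overall verbosity, a per-file override, and the output format; everything
    # else (file writing, stream splitting) is left to whatever wraps the process
    kube-apiserver --v=2 --vmodule=httplog=4 --logging-format=json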
C: We could stop exposing, through our components, the "do you allow writing to a file or not." To me, that is the biggest open question, and the distinction there, as opposed to just using a redirect on the command line.
G: If I remember, glog, well, klog, will do some rotation itself, won't it? Right, as soon as you allow writing to a file, you have to basically bake in rotation; otherwise you have created a disk-full foot gun for people.
C: I mean, the point is, if you only allow stdout or stderr, then you have to... logrotate, I think, has an option to send a signal to the writing process telling it to close and reopen the output, but how do you close and reopen the output if you're redirecting to a file that you don't have a name for, right? So you seek back to the beginning... every program has to code for that, which is just a disaster.
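A sketch of the interaction being described, assuming an external rotator and a writer that only has an inherited shell redirect; the signal handling here is hypothetical, not something the components do today:

    # rename-and-signal rotation only works if the writer can reopen its log by name
    mv /var/log/kube-apiserver.log /var/log/kube-apiserver.log.1
    kill -USR1 "$(pidof kube-apiserver)"   # hypothetical "reopen your log file" signal
    # with a bare redirect there is no name to reopen, so the alternative is
    # truncate-in-place (logrotate's copytruncate), after which the writer has to
    # seek back to offset zero or it keeps writing at the old offset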
C: I'm totally not versed in the state of the art around this. I've been burned in the past, and I'm trying never to touch it again. It worries me that the library tries to do its own rotation when, you know, there are lots of external tools that have much more configurable policies around rotation. It feels like that...
G: So, in terms of concrete steps forward, are we in agreement that we split this proposal into sort of two pieces: piece one is stop using these flags in Kubernetes components, piece two is maybe get rid of them from klog entirely? Are we in a position where we can say, yes, we want to get rid of the billion flags in Kubernetes components and go through the standard deprecation process to do that, and maybe, for moving forward in klog, we at least want to explore the potential of removing those down the road? Yeah, that's what I'm saying; it sounds like yes, we definitely want to do that, and then the future of klog is much more nebulous.
C: Yeah, I think my perspective is that we have something like six flags around output stream weirdness. That seems like a good first place to focus and say, can we just say, if you're going to write to an output stream, this is how it works in Kubernetes components. And so the first piece is to say, internally...
C: It may be there in the code and we just don't know about it, or we're not managing it properly. You know, the truth is, I don't know about you, Marek, but I'm certainly not an expert in the internals of glog and klog. I used it and the published API, but I never really took it apart.
H: Yeah, I did, just before the proposal; I read through some of the code, but it was still surprising to me what some of it does.
H: Flushing, like, I found out that all Kubernetes components do a disk sync every five seconds; like, you should write your file logs and then sync the file system... yeah, that's a great idea.
C: Yes, I remember writing some of that at the beginning, like the flush at the end. So yes, I think we have general support for: yes, let's make it more sane; yes, let's cut off everything that we can afford to cut off. I like Jordan's phrasing: let's figure out what is sane for Kubernetes components to do, and then move forward from that.
C: Yeah, I don't know about go-runner. My recollection is that go-runner is mostly joining standard out and standard error into a single output stream; I don't think it did file stuff. So the questions about how this interfaces with rotators, and whether you can send it a signal to make it reopen the file handle, I don't know, so we would need to do that.
H: Yes, okay, but it's still worth being diligent: if we prepare it and make it proper, make it something that has a set of requirements, is it still sane or not? Is it the direct fallback that will allow us to move forward? If we have a set of requirements and we fulfill them, we know how it behaves in those file rotation situations, and there is documentation for users.
C: I think it could be... I don't know. It's unfortunate Dims isn't here; he was very involved in go-runner and klog and a lot of the work that went into that, so I'd probably defer to people who have more knowledge.