From YouTube: Kubernetes SIG Node 20230425
Description
SIG Node weekly meeting. Agenda and notes: https://docs.google.com/document/d/1Ne57gvidMEWXR70OxxnRkYquAoMpt56o75oZtg-OeBg/edit#heading=h.adoto8roitwq
GMT20230425-170542_Recording_1500x1120.mp4
A
Hey.

C
Hey. Yeah, so first thing: yes, I did put my PR in the agenda. That's because someone at the Contributor Summit said this was the best way to get an approver, so I snuck it in there. But that's not the most substantive thing that I have; I wanted to get a feel of the room for something else. But yes, please, if someone could look at that.
C
That would be brilliant. Cool, moving on, though. Yeah, I wanted to get a feel of the room for the behavior of startup probes and readiness probes with container unhealthy events before committing to anything more serious right now. Obviously, with a startup probe or a readiness probe it's kind of expected that they'll fail a few times before going through. When they do this, obviously, they emit container unhealthy events, just like any other probe would, and at least at Uber we've been finding that sometimes that's been eating away the entire budget of 25 events per object, or near enough of it that it became painful. I guess the meta question here is, first off: do folks think that people find these events valuable, or should we turn it the other way? And yeah, that's basically it. I've written down most of my thoughts on the document, really, but I just wanted to start a discussion.
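For context, the pattern being described can be sketched with a hypothetical pod spec (the name, image, ports, and thresholds below are illustrative, not from the meeting): a startup probe is configured to tolerate many failures while the app boots, and each failed check can emit an Unhealthy event against the pod.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: slow-starter            # hypothetical example pod
spec:
  containers:
  - name: app
    image: example.com/app:1.0  # placeholder image
    startupProbe:
      httpGet:
        path: /healthz
        port: 8080
      periodSeconds: 10
      failureThreshold: 30      # up to 30 failures (~5 min) are expected
                                # while the app starts; each failed check
                                # can emit an Unhealthy event on the pod
    readinessProbe:
      httpGet:
        path: /ready
        port: 8080
      periodSeconds: 5
      failureThreshold: 3
```

With settings like these, a slow-starting container can plausibly emit well over 25 Unhealthy events, which is the per-object budget mentioned earlier.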
D
I mean, so I think: could you, in your setting, and particularly with the comment on the 25 events per resource budget, say what your goals were, what you're trying to get from looking at the events? Do you actually have machine processes that are looking at the events? Are you driving anything off of them?
C
Oh,
you
know
well,
we'll
just
wait
until
the
we
get
the
amount
of
events,
that's
equal
to
the
failure
threshold
and
move
on.
Obviously
that
doesn't
work
if
your
failure
threshold
is
above
25
minus.
However
many
events
you
have
on
the
resource
itself,
so
that's
pretty
annoying.
C
The
other
meta
thing,
though,
is
that
once
it's
once
the
startup
provable
Readiness
probe,
has
eaten
all
of
these
events
up
then
any
other
event
that
happens
on
that
resource
is
also
just
not
going
to
appear
on
the
API
server
anymore,
and
that
can
be
annoying
at
some
times.
It
means
that
it
means
there's
an
imperfect
field.
Of
course,
events.
Events
in
kubernetes
are
kind
of
a
best
effort
delivery,
but
extremely
best
effort.
I
guess
is
the
way
to
put
it.
D
Let me try to repeat back, then, what I heard: you had service owners who you're suggesting should respond to a particular event, and so does that imply that there's, like, a machine process that's watching for it?
C
We look at those too, but I think the best way to put it is we try to tie the two together: so, your pod died, and the reason it died is because this probe failed. That sort of thing.
D
I guess I was just trying to see if the dependency on the event was a symptom of something missing on the API resource itself, or if there was some other correlation that you're performing. It sounds like the latter, so.
C
The matter for me, when I was looking at this, was that the events seemed, in my head, pretty expensive: you only get 25 of them, and then you get another one every five minutes. To spend them on saying "by the way, the startup probe reported the container as unhealthy", which is kind of what we expect and doesn't actually lead to any change in the system, feels a bit odd. But maybe I'm wrong here.
C
I think earlier I said liveness probes; no, sorry, I misspoke, I said the wrong word. But yes, that can be particularly annoying: if the readiness probe fails so many times, then goes through, and then the liveness probe fails, well, guess what, there's no event for that.
A
So I tried to summarize all the improvements we will be making in probes, and what is planned in terms of bugs and feature requests. I split it into multiple buckets, and this bucket is largely about troubleshooting. From a troubleshooting perspective, we have a few problems. The first problem is that we messed up a dimension for a metric a little bit.
A
So
now
we
don't
point
a
specific
port
and
we
point
to
like
Port
from
replica
set,
so
we
cast
now
out
of
some
Randomness
out
of
metric
Dimension
and
it
makes
metric
less
usable
in
some
cases
and
more
usable
in
other
cases,
but
it's
like
in
the
middle,
it's
not
like
one
or
another.
So
this
needs
to
be
cleaned
up
this
one
yeah.
This
is
what
she's
working
on
the
first
PR
that
was
posted
in
the
in
the
agenda.
A
So this is a PR that Lucy is working on. Another thing we noticed is that we have very good troubleshooting for failed probes, but very poor troubleshooting for successful probes. What we observe sometimes is that somebody will configure probes, but then there is no way to see why those probes are never failing. There is no troubleshooting at the default log verbosity, and there are no events on a successful probe, so we never know. It may be, for example, that customers are hitting some benign page; an outright rejection would be handled, but let's say it returns some 205 or something that is not really success, but is successful enough that the probe succeeded, so they're not testing what they need to be testing. This is another thing I think needs to be improved.
A
This one, I think, is from old issues; maybe it's already resolved. Another thing is that we have a problem with exec probes: an invalid command in an exec probe will not trigger a failure, so if the command is invalid, we keep reporting it as a success, and some people got confused by that. But I think this may already be resolved; I might be wrong. So those are all the improvements we can make in probes.
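As a hedged illustration of the exec-probe point just raised (the command path below is hypothetical), an exec probe whose command cannot even be started might look like this; the concern described is that such a misconfiguration may not surface as a probe failure:

```yaml
# Hypothetical container fragment, not a spec discussed in the meeting.
livenessProbe:
  exec:
    command:
    - /opt/health/check.sh   # hypothetical path; the reported issue is
                             # that if this command is invalid, the probe
                             # may not register as failing
  periodSeconds: 10
```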
A
I think, Lucy, you are looking into that. And the proposal you're mentioning here is: let's report fewer of the failures and more of the other events.
C
Yeah, I mean, the matter here is that I think we shouldn't report events to users unless they cause a change in the state of the system; in my head, anyway. Obviously that's my wild opinion there. But they're very expensive to report, and we're burning them all on the startup probe.
C
And the readiness probe feels like it is too, especially because a lot of users, at least users I know, actually don't even know about the 25-events-per-object rate limit, and just become extremely confused when suddenly their events stop appearing. So yeah. By the way, I've been looking into a few of those. I just didn't want to raise PRs against some of those issues because of the PR I already have up, and I didn't want to deal with the merge conflict.
C
It's complicated. Tomorrow I'm going to be working on Kubernetes all day, because it's my one day of the week where I just do k8s for the whole day, so I can hit them then, if you want, and at least get some drafts ready.
A
Okay, so I think the biggest problem in resolving any of this situation, like reporting fewer of the unhealthy events, will be to make sure that we don't break anything from a compatibility perspective. If customers expect some of those events, we shouldn't break that experience.
D
I think, historically, Sergey, we have been a little more loose on events and their frequency, and obviously we've had to be, as we went from events having an unlimited budget, at one point very early in Kube, to now a more constrained budget per resource. I guess what I'm trying to reason through is, one: let's see, I'll look at the PR and the comments in depth a little bit.
D
Maybe
you
could
help
for
the
average
pod
in
your
environment
like
what
are
the
numbers
of
new
events
that
you
see
per
pod?
Is
it
or
how
do
you
see
this
actually
changing
the
typical
average
pod,
like.
C
I mean, a practical example from today is that we had quite a few people confused and looking into a service that took a pretty long time to start up and didn't have a long interval or grace period before checking its startup probes. That pod, what's the word, immediately burned all 25 events: it burned like seven on pulling the container, on scheduling, and some other stuff, and then it burned the remaining 18 events on the startup probe. After that, no more events were emitted, which caused, what was it? I can't remember exactly, sorry, it's been a long day. But that's a more practical example from today.
D
But
none
of
that
is
one
should
expect
fewer
failure
events,
but
one
should
always
expect
one
success
event.
C
Yeah
I
think
that's
pretty
fair,
I.
Think
in
my
head.
It's
just
the
the
usefulness
to
a
user
of
getting
the
same
failure
event
for
a
Readiness
or
startup
probe.
You
know
18
times,
there's
not
much
use
beyond
the
first
time
in
my
opinion,
but
maybe
you're
gonna.
Maybe
you're
gonna
disagree
with
me
there,
but
in
my
head
it's
it's
not
particularly
useful
to
a
user
to
know
that
your
startup
probe
hasn't
passed.
Yet
sorry,
no,
not
that
it's
not
even
that!
It's
that
your
startup
probe!
C
It
is
useful
to
use
the
snow
that
your
startup
probe
has
passed.
It's
not
useful
for
a
user
to
get
to
Startup
probe,
saying
hey
I'm
still
looking
by
the
way,
nothing
yet,
but
I'm
still
looking
18
times
and
then
when
it
finally
does
pass.
If
it
obviously
doesn't
emit
anything
right
now,
but
if
it
was
to
emit
an
event
when
it
passed
because
of
my
PR
that
I've
got
right
now
that
have
been
saying
hey
by
the
way
I've
just
passed,
it
wouldn't
go
through.
Oh
yes,
go
ahead.
D
Yeah
and
then
it
could
be
I'm
also
maybe
blurring
the
original
event
API
with
the
subsequent
replacement.
But
we
used
to
have
a
count
that
incremented
for
the
number
of
times
the
action
had
occurred,
and
so
what
I
was
wondering
is
if
a
lack
of
incrementing
on
that
account
and
I
can't
remember
if
that's
in
the
current
event
API.
So
if
you
could
Refresh
on
what
you're
seeing
that
would
actually
be
helpful,
always
I'll
check
after
the
call,
but.
D
The event collector used to add a count for similar seen events, and so then, if a probe was happening and you saw the count, maybe you could say, "oh, I know I'm actually being probed or not." Either way, that's the only thing that gives me pause; I just want to check that. But otherwise the general idea of your PR seems perfectly sensible.
C
The
PLS,
by
the
way,
aren't
exactly
related
in
our
Soul's
a
different
issue,
but
this
is
just
me
kind
of
feeling
the
water
before
committing
to
actually
like
writing
an
issue
and
writing
a
PR.
For
this.
C
Or
getting
a
feel
for
whether
it's
even
something
that
people
would
want.
E
Yeah, I was going to just jump in here. The event recorder has a max retry limit of 12, and so it's possible we could be dropping events here if we don't send all of them to the server; something to keep in mind on this PR.
D
Yeah
so
historically
the
the
action
all
the
compaction
and
decision
to
be
met
with
all
client
side,
so
yeah.
That's
why
I
like
historically
the
you
shouldn't,
expect
an
event
to
be
present,
because
there
wasn't
necessarily
a
guaranteed
phone
home
from
the
component
question
to
to
the
server
to
let
you
even
know
what
happened
comes
into
play
here.
D
You
know
so
that
these
are
just
I'm
trying
to
ironically
enough
as
we're
talking
about
caching
events
and
throwing
away
events
I'm,
trying
to
also
cash
into
my
own
brand
here,
a
little
bit.
The
latest
State
on
where
we
were
inventing
so
for
for
this
item,
I
think
getting
a
tracked
in
the
broader
doc
that
Sergey
showed
is
super
helpful
and
then
right
now,
at
least
for
me,
I,
don't
want
to
speak
for
everybody
else
like
getting
redundant.
D
Events
doesn't
seem
to
be
providing
a
ton
of
value,
and
so
minimizing
those
would
be
good,
and
it
was
just
a
question.
Then,
if
people
are
looking
at
the
event
for
a
frequency
to
figure
out
like
if
a
kubernetes
component
has
actually
stalled,
that's
the
only
thing
I'm
trying
to
balance
in
my
head,
but
I.
C
That
is
actually
what
one
of
the
teams
that
Uber
was
trying
to
do
and
then
got
really
confused
by
this
rate
limit,
because
it
stopped
them
doing
it
and
that's
how
this
originally
came
up
last
week,
while
I
was
at
kubecon
and
then
I
just
got
back
to
it.
B
Was
gonna
say
that
yes,
we'll
get
a
bunch
of
events
that
are
identical
really
rapidly,
it's
very
useful,
but
getting
them
once
per
hour
can
be
useful
because
you
know
events
expire
and
you
come
back
several
hours
later
and
ask
a
user
like
hey.
Can
you
is
sometimes
useful
to
have
those
events
repeated
just
because
otherwise
you're
asking
online?
Do
you
have
some
collector
set
up
to
collect
all
these
events
where
they
lock
somewhere
and
if
not,
that's,
okay,
the
events
are
gone.
I,
don't
understand.
Emerald.
A
Yeah
I
I,
I
I
could
even
start
like
all
the
flakes
on
the
residence
props
like
you.
Even
if
rope
has
like
a
threshold
of
three
but
then
you
periodically
need
the
bad
patches
of
not
ready.
You
want
to
know
about
those
bad
patches.
You
want
to
know
that
your
series
may
be
going
down
like
and
it
has
two
residence
events
like
and
two
residency
once
again
so
yeah
knowing
about
this
place
is.
C
Yeah, I mean, in my head the useful stuff to me is: "the readiness probe failed" is useful, the next X times aren't useful, but then "the readiness probe has now succeeded" is also useful, and I don't want that one to be drowned out by the repetitions that exist today.
B
I would say it sort of depends on the sophistication of the monitoring system. If all your events are being aggregated, and, yeah, occasionally some get dropped or whatever, it doesn't matter, because you can go look at them. But if you're just, like, a brand new Kubernetes user, you don't have any of that set up, and you're just trying to figure out why something broke last night.
C
Yeah, I'm trying to think. No, this use case wouldn't exactly be covered by that, because, while the probe state would change, maybe, you know, the night before, the probe could still be in that old state by the next morning, and the event will be gone by then. So this doesn't exactly cover this issue.
A
Okay, thank you. I think everybody on the call agrees that some write-up is needed, and that generally it's a good problem.
A
Yeah
since
I
already
started
showing
this
document,
I
will
just
jump
in
with
my
agenda
item,
and
so
this
document
has
more
things
that
we're
working
on.
There
are
like
a
few
good
good.
First
issues
like
the
usage
hpu
objects,
somebody
working
on
that.
It's
just
pure,
like
program,
programmatic
kind
of
like
refactorings
that
will
improve
performance.
A
There
is
like
synchronized
probes
issue
with
not
respecting
initial
delay
seconds
I.
Think
Matthias
has
a
PR
open
for
maybe
already
a
year,
so
it
needs
to
be
looked
at
and
maybe
you
can
rewipe
and
like
talk
as
much
as,
and
then
there
are
other
things
like
I.
Think
it's
the
biggest
thing,
if
you
fix
it
will
be
great.
Is
this
one
I
don't
know
what
so
the
issue
is
that
when
Kobe
3
starts,
it
results
all
the
prop
statuses
down
to
false.
A
So
if
Port
was
success
like
ready
and
then
we
restarted
cool
blood,
it
will
play
make
like
flicker
the
support
into
not
ready
and
then
to
ready
again
and
it
costs
a
lot
of
problems,
especially
on
the
high
load
environments.
So
I
think
this
is
the
biggest
performance
issue
that
we
can
address
and
if
you'll
do
that
in
this
release
and
a
few
other
things
out
of
this
list,
it
will
be
big
win
for
customers.
F
Hey everyone. So, a couple weeks ago there was a conversation about finally dropping support for a number of the CLI flags that the kubelet has, and one effect of this is a concern for us: we rely on these flags in our downstream, in OpenShift. And for a while we've been talking, at least passively, about having, you know, drop-in configuration support like systemd does.
F
Cryo
also
has
this
where
there's
like
a
cuba.com.d
and
that
configuration
will
override
you
know
in
Alpha,
lexicographic
order,
override
the
cuba.com
and
so
I
wanted
to
bring
it
up
here.
I,
mostly
to
talk
about
the
approach
of
proposing
the
enhancement.
To
me,
this
feels
like
an
internal
implementation
detail
of
the
cubelet.
It
doesn't
really
need
to
communicate
with
much
else.
It
just
has
to
start
recognizing
a
new
directory
of
files
and
to
me
that
implies
that.
F
Maybe
we
don't
need
a
feature
gate
for
this
feature
and
maybe
not
even
a
cup
but
I
wanted
to
see
what
folks
here
were
thinking
in
terms
of
the
scope
of
such
a
change
and
whether
this
seemed
appropriate.
But
my
idea
would
basically
be
at
some
point
in
the
next
release,
or
so
Cubit
starts.
Additionally,
looking
at
files
in
cuba.com
got
the
and
overriding
the
its
configuration
based
on
those,
but
do
people
think
do.
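A minimal sketch of what such drop-in support could look like, assuming a hypothetical /etc/kubernetes/kubelet.conf.d directory (the paths, file names, and field below are illustrative; the actual mechanism was still being proposed here): each fragment would be a partial KubeletConfiguration, applied over the base file in lexicographic order.

```yaml
# Hypothetical fragment: /etc/kubernetes/kubelet.conf.d/10-reserved.conf
# Later files (e.g. 20-*.conf) would override earlier ones.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
systemReserved:
  cpu: "500m"
  memory: "1Gi"
```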
F
Yeah
I
agree:
I,
don't
know
of
any
other
components
that
have
Behavior
like
this,
with
the
exception
of
cryo,
which
is
you
know,
an
ancillary
project
and
we
largely
totally
overwrite
lists
and
most
of
the
configuration
Fields.
There
are
some
configuration
fields
that
we
special
handle
and
append
and
I
find
this
a
little
bit
awkward.
F
But
it's
just
kind
of
it
makes
it
a
lot
less
clunky
for
those
configuration
Fields
so
like
it
would
be
my
thought
to
have
the
fields
just
override
by
default
and
then
maybe
come
up
with
a
fancier
scheme
in
the
future.
If
we
need
something
more
fine-grained.
F
We
we
use
so
the
one
that
we'll
use
it
for
is
for
the
system
reserved
Theo,
so
we'll
use
a
drop-in
configuration
file
to
override
system
preserved
in
some
cases
and
I
mean
basically
any
configuration
field
that
we're
currently
supporting
in
the
cubelet.
F
We
I
we
either
use
environment
variables
or
CLI
Flags,
both
both
of
which
are
fairly
clunky,
because
we
have
to
basically
add
a
systemd
drop
in
file
to
change
the
unit
configuration
of
cubelet.
But
if
we
don't
have
the
CLI
flag,
we'd
have
to
change,
I
could
say
file
directly,
but
there
will
be
multiple
writers
to
that
configuration
file.
F
So
for
any
of
the
fields
that
we
allow
our
users
to
configure
it'll
make
like
reconciling
that
configuration
more
difficult,
whereas
if
we
were
to
have
the
drop
in
file
support,
we
could
just
you
know,
have
each
of
those
fields
be
an
individual
file
and
just
add
that
file
to
the
drop-in
configuration
directory.
D
If we prioritize the next step, it's worth seeing if we want to have anything in SIG Architecture, like an explicitly documented blessing (or not) of the particular pattern. But I would be a plus-one on going to SIG Arch and saying that at least in the Node SIG we're interested in supporting this, and if we want to make it part of the definition of other Kubernetes component binaries, maybe we could write down some of the initial rules outside the SIG. But I mean, this seems totally sensible to me.
A
The usual discussions are about conflicts over writes when there is a central config and drop-in configs, like patches on top of it. So I wonder if it will be applicable for Windows, or is there some Windows-specific scenario for that?
H
I've
wanted
to
do
this
for
container
D
I
haven't
seen
many
for
the
cubelet,
but
I'll
take
a
look.
I'm
sure
could
be
helpful.
A
No, I'm just trying to understand what the next steps are. If you have a comment on that, please go ahead.
F
Yeah,
that's
that's
exactly
what
I
was
talking
about
so
I
suppose
our
next
step
sounds
like
I'll,
propose
it
to
sick,
Arch
and
see
if
other
components
in
the
ecosystem
have
interest
in
supporting
this,
and
then
we
can
begin
to
bike
shed
process
of
deciding
the
mechanisms
by
which
it
works
and
then
either
and
probably
after
the
conversation
with
sick
Arch
will
just
it
will
kind
of
inform
whether
we
need
a
full
cap
and
feature
gate
for
this.
F
You
know
to
go
across
a
bunch
of
different
fields
or
a
bunch
of
different
projects
or
not
so
yeah
that
helps
out
I
probably
will
open
up
an
issue
as
well
to
track,
and
so
we
can
have
some
ink
say
goodness
conversation
as
well,
so
yeah
talk
to
cigarch,
open
up
an
issue
and
we'll
see
where
it
goes
from
there.
A
Okay.
Next
one
I
wanted
to
raise
awareness
about
sidecar
cap
progress.
So
we
have
this
Uber
issue
that
we
posted
last
released
and
in
super
issue
we
split
everything
into
something
we
want
to
do
before.
Api
change,
sorry,
I
didn't
update
it.
It's
probably
done
something
we
do
is
a
huge
PR
and
something
we
do
after
API
change.
A
Yeah
I
need
to
update
this,
so
we
did
everything.
So
basically,
we
have
a
big
PR.
Now
big
PR
contains
main
functionality
for
sidecars,
basically
whole
life
cycle
and
a
link
to
PRS
here,
like
first
link.
A
So
this
PR
is
ready
to
be
reviewed
and
we
hope
to
get
it
merged
very
early
in
a
release.
Cycle
and
I
have
few
PRS,
as
I
mentioned
in
Uber
issue
after
API
change,
so
like
we
will
do
some
minor
changes
after
this
PR
is
done
like
things
like
ohm
score
adjustment.
We
didn't
want
to
pollute
this
big
PR
because
it's
not
a
main
functionality
and
it
can
be
added
later
so
we
we
want
this
PR
to
be
immersed
as
soon
as
possible.
A
Okay. Are there any other topics?