From YouTube: 20210621 SIG Arch Code Org
A: Okay, so before we get started — since there are folks who are new here — the way this usually goes is we take one KEP. We spend around 10 to 12 minutes reading that one KEP, and after that we start discussion on it, ask questions if any, and cover related things. For this particular KEP I've also linked the PR that was made for the KEP to be merged. So after reading the KEP, if possible, we can go through that and see what the process usually looks like. I'm going to start a timer for around 10 minutes; after that, if you need more time, we can always extend it, but we're going to set a timer for 10 minutes. Just one second... okay, and the timer starts in three, two, one. Yeah.
D: Okay, we are looking at the StatefulSet one — adding a minReadySeconds field.
E: All right, so this is the KEP. At a high level, what we want to do is add a particular field called minReadySeconds to StatefulSet.

E: We already have this particular field available in other workload controllers, like Deployment and DaemonSet, but we do not have something similar for StatefulSet. As far as real-world applications go, I think I gave one example in the KEP; another example could be a particular program that sits behind a load balancer — for any software behind a load balancer, we want to wait a certain amount of time before it actually gets taken out of the load balancer.

E: These are some of the scenarios we have faced within OpenShift, and we noticed this could be beneficial to the community as well, so we started working on this feature upstream in the 1.22 timeframe. It has been around for a long time — I think people have been asking for it — and Mayank has started working on another aspect of this particular feature: we have something called maxUnavailable as well, which is not yet available in StatefulSet.
D: Yeah, one question I had right off the bat: how does this relate to all the probes, and all the things that we have for probes?
E: Yeah, that's a good question, and I think we have mentioned it in the Kubernetes documentation as well. The readiness probe, the liveness probe, and the health checks give you a certain amount of time before a pod can be deemed ready. But that does not guarantee that it is ready to take requests — in the sense that you can have an additional delay which says: I would like to make sure that the pod stays ready for a certain amount of time before it is deemed as available.

E: So all the containers within the pod should be ready for that much time. The liveness checks and the health checks are technically at the container level, whereas "all the containers in the pod should be ready for this much time" is what minReadySeconds means.
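The rule E is describing — a pod only counts as available once every container in it has been ready, continuously, for at least minReadySeconds — can be sketched in a few lines. This is a simplified illustration of the semantics, not the actual controller code; the function and argument names are invented for the example:

```python
from datetime import datetime, timedelta

def is_available(all_containers_ready: bool, ready_since: datetime,
                 min_ready_seconds: int, now: datetime) -> bool:
    """A pod is 'available' only once it has been ready, without
    interruption, for at least min_ready_seconds."""
    if not all_containers_ready:
        return False
    return now - ready_since >= timedelta(seconds=min_ready_seconds)

t0 = datetime(2021, 6, 21, 12, 0, 0)
# Ready for only 10 seconds: not yet available with minReadySeconds=30.
print(is_available(True, t0, 30, t0 + timedelta(seconds=10)))  # False
# Ready for a full 30 seconds: now counted as available.
print(is_available(True, t0, 30, t0 + timedelta(seconds=30)))  # True
```

Note that if readiness is ever lost, the "ready since" timestamp resets, which is why a flapping pod never graduates to available under this rule.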
D: Last question: when does this specific amount of time start? Is it after the other probes are done and they give a thumbs-up, or before, or...?
E: The first part is done by the kubelet, in the sense that those health checks are done by the kubelet. After that, say the container is ready: if all the containers in the pod are ready for that much time, then it would be considered available. So the first level of checking is done at the kubelet.
D: I think we might have to add more color to the KEP.
B: So yeah, I mean, I think that's a useful distinction. I have a similar question, if I can jump in. This says making it ready — or available, sorry — but available in whose opinion? Do we prevent the endpoints from being marked ready, or are they visible as ready in the normal fashion, but we don't move on to the next item if we're doing, say, a StatefulSet update or something?
E: The update is at the replica level — it's not at the container level, it's at the replica level. So once we check for that particular pod to be available for x amount of time, we would go on to the next one, which is part of the update.
B: So I'm coming at this from — sorry, I couldn't tell if you were still going. No? Go ahead.
B: Sorry — I'm looking at this from the networking side, because you mentioned load balancers in here, but I'm not sure I see the integration point. A load balancer will proceed as soon as the pod is marked ready, and there is no additional "ready, plus wait a little while longer" in all the load balancer controllers. So this additional wait either blocks the actual setting of readiness, like the pod readiness probes do, or it doesn't — but this doc doesn't say which one.
E: I'm not sure about all the load balancers, but the way we were thinking about it, at least from an OpenShift perspective, is that we would like the pod to be available for that amount of time, which includes the minReadySeconds as well. Say in the template you specify minReadySeconds as 10 seconds or 15 seconds: in the load balancer configuration we would increase the value by 10 seconds or 20 seconds — whatever value we give.
D: Tim, does that help what you were thinking about?

B: I mean, yes, except it calls out somewhere in here — let me find it, I thought I saw...
E: Right, so that has to be manual — there is no automatic way at this point in time. We would add that additional amount manually; that's what we have.
E: When OpenShift gets installed, it actually creates the load balancers as well, and the wait time would include the minReadySeconds that we have for those components.
B: Load balancers map to services, and services map to pods, and we don't know whether there's a StatefulSet in between — the chain of trust doesn't include the workload controller; that's off to the side.

E: That is right. So that's why it is manual at this point in time.
E: That is the plan, but at this point in time we wanted to make it manual. If we get feedback that, yes, people would like this to be included, we can do it. For us, being static is good enough at this point in time.
B: So this is the fun part, right? I understand that it's good enough for you, but I'm not sure I want to add API that doesn't do much unless it does stuff for everybody else. I know it's really minor — it's one field, right — and it wasn't actually clear whether there are any controller changes other than the StatefulSet controller.
B: But if we're going to add an API, I'd like to make it worthwhile — make it hold its own weight. As an alternative question: why would we not push this all the way down to pods, and then make the kubelet not set ready until after all the other ready signals have been set, plus this time?
E: Yeah — that is something mentioned in the KEP as well. If you look at the open issue, I think Brian Grant, Clayton, and the rest of the folks were interested in having it within the pod, but there was not much effort in that direction — or someone tried it in the past and it did not go through, because of some technical reasons that I'm not fully aware of, or cannot fully remember at this point in time.
E: Yeah, it has been around for a long time, and it was mentioned there that we wanted to have it within the pod spec, so that everyone can actually use it.
E: But in one of the points in the non-goals we have explicitly mentioned that it takes a lot of change, and we want to have consistency across all the controllers. That was the reason we are pushing it in this direction. But say, in the future, if all of us are fine with having it there, or moving in that direction, we can do it.
B: Funny that two years ago Brian suggested the same thing that I just came up with. Sorry — I'm skimming the issue now.
B: Okay, so I agree completely with what Brian said in his May 31, 2019 comment, which was: I also think minReadySeconds should be added to Pod, but I'm not opposed to adding it to StatefulSet, since other controllers have it. Basically, the precedent is strong and I would not be against adding it. What I would question, though, is: if we set it in the controller — sorry, in StatefulSet — and we later wanted to add it to Pod?
E: My understanding is we could not just remove it completely from the StatefulSet spec; we would also have it within the Pod.
B: So imagine I have a StatefulSet where I specify minReadySeconds in my pod template and minReadySeconds in my StatefulSet. If I set them both to 30 seconds, the obvious behavior is: it's going to wait 30 seconds for the kubelet to set it, and then it's going to wait another 30 seconds for the workload.
E: Yeah, so my understanding is: if it reaches the pod spec, that will take precedence over whatever is specified — since you cannot remove the API, the value that we have in the pod spec would take precedence over the value that we have in the workload controller.
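The precedence rule E describes here is hypothetical — no pod-spec minReadySeconds field exists today — but the resolution order being discussed could be sketched like this (all names invented for illustration):

```python
def effective_min_ready_seconds(pod_spec_value, workload_value):
    """Hypothetical resolution: a value set on the pod spec would win;
    otherwise fall back to the workload controller's field, treating
    'unset' as zero (no extra delay)."""
    if pod_spec_value is not None:
        return pod_spec_value
    if workload_value is not None:
        return workload_value
    return 0

print(effective_min_ready_seconds(10, 30))      # 10: pod spec wins
print(effective_min_ready_seconds(None, 30))    # 30: workload fallback
print(effective_min_ready_seconds(None, None))  # 0: no delay anywhere
```

Under this rule the two values would not stack, avoiding the double 30-second wait B raised.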
E: But there is something that we did not think through here — how the migration would work.
E: Does that answer the question?

B: I mean, yes — it's not a satisfying answer, but it's an answer.

E: That is true.
E: And to answer your other question: I think even the deployment controller was the same — we do not have the endpoints controller getting updated or getting involved. It's just the deployment status that gets reflected, with availableReplicas and minReadySeconds. So I do not know if it needs a change across the board, the way minReadySeconds within the pod spec would.
B: Yeah, I'm just reading the linked comment, and I don't think it's a particular problem — it's bringing parity. If this was net new I'd probably push harder against it, so I won't get in the way. I do feel like it's half of a solution, but it's the same half that Deployment has, so...
E: Yeah, so we are trying to promote it to beta next release. Perhaps we can have this discussion again during the beta, and then...
D: Yeah, but I do want to capture what we talked about today and update the KEP, so we don't have to try to remember: where did we talk about this, when did we talk about this, do we have notes, and do we have a recording?
D: You know, when Tim was reading Brian's comments, he came up with the same thing. So it helps if we update the KEP right away with the discussion from today, so we don't have to re-litigate it one more time. — Sure.
B: Yeah, sorry — mostly I'd just like a few words saying that this doesn't automatically handle load balancers; that is a separate issue. This really is just bringing parity with the other workload controllers — that's the main objective. If I had seen that while reading it, I probably would have just not even commented.
E: Okay, yeah — in the non-goals section we have mentioned that we wanted to bring it on par with the rest of the controllers, but I think I can make it explicit in the KEP.
E: Okay, I'll update the load balancer part at least. Awesome — so I need to head out now; thanks a lot for having me. If you have any questions, just let me know, or you can ping me on Slack.
D: Yeah, we can follow up on the thread, Ravi, on SIG Architecture. I'm sure the rest of the folks will have some comments. — Sure, thank you. Take care.
A: Okay, so before we start, I just had one general question about the process. In this particular KEP I saw a field called "required monitoring metrics." Is that something that's part of the template and required by all KEPs, or is that per-KEP — is it okay if a particular KEP doesn't require it? Or do features need to have some form of monitoring provided in their implementation?
D: PRR — it's a sub-heading. Okay, so basically the point here is: when you put this feature in production, you need to get some metrics out; you need to figure out how people are using it, things like that. So we are trying to look at it from an operator point of view and not just the developer point of view, which is what we typically end up doing. That's the reason why we added the PRR section, and we are making sure that people are thinking about it.
D: [inaudible] how to use a Google timer for two minutes or ten.
A: Okay — two minutes are remaining for what?
A: I don't think either of the KEP authors is here. I tried contacting someone from SIG Windows, but I don't think they could make it.
B: It says somewhere in here there's an RBAC — a new RBAC scope for node logs, as opposed to pod logs.
B: So you can grant RBAC to node logs as a subresource. Sorry — who's beeping?
C: So I had a few questions that I wrote down. I see there's a giant discussion, though, and I didn't read all the comments, so this might have been discussed already — I'll rely on you to answer those. Probably the control-plane logs specifically: I know cloud providers lock down access to the control plane.
B: I think that is a great question that isn't clearly covered here: in the case of managed control planes, does this require that you give access to those? I think the answer is no, right? As in, you look at a GKE cluster: you don't see the master — it's not there, right? It's not a node. It might be running a kubelet or it might not be.
B: I mean, it is, but it might not be, right? And so I don't think there's a limitation on that. They're not control-plane nodes; they're just an abstract control plane. So I think that could be clarified here. That's actually a really good point.
C: Yeah, that's the first thing my mind jumped to — managed nodes; I wonder how that works. Okay, the next one: the specific call-out for systemd was really interesting. Was limiting the scope to just systemd talked about anywhere?
B: It's funny — I find myself as the representative of this, because I was asking these same questions. Is somebody taking notes that we can forward on? Are we capturing this, other than the recording? — Oh yeah, I'm writing down a few notes. — Awesome.
B: So the initial draft that I read, if my memory serves me, was very journalctl-focused, and the other things were there too. In fact, I think it was originally a flag: you had to say whether you wanted something from the journal or from a file. The heuristic that is now in there was sort of at my behest — to say, I bet most of the time you don't need to specify the difference, so only specify the difference if you have to.
B: The heuristic is: first we'll look in the journal, then we'll look in some commonly named files, and we'll probably be able to find it. And if we make it through alpha and it turns out that that's not true, then we can come back and revisit it.
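The lookup order B describes can be sketched as a small function. This is only an illustration of the stated heuristic — journal first, then well-known file names — with invented service and file names, not the proposed implementation:

```python
def locate_log(service, journal_units, files):
    """Return where a service's logs would be read from: prefer the
    systemd journal, then fall back to commonly named plain files."""
    if service in journal_units:
        return ("journal", service)
    # Illustrative fallback names; the real set of candidates is up
    # to the implementation.
    for candidate in (service + ".log", service + "/current"):
        if candidate in files:
            return ("file", candidate)
    return ("not-found", None)

# On a systemd host the journal wins; otherwise fall back to files.
print(locate_log("kubelet", {"kubelet"}, set()))      # ('journal', 'kubelet')
print(locate_log("kubelet", set(), {"kubelet.log"}))  # ('file', 'kubelet.log')
```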
D: Okay, one related observation I had: this heavily relies on our kubelet — our version of the kubelet — and people have other kubelets too. So how do we end up putting this in the conformance program? That would be my two cents on it.
B: Yes, and if I can pile on the same question: one thing that's not clear to me is the requirement for interleaving across nodes. Is that going to be client-side? How is that done, given that timestamps are not always equally accessible across different logging formats? If I'm pulling from /var/log/something.log, it may not even have timestamps. How am I — are we going to loosely parse those?
D: Yeah, this seems very Red Hat-centric, to be honest.
B: It is. And the thing that I was sort of convinced of as I read this is that almost everybody's using systemd anyway, and my own experience aligns with that, as much as I'm not so happy about it — oh, this is recorded; it is, it is. That seems to be the state of the world, so that's probably okay. And we're going to go through an alpha: if it turns out to be a big problem for a lot of people, we'll have to revisit it. That was my feeling.
D: I agree. The one thing I really like about this is that it will help Windows folks — people who want to poke into Windows and get logs back. It makes their life easier without having to go to your infrastructure and pull stuff out. I definitely like that aspect of it.
B: One of the things that I disliked about this is that kubectl logs was sort of a terminal command before; now it becomes both a branch and a terminal, and it's unclear whether those flags are additive. The principle that I think Cobra espouses is that the flags add as you traverse the tree, and so all the flags that apply to kubectl logs don't apply to kubectl logs nodes.
C: There's definitely some incompatibility here. I actually probably would have wanted to see this as its own command — kubectl node-logs — because, having worked with Cobra, it's really hard to introduce a subcommand to a command that already exists. It's really difficult to pull that off correctly.
B: Yeah, you should jump on that. I've never used Cobra in great detail outside of trivial stuff. Maybe it makes sense to have a --resource flag or something — that was my first thought. But then there are flags here that are defined with different defaults, and having the default set differently for different resource types is weird.
B
I
understand
the
desire
not
to
have
more
top
level
cube,
cuddle
commands
right
like
in
hindsight.
Probably
this
should
have
always
been
cube,
cuddle
logs
pod
right,
but
it's
not
and
I'm
not
a
cute
cuddle
maintainer.
So
that's
on
you
guys
to
figure
out
what
you
want
to
do
forward.
Nor
am
I
a
great
cli
designer
so.
C
D
B: I noticed that kubectl logs has a --tail flag and this has a --tail flag, but they have different defaults.
B
They
seem
to
mean
the
same
thing,
but
one
is
defaulted
to
negative
one,
and
one
is
defaulted
to
zero
and
cube
cuddle
logs
has
a
follow,
but
this
doesn't
and
there
was
not
one
more.
What
was
the
other
one?
I
lost
track
of
it.
H
H
H: Yeah, so journalctl is actually very verbose — or rather, there are too many events there. So if you have a cluster with 20 nodes and you just pull all of the journalctl logs onto your master, is there some sort of throttling involved, or do you just overwhelm everything that's available?
F: Also, one thing that was mentioned in the risks was: what if the logs we are getting are too big in size? And it was written in the second line that, to mitigate this, we can document that node logs should always be rotated. What does rotating the logs mean? I didn't understand that.
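For readers with the same question: rotation means the active log file is periodically archived and restarted, with only a fixed number of old archives kept, so total disk usage stays bounded. A minimal sketch of size-based rotation — integers stand in for file sizes, and nothing here is taken from the KEP itself:

```python
from collections import deque

def rotate(archives: deque, active_size: int, max_size: int, keep: int):
    """When the active log reaches max_size, archive it, drop archives
    beyond `keep`, and restart writing on an empty file."""
    if active_size >= max_size:
        archives.appendleft(active_size)  # newest archive goes in front
        while len(archives) > keep:
            archives.pop()                # delete the oldest archive
        active_size = 0                   # fresh, empty active log
    return archives, active_size

arch, size = rotate(deque([5, 5]), 10, 10, 2)
print(list(arch), size)  # [10, 5] 0 -- total disk use stays capped
```

Tools like logrotate and journald apply this same idea by size or by age; the KEP's mitigation is simply to document that operators should have it enabled.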
F: So would that mean that, if we are to support this, there would be a change in how much data we store — how much past data is stored?
D: But yes, there are companies like Splunk that are built essentially for handling very large logs, and journalctl output is one of the logs in that space. This is not.
D: Yeah, that was the other angle I was thinking about: a lot of these log-aggregator-type companies end up doing sidecars to capture this kind of information. So are we competing with them to some extent? Are they doing something better that we need to learn from?
G: I feel like they're just doing the log forwarding there — it's similar to how we're doing logs of any pods. So even if we provide this option, it'll just forward this to Splunk. And I think it would be helpful for teams where there are newcomers, and they don't have to give out all these node-level permissions.
G: I think I would like this. In my team, we actually take a month or two to see how people are — whether we can trust them — and then we provide access. So for us, if we have this option, we could just let people read from Splunk, and we don't even have to worry about the access controls. That's something I was interested in as soon as I saw this.
B: Cool. One of the things that wasn't clear to me also — sorry, last parting note — is all the stuff like grep and interleaving: it wasn't clear to me whether that's being done client-side, kubelet-side, or in the API server in the middle. So it would be interesting to figure out: if they're expecting the CLI to do all that heavy lifting, how much buffer does it need to store in memory, or is it going to buffer to disk? Or are we expecting the kubelet to do all that?
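The interleaving question, at least for the happy path where every line does carry a timestamp, amounts to a k-way merge of per-node streams. A client-side sketch with an invented record layout — the hard cases B raises, missing or unparseable timestamps, are exactly what this glosses over:

```python
import heapq

def interleave(*node_streams):
    """Merge per-node log streams, each already sorted by timestamp,
    into one globally ordered stream. heapq.merge keeps only one
    record per stream in memory at a time, so the client never has
    to buffer whole logs."""
    return list(heapq.merge(*node_streams))

node_a = [(1, "node-a", "kubelet starting"), (4, "node-a", "pod ready")]
node_b = [(2, "node-b", "kubelet starting"), (3, "node-b", "probe ok")]
for line in interleave(node_a, node_b):
    print(line)  # timestamps come out in order 1, 2, 3, 4
```

A streaming merge like this keeps client memory small, but it only works if every source yields comparable timestamps — which is the open question for plain files under /var/log.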