From YouTube: Kubernetes SIG Node 20200526
Meeting Agenda:
https://docs.google.com/document/d/1j3vrG6BgE0hUDs2e-1ZUegKN4W4Adb1B6oJ6j-4kyPU
B
Hi, Chris here. OK, so last week I think Victor put the question, under the topology manager topic, of what level of alignment we need. So the question was: is it possible to combine the proposal from Alexander's presentation with this enhancement, especially with pod- versus container-level alignment? Can we configure this preference in the pod spec? It's hard right now to understand how they can be combined, because the proposal from the Intel folks doesn't spell out the changes that will be needed in the topology manager and all the other managers.
B
So that's why I asked on Slack about when the presentation would be and how to combine that. Maybe a similar idea would work with the annotations, like Alexander presented in his demo, where containers are co-located on the same NUMA node or placed on different nodes based on the annotations, as presented in the example.
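For reference, the annotation-driven co-location being described looks roughly like the sketch below. The annotation keys and images are illustrative only (the demo's actual keys weren't captured in the recording); the idea is a pod-level hint that a node-side resource manager reads at container-creation time:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: data-plane
  annotations:
    # Hypothetical keys: co-locate the two workers on one NUMA node,
    # and keep the logger on a different one.
    numa.example.com/affinity: "rx-worker,tx-worker"
    numa.example.com/anti-affinity: "logger"
spec:
  containers:
  - name: rx-worker
    image: example.com/dpdk-worker
  - name: tx-worker
    image: example.com/dpdk-worker
  - name: logger
    image: example.com/logger
```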
A
I apologize; I know many of us in the US are coming out of a holiday-weekend stupor, so I was not personally thinking about Kubernetes this much over the weekend to have come up with a brilliant idea here. I don't know if others have. I guess what I'm curious about is: when you say it wouldn't work with the other managers, which challenges do you mean?
B
Okay,
because
we
wanted
to
give
the
pot
topology
managers
called
defined
as
the
flag
and
also
the
the
proposal
from
Intel
guys
that
will
give
the
new
algorithm
and
it
mentions
that
it
will.
It
will
require
the
changes
of
the
topology
manager
here.
So
I
was
thinking
what
changes
those
are
and
that's.
Why
I
wanted
to
see
their
proposal
more
briefly,
and
if
we
wanted
to
move
the
move,
the
topology
manager
scope
to
the
desktop
of
the
policy
to
the
to
the
pots
back.
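For context, the flag-based configuration being referred to is the kubelet-level topology manager setting; a scope setting alongside the policy is what this proposal argues for, and it is roughly what later shipped (around v1.20) as `topologyManagerScope`, though at the time of this meeting it was still under discussion. A minimal KubeletConfiguration sketch:

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# Policy decides how strictly resources must be NUMA-aligned.
topologyManagerPolicy: single-numa-node   # none | best-effort | restricted | single-numa-node
# Scope decides whether alignment is computed per container or per pod.
topologyManagerScope: pod                 # container (default) | pod
```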
C
I think you are probably mixing two independent topics. The pull request you are talking about, with improvements to the algorithm of the topology manager, is not related to pod- versus container-level scope at all. It's more about how devices will be chosen by the topology manager, or actually how any resource will be chosen by the topology manager. And this annotation part, the example written here, is more about...

This is actually for the topology manager to decide the affinity of resources between the containers with a view of the whole pod, the whole node. So the consumer of those annotations might be the topology manager, but it will probably be a different mechanism: not an admit-container hook but an admit-pod hook. And, frankly speaking, I am not ready to fully answer the question of how it would affect the topology manager.
C
With the annotations, for us it works like this: what I showed is, for us, consumed by the resource manager, because we have independent events. We have create pod, or RunPodSandbox in terms of the CRI APIs, and we have CreateContainer. So every time we get a CreateContainer API call, we can look at the annotation and see whether we need to find affinity or anti-affinity of resources.
C
Well,
I
mean
okay,
try
to
explore,
explain
this
more
simple
use
case.
So
when
you're,
when
you're
saying
a
heliport
and
I
have
a
containers
like
two
containers
for
data
plane
and
both
of
them
will
do
where
a
PC
using
the
shared
memory.
So
in
this
scenario
you
want
to
have
a
CPU
cores
close
together
because
of
l3
cache,
which
might
be
used
and
when
you
want
to
have
a
memory
controllers
to
be
close
together.
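A minimal sketch of that use case, with illustrative names and images: two Guaranteed containers sharing memory over /dev/shm. Under container scope each container may be pinned to a different NUMA node; pod scope would keep both on the same node, sharing the L3 cache and memory controller:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shm-data-plane
spec:
  containers:
  - name: producer
    image: example.com/producer
    resources:
      requests: {cpu: "2", memory: 2Gi}
      limits: {cpu: "2", memory: 2Gi}    # limits == requests => Guaranteed QoS
    volumeMounts:
    - {name: shm, mountPath: /dev/shm}
  - name: consumer
    image: example.com/consumer
    resources:
      requests: {cpu: "2", memory: 2Gi}
      limits: {cpu: "2", memory: 2Gi}
    volumeMounts:
    - {name: shm, mountPath: /dev/shm}
  volumes:
  - name: shm
    emptyDir:
      medium: Memory    # tmpfs-backed shared memory for the IPC
```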
B
Do you think a similar idea could somehow be applied to the scope in the pod spec, or do you have any other idea? Because some time ago we came up with the topology policy per pod, specified as a topology policy field, and I was thinking about maybe something like a topology field, and in the topology field maybe a scope field, or something like that. I wanted to ask for ideas here.
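The pod-spec field being floated here is purely hypothetical; nothing like it exists in the Pod API. The shape of the idea might look like:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example
spec:
  topology:                # hypothetical field, sketched for discussion only
    policy: single-numa-node
    scope: pod
  containers:
  - name: app
    image: example.com/app
```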
C
My idea, at least why I presented it, was to use the same pattern of affinity and anti-affinity, because right now this pattern is used for the bigger scope, for what you can do with node selection: saying this pod should have affinity to, say, a database, so that some API which is using the database gets co-located together with it.
C
So it would be a similar pattern of saying preferred-during-execution or required-during-execution, and the same structure of matcher, where you can match by container name, or match by a label name, or match by some other attribute. The annotation demoed a couple of minutes ago is the simplest case, just matching the name. All right, give me a second, I'll try to find a more sophisticated example with a label matcher.
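The existing cluster-level pattern being used as the model here is pod affinity/anti-affinity, which already has the required/preferred variants and label matchers; the NUMA-level idea would mirror this structure at the container level:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: api
spec:
  affinity:
    podAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: database          # run on the same node as the database
        topologyKey: kubernetes.io/hostname
    podAntiAffinity:
      preferredDuringSchedulingIgnoredDuringExecution:
      - weight: 100
        podAffinityTerm:
          labelSelector:
            matchLabels:
              app: api             # spread replicas apart when possible
          topologyKey: kubernetes.io/hostname
  containers:
  - name: api
    image: example.com/api
```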
B
So the question here is whether going with the affinity and anti-affinity would be desired.
B
Right, frankly speaking, my question was about the most desired way to give the scope of the policy. If we were to put this scope in the pod spec, what would be the desired way: would it be the affinity/anti-affinity way, or some other way? Because if you want to do this as a flag, this is the way we proposed.
A
I think, rhetorically, my question is more: if we hadn't done container scope first, and we did pod scope first or we did both at the same time, do we think more users on this forum would desire pod versus container scope? Like, there's the default, as in what the kubelet default is, and then there's which one will actually get more use and interaction from the audiences that are concerned about the capability. I'm curious whether more folks see their use cases aligning with pod versus container scope.
B
Okay. So, in my opinion, the container scope, container by container as I said, will remain the basis. But our target here, as we mentioned before, is the 5G workflow; it is one of the use cases that we see. As you can see on the PR, the whole 5G workflow requirement is described in one of the PR's comments, and this is actually our desire for the pod scope here.
F
Sorry, could I interrupt? I think that it will be only a limited number of nodes that need that alignment, because these nodes are UPF nodes, user plane functions, that are designated, as we said before, for high-performance packet processing. So yes, it will be only a limited number of nodes that will use the affinity.
A
I think, if what you said proves true, that's the type of useful feedback I think we would all like to see, because on net it's potentially simpler to do what is desired by putting it in the kubelet versus putting it in the pod spec.
A
That was kind of the other question I was going to have, which is: there's 5G running in some central core in some city, and then there's 5G running out at the far edge, exactly, a small cell in some small part of the country, where you could be running three nodes versus a couple hundred nodes. So getting to a one-size-fits-all outcome is sometimes difficult, for sure. I agree, yeah.
E
I'm just going to throw in NVIDIA's kind of use case for this stuff. We really are in the mode where we assume that we have big serving machines with lots of GPUs and lots of CPUs, and we can totally just dedicate a set of nodes with a per-node policy and ship jobs to those nodes: jobs that make sense for alignment under one policy go to one node, and jobs that need alignment under a different policy go to another node. You don't have to have as much granularity over how the pod is going to be laid out like this.
C
And I would like to throw in one more use case, the reason why we thought about container affinity and anti-affinity, and why we actually implemented the bigger scope of matchers. We have a scenario where a customer runs a data center: we have training jobs which run during the night, we have medium-priority workloads which can be slowed down a bit because of those jobs, and we have high-priority jobs which shouldn't be slowed down.
A
Yeah, so I feel like there's probably no use case that can't be satisfied by the type of affinity/anti-affinity semantics you've expressed, Alex, except that, similar to pod affinity and anti-affinity, it becomes really hard for people to understand. When I think about Kubernetes itself, I find a lot of the topology-aware scheduling things to be confusing for users in practice. But they are very expressive, and with that expressiveness I could see how you get a lot of interesting things; but I can also see how that's why people get confused.
A
Yeah, I think it's fair to say, though, that a lot of people get confused by the cluster-wide scheduling affinity and anti-affinity concepts in practice. Maybe not the authors on this call, but practitioners I know definitely seem to get confused. But I can see how, for a use case like Kevin's, it's kind of not a must-have, and it would actually be easier for him to just have a dedicated node pool; and if Cezary's comment was accurate, I could see that as well too.
A
With all of these proposals, correct. So if we try to think about dimensions to measure them against: the node-level visibility problem at the cluster level is still largely unsolved, and for the moment ignored, maybe for good reason; and then it really just comes down to the user experience of dedicated node pools versus...
C
It's a question of what we want to say. Is it really a reject, or is it "this particular container is a problem"? Because a rejection, or this kind of error, can happen not only because of the topology: it might be because one particular volume or storage could not be attached. So the node is healthy, all the resources are available, but something happened during the attachment of the storage and it failed. So we need to have a way to reschedule, or to return a result saying that this particular pod is problematic.
A
I guess what I think about on this is: I've always imagined Kubernetes as traditionally being either an elastic system where, if a pod can't be scheduled, we just go create a new node on demand, so running in some cloud or some system that has that; or something that, in my mind, looks more like a traditional 5G deployment in a city, which is: I have a couple hundred nodes, maybe some excess capacity, and if I bounce from one node to the other, like if the probability of the scheduler placing a pod on a node that the node ultimately rejects is two or three percent, big deal: it'll just get put on another node and stick there for the next month.
C
But
again
it's
another
problem.
What
we
soonerlater
we
need
to
solve
so
right
now,
with
the
podium
manager
and
tall
album
manager,
resource
managers
we're
dealing
only
when
were
port
container
is
created,
so
it
cannot
react
dynamically
over
to
two
orion
workloads.
What
we
prototyped
it
is.
Actually
we
do
rebalancing.
So
if
we
cannot
feed,
we
can
move
some
of
our
low
priority
workloads
to
in
our
area,
so
we
can
still
feed
yes.
E
We just make sure we constrain the shape of a job, and we know that it's only ever going to ask for this many CPUs or this many GPUs, and we can make sure it lands on a node that's dedicated to, for example, only serving jobs that have two GPUs on them. Because we know a machine has eight GPUs total, and we only put two-GPU-sized jobs on that node, we know they'll all fit by the time they land there.
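A sketch of that dedicated-pool approach, with made-up label names: nodes reserved for two-GPU jobs are labeled (and their kubelets configured with the desired per-node policy), and jobs are steered there with a nodeSelector:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: serving-job
spec:
  nodeSelector:
    example.com/pool: two-gpu    # hypothetical label on the dedicated nodes
  containers:
  - name: server
    image: example.com/server
    resources:
      limits:
        nvidia.com/gpu: 2        # only 2-GPU jobs land here, so an 8-GPU box always fits four
```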
E
But again, it assumes you have lots of nodes and you're okay with a little bit of delay, because you're doing something like machine-learning training and it's okay if it takes a little bit longer, because it's this really long-running job anyway that just needs to be batched up and run to completion at some point. So it kind of goes back to what Derek was saying before: there are different use cases based on how you imagine your cluster being used and the type of jobs that can land there.
E
I think it's a good question. I don't want it to just sit here stalled because of a lack of comments. If there are people that are very opposed to it, we should know that, but I feel like they would have spoken up by now. I guess these are all interesting discussions, but yeah, we need to be able to decide whether we move forward on it or not, at the end of the day.
C
Well, the point of my presentation was that we can have generic algorithms which solve various use cases extensively: from very restricted environments, like the 5G networking stuff, to scenarios where you have very complex user-space applications, like database-plus-something applications, or just generic optimization related to NUMA for different applications. The question is what we do next; I agree, what do we do next. As you say, we really have multiple options.
C
Like how to align, say, native resources: I was actually thinking about it, but sorry, I don't have a straight picture in my mind yet to propose it very carefully. But the overall idea of what I would suggest is that we come up with a list of potential alternatives, like how we can enumerate a number of these things, write down the pros and cons for each of these solutions, and then we decide where we go.
A
Please
don't
take
this
wrong.
I
mean
this
in
the
nicest
way
possible
a
lot
of
times,
I.
Think
some
of
the
information
we
put
out
as
a
community
is
contradictory,
and
it's
like
you
had
to
make
some
choice
to
make
one
workload.
One
thing
run
optimally,
but
then
you
can't
do
a
deployment,
because
you're
asked
to
also
make
another
choice.
That
does
something
in
the
opposite
direction
and
then
you
just
kind
of
end
up
with
like
a
frustrated
user
and
so
depending
on
which
voice
any
one
of
us
are
representing
on
this
caller.
A
In this forum, where we evaluate an issue, I sometimes worry that unless we are prescriptive in saying this is the domain and this is the recommended configuration for the kubelet, and we're very clear on it, then we're just going to end up with a bunch of unhappy users. So what I would really love to see from all these proposals is someone assertively saying: I recommend that when you're running this particular type of CNF and this type of 5G deployment, you always configure the kubelet with this profile. And then, as a body of knowledge in the community, we can all say: yeah, I agree with that, that makes sense, and it's the latest state of the art. Or, in the NVIDIA use case: if you have excess capacity and you're running multiple GPUs, and yadda yadda, then these things follow. I want to see more recommendations tied to actual designs in practice and not theoretical ones.
A
I
asked
like
who
is
ever
gonna
run
the
Container
policy.
If
you
have
the
pod
policy,
it's
like
seriously,
should
we
deprecate
container
policy
because
18
months
from
now
whoever
is
still
sitting
around
is
gonna,
be
thinking
what
do
these
two
things
do
and
when
and
how
would
I
use
them,
and
so
I
really
want
to
know
like.
G
Like that, it's just asking that we get better best practices and guidance in the documentation around all of these different flavors of workloads, and how to configure the kubelet and the topology manager and all the device plugins in a way that's optimal for that particular workload, without making users' heads spin.
E
In this specific case, I think the reason the topology manager even kind of defaulted to having container-level scope to start with is that that's how the CPU manager was designed. And even then, the CPU manager doesn't really have a whole lot of motivation as to why it was done at container scope versus pod scope, because, as I understand it, most of the use cases for the CPU manager are aligning on just CPUs alone.
C
Chiming in here: the CPU manager was implemented primarily because of the network function functionality. So again, it's a DPDK application: an application which requests exclusive cores and runs each of them at 100% CPU load in a busy loop, because it's polling PCI devices. That's why the CPU manager was mostly thinking about only the Guaranteed class, only exclusive allocation, only one container.
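The canonical shape the CPU manager's static policy was built around is a Guaranteed pod with integer CPU requests, which gets exclusive cores; the hugepages request is typical for a DPDK poller (image and sizes here are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: dpdk-app
spec:
  containers:
  - name: poller
    image: example.com/dpdk-poller
    resources:
      requests:
        cpu: "4"                # integer CPUs + limits == requests => exclusive cores
        memory: 2Gi
        hugepages-1Gi: 2Gi      # pre-allocated hugepages for packet buffers
      limits:
        cpu: "4"
        memory: 2Gi
        hugepages-1Gi: 2Gi
```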
C
At least my group is currently looking at the whole-node scope because, as a reference, we have workloads which are higher priority alongside ones which have a low priority. One of the examples, also, is AVX-512: if you end up in a whole-node scenario where some workload starts to use the heavy AVX instruction set, you start to throttle below base frequency.
A
The bottom line is we have millions and millions of dollars of engineering minds here sitting together trying to find an outcome, and that's the type of thing that drives me insane, right, because the more we add, the more we need to support, and the harder it becomes to grow people here. But for the core question, though, for where I'm at right now: I'm not hearing why I wouldn't want to just run pod scope moving forward.
G
It'll probably be used in practice by some very, very specific use cases in the 5G area, but other than that, probably not. So anything from a configuration or configurability perspective, and a user-experience perspective, that will hide that level of granularity in a way that makes it easier to use, while giving the flexibility to those 5G deployers that are configuring this kind of stuff at that level of granularity: I would side with that type of solution.
E
No, in our setup we have an init container that basically requests the exact same set of resources that our app container will ultimately request, so that it can do some sort of prerequisite check to make sure that it actually got what it was supposed to, and if it didn't, it kills the container. It doesn't happen in practice, but we want that extra level of check there before it actually starts the app container. And then we also have a logging sidecar that sits alongside it.
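That setup, sketched with illustrative names (the check itself is whatever the init image runs):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: gpu-job
spec:
  initContainers:
  - name: prereq-check
    image: example.com/prereq-check    # verifies the expected devices are present, fails otherwise
    resources:
      limits:
        nvidia.com/gpu: 2              # same resources the app container will request
  containers:
  - name: app
    image: example.com/app
    resources:
      limits:
        nvidia.com/gpu: 2
  - name: logging-sidecar
    image: example.com/log-shipper
```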
A
I think every major deployer of Kubernetes sets a reservation. There's actually an interesting Medium article I found the other day that describes the heuristic differences between the various cloud vendors in how they apply it, but I'd be hard-pressed to find anybody that is not setting a reservation. I think it's very uncommon to set both system-reserved and kube-reserved versus just setting system-reserved.
A
Okay, and I think the reason we had the two flags is that we wanted to separately budget, or place, the kubelet and the runtime in a different cgroup than the rest. I don't think anybody in the world has actually done that, and so just having the one flag is probably what you're seeing more commonly used.
F
Is
because
okay
note
just
give
you
a
background,
because
okay
future
was
developed
before
to
to
increase
the
stability
of
of
note
and
for
this
future
as
direct
site,
we
specify
two
flags
system
reserved
and
qubit
reserved
so
that
we
can
reserve
man,
for
example,
memory
for
system
and
accumulated
demons.
No.
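The two reservations being described have KubeletConfiguration equivalents of the --system-reserved and --kube-reserved flags; for example (sizes illustrative):

```yaml
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
systemReserved:
  cpu: 500m
  memory: 1Gi      # held back for OS daemons (systemd, sshd, ...)
kubeReserved:
  cpu: 500m
  memory: 1Gi      # held back for Kubernetes daemons (kubelet, container runtime)
```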
F
With respect to the KEP, I think the introductory sections are ready for review, and I was wondering whether, for example, only the introductory sections could be reviewed first, before the whole review. So I think it is a question about best practices, about how to carry out the review; that might be for you, Derek, yeah.
F
So that's great, because this KEP is in fact lengthy, so it would ease the process of reviewing to just focus on the introductory and individual sections. Okay, so that's great. And yeah, we just wanted to ask because it is in fact my first KEP, and I don't really know the details of what the checklist would be to follow and how to approach the review. And Derek, since you agreed to review the KEP, I was thinking we could add you to the reviewers section, yeah.
A
For myself, I want to take one final pass on the pod-level proposal, but I feel like we're iterating towards a consensus, and that seems like a good thing to do. And Kevin, on your proposal on device manager enhancements, I haven't heard any wide disagreement, so it seems like a good thing to do for me as well. So hopefully folks are feeling like our meeting at a higher cadence is helping us reach better outcomes. For those who are going to join the one o'clock, I'll see you at 1:00 Eastern.