From YouTube: Kubernetes SIG Windows 20210202
A: All right, let's get started. I see a couple of new people on the call. I don't know if anybody wants to introduce themselves or not; we kind of started that a couple of weeks ago, and I'll try to keep that up periodically.
A: All right, I guess I'll start with some of the agenda items. First thing to note: last week, Jay here hosted VMware's TGI Kubernetes session, focused on Windows.
C: I thought it was cool that everyone showed up to help, and it was great to meet Danny and the rest of the crew, so I appreciate everyone coming to hang out. I think, Mark, at some point maybe we should do a more polished demo of it that isn't so hacky and rushed, but at least we got it out there. So thanks a lot for giving us those artifacts last minute; I think it was really cool that we were able to show people that, so I appreciate it.
A: Yeah, we had a very hacked-up set of binaries, but I think Perry and Jay were able to get node exporter running in a privileged container without much fuss and have the metrics just stream to Prometheus. That was great; it's exactly how I'd hoped a lot of these privileged container scenarios would go on Windows.
A: All right, let's go into some of the KEP updates. Next Tuesday, February 9th, is the enhancement cutoff, and as far as I'm aware we're tracking these two KEPs here. Arvind, do you want to give some updates? I believe you got some feedback from SIG CLI about what the user flow for this is going to be.
D: Yes, I can give an update. I also want to call out that I changed the title of the KEP to "node service log viewer", in case you need to update that spreadsheet, Mark. Okay, so I spent some time last week cleaning up the KEP; I've added as much detail as I could from my side. I also reflected what I found out from the SIG CLI folks: the SIG CLI folks are asking us not to introduce a new subcommand.
D: They are suggesting that we extend kubectl logs and point it at node objects, or introduce a level of filters that I have shown as examples out here. The other thing I'm trying to work on is figuring out, from a coding perspective on kubectl, what changes I need to make.
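As a sketch of the flow being discussed (the actual flag and filter names were still being worked out with SIG CLI at this point, so these invocations are hypothetical, not final syntax), extending kubectl logs toward node objects might look like:

```shell
# Point kubectl logs at a node object instead of a pod
# (hypothetical syntax; flag names were not settled yet):
kubectl logs node/worker-node-1

# Filter down to a specific node service, e.g. the kubelet:
kubectl logs node/worker-node-1 --service=kubelet

# On Windows this could surface service or event logs;
# on Linux it could query journald.
```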
A: Yeah, I took a look at this, and I think it's much less of a work in progress now. It's probably good to get feedback from other folks too, so removing the work-in-progress marker would help with that. I think it's in a place where we can start reviewing and iterate as we go.
A: One of the main things I wanted to highlight here is the motivation for doing this. I believe Arvind said the SIG CLI folks suggested we should just inherit a lot of the RBAC rules and permissions around accessing node logs this way, so there would be a whole lot fewer security implications than with the initial approach. I think that's probably the right move here: a lot fewer moving pieces to work with.
D: Yeah, and that's the piece I'm trying to confirm, Mark, by just looking at the code and trying to see if I can quickly come up with some kind of proof of concept to assert that what they're saying actually matches reality. If they're telling me this, I'm assuming it's all true and it should be doable.
A: Okay, the other two things. One is that SIG Auth has a bi-weekly meeting; one is tomorrow at, I believe, 11am PST. If we're concerned about that, we can bring this up. I'm bringing the privileged container KEP up for them too, so I'll make sure to add this, and we can hopefully have time to go a little bit deeper into this with the SIG Auth folks, who would probably also be able to let us know definitively.
A: Yeah, if you have a list of the people who were helpful with this, we can start a chat on Slack. I'll make sure it's added to the agenda, because I know they sometimes cancel their meetings if there's nothing on the agenda, and then we can try to get the right people to show up for the discussion.
A: Okay, sounds good. The other thing is, I presented this to SIG Node, I think two weeks ago, asking for some SIG Node reviewers. I didn't get much feedback there; I think they're quite busy. I don't know if you happen to know anybody else at Red Hat who would also be in the SIG Node crowd and could take a look at some of the changes, especially with some of the requirements this has for systemd and journald.
A: I think, as we've mentioned before, there is precedent for enhancements targeting certain Linux distros or flavors, so that shouldn't be an issue, but I think we would need SIG Node to at least look at it and say, "yeah, this makes sense," in order to progress on the KEP. I'll also try to figure out who the right folks are for this.
A: Okay, that's good. Anything else of note here, Arvind? Or does anybody have any questions?
D: Yeah, I have nothing further to add. Folks, if you get a chance, just review it and see if I need to provide more details. The only thing I was not fully 100% sure of, given I'm not introducing an API per se, is how the graduation criteria work, so I threw something in there around feature flags.
A: I started to look at this and haven't finished my review comments; I'll do that today. We had a similar deal with some other Windows features, mainly containerd support, where it was interesting because the vast majority of the changes needed were actually in containerd and hcsshim, not in the Kubernetes repositories. So we have a little bit of precedent on that, and it would be good to know what feature flags, if any, would be needed for kubectl.
D: I think you commented on that. Yeah, I've added the feature flag stuff, and from what Maciej told me, kubectl is not feature-flagged per se. He said what you should do is, when kubectl makes the call, clearly state that this is an alpha feature, and then on the kubelet side it should return the proper message: if the feature is not enabled, it should return "feature not enabled" or something like that.
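The behavior described here (kubectl itself is not feature-gated, so the gating has to live on the kubelet side) could be sketched roughly as follows; the gate name "NodeServiceLogs" and the handler shape are illustrative assumptions, not the real kubelet code, which uses its own feature-gate machinery:

```go
package main

import "fmt"

// featureGates is a stand-in for the kubelet's feature-gate machinery;
// the gate name "NodeServiceLogs" is a hypothetical placeholder.
var featureGates = map[string]bool{
	"NodeServiceLogs": false,
}

// nodeLogsHandler sketches the suggested kubelet-side behavior:
// if the alpha feature is disabled, return a clear "feature not
// enabled" error instead of serving the logs.
func nodeLogsHandler(service string) (string, error) {
	if !featureGates["NodeServiceLogs"] {
		return "", fmt.Errorf("feature NodeServiceLogs not enabled")
	}
	return "logs for " + service, nil
}

func main() {
	// Gate disabled: caller gets the explanatory error.
	if _, err := nodeLogsHandler("kubelet"); err != nil {
		fmt.Println(err)
	}
	// Gate enabled: logs are served.
	featureGates["NodeServiceLogs"] = true
	out, _ := nodeLogsHandler("kubelet")
	fmt.Println(out)
}
```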
E: One quick thing: I asked the gMSA developers to review this, and yesterday they reviewed it (they're still reviewing it in detail), but from the looks of it, it will work for the CCG logs that Windows creates for gMSA, and for the Windows events, so they should be able to pull those out just with the filter on logs and that specific event. I think that was a good use case. One thing I do have a question on, and I'll leave comments as I'm reviewing, is the beta graduation criteria.
D: Yeah, that's where I wasn't sure about almost all of it. If you can help me with that, I'll gladly reflect your comments on the beta graduation criteria.
C: Oh, there's that new scale and reliability stuff too, right? Does that affect this, Arvind? I don't know. There's some new group of people whose entire job is to make sure that we don't introduce things that are unscalable, or something, I don't know. I suspect, if it's log-collection related, it might.
A: Yeah, I think for this type of enhancement it should be fairly straightforward. A lot of the concerns are about whether you're introducing new objects of any sort that might cause an exponential explosion of API calls that could strain the API server, and also just making sure there's a very clear path for disabling these features if, for whatever reason, they cause an issue in production. Here, with a lot of kubelet flags, there's not a whole lot you can do in real time: you can either stop the kubelet, change a flag, and start it again, or you can bring up a new node. But it's not as sensitive to changes as the API server, for example. Still, it would be good to get this looked at. I believe the scalability requirements don't come in until you're trying to promote to beta, not for the first stages.
A: I think it's better to have it in. Okay, but I think KEPs will now be required to go through a review from somebody from production readiness review, and depending on what stage they're targeting, they'll require certain information. But the more information earlier, the better, I think.
D: Okay, yeah, I've filled out whatever is needed process-wise. Apparently you need another subdirectory somewhere else with SIG Windows and the KEP that you're introducing, and you need to tag someone from the PRR approver team. They said you could just pick someone randomly, so I just picked the first person.
A: Yep. I usually wait until we have a few more overall reviews and approvals on the KEPs, especially from other SIGs, before reaching out to those folks, just in case anything changes significantly. They're usually fairly quick to respond, but yeah, we'll keep an eye on that.
A: All right, I wanted to give a brief update on the Windows privileged container KEP. The most noteworthy thing here is that, after a lot of discussions with Lantao from Google, a member of SIG Node, we've decided that the best course of action here would probably be to not refer to these as privileged containers in the CRI layer or the Kubernetes objects. The reasons for that are in a pretty long discussion in the GitHub review comments.
A: I'll highlight this here too in the updates. Instead, I believe the current plan is to call them "host job containers", which is a reflection of how these containers actually work on the host and how they're technically not containers. So I've added a note here too.
A: I encourage anybody who's interested to go and read more about this in the KEP, but for the sake of being brief: we discussed in great depth reusing the existing privileged flag, or even adding a new privileged flag onto the Windows security context options, and we decided that the clearest path forward was to not call it "privileged", for a couple of reasons.
A: The first one is that "privileged containers" has a very distinct meaning on Linux, inherited from what it meant in Docker, and there are a number of conversations even for Linux today about what the privileged field should actually mean in CRI and in containerd and CRI-O, because there's a lot more granularity now over what kind of access a container can have.
A: So by not calling it privileged, we sidestep that whole conversation. Also, these containers will have vastly different capabilities on Linux and Windows, so we wanted to really encourage users to go and read the docs to understand what they're enabling and what will work with these host job containers, rather than just assuming they work the same way as on Linux and running into all sorts of issues.
A: The next argument was that, for this enhancement, using runAsUserName and/or the gMSA credential spec to run these containers is the primary way of limiting their access to host resources. There's a section in the KEP outlining this.
A: But I wanted to make sure that that field and the host job (or privileged) field lived as close together as possible on the deployment specs, just to make it clear that they apply together. The other big thing was to help with API validation. With Windows privileged containers, as mentioned elsewhere in this KEP, currently all of the containers in a pod must either be privileged or not privileged, and with API validation there is no good way to know, at the time the spec hits the API server, whether it's targeting a Linux node or a Windows node. That means it would be hard to enforce the rule that all containers are privileged or not. So those are the updates; this seemed to make the most sense to a couple of the folks who were reviewing it as well.
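To make the placement discussion concrete, a pod spec under this proposal might look roughly like the following sketch. The hostJob field name and its exact placement are assumptions drawn from this discussion (the KEP may end up naming things differently), while runAsUserName is the existing Windows security context field:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: node-exporter-win
spec:
  nodeSelector:
    kubernetes.io/os: windows
  containers:
  - name: node-exporter
    image: example.com/node-exporter-windows:latest   # placeholder image
    securityContext:
      windowsOptions:
        # Hypothetical field from the discussion: opts the container into
        # running as a Windows job object on the host, instead of reusing
        # the Linux-style `privileged: true` flag.
        hostJob: true
        # runAsUserName limits which host resources the processes can
        # access, which is why the two fields live side by side.
        runAsUserName: "NT AUTHORITY\\SYSTEM"
```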
A: As I mentioned, we're still looking for approval from SIG Auth and SIG API Machinery, and I'm going to bring this up with SIG Auth. I think their biggest concern was that, as I mentioned here, there are a lot of policy and monitoring tools already looking at this privileged field, but hopefully, with sufficient evidence and arguments, we can make the case that it makes more sense to call this a host job.
A: Those are the biggest updates I wanted to highlight here. Does anybody have any questions on these updates? Any concerns about the name? I'm totally open to a different name if we can find an awesome one. We discussed this with a couple of folks inside Microsoft, and this was the most succinct, descriptive name we could think of.
A: Yeah, we did mention that because Kubernetes has the notion of Jobs, and Windows has a pretty distinct definition of what job objects are too, we thought we could make it clearer that this is really an abstraction around those Windows job objects and less about containers. But let's move that into Slack and give some more time to the rest of the agenda. Yeah, naming is hard.
A: All right, two agenda items left. James, do you want to talk about some of the testgrid dashboard updates you were proposing?
B: Yeah, so we talked about this at length last week. I opened a PR to create a new dashboard where we can initially put just our informing tests and get those to turn green, and then we can decide if we want to add additional tests later on. If you scroll all the way down, Mark, you can see what our dashboard looks like right now. I named the dashboard "signal" and only added our three informing jobs.
B: I think there's an aks-engine containerd job, and then there are two GCE jobs, 2019 and 1909. That would be the initial signal; we can focus on turning that green, and then we can determine whether we add more. But take a look; I left some notes in the PR. Please give me some feedback before we merge it.
A: Yeah, I'm in favor of these changes. I haven't approved them yet because I wanted to give people some time to comment, but I think this is a good step forward. I think this will help people focus on looking at the dashboards that we want them to look at.
F: Yeah, I feel the same way. It helps us focus on just the jobs we're interested in. I think the releases dashboard has grown too big for us to identify problems now. Thanks for that.
A: All right, I guess we can go into the last agenda item on here, which is Windows Defender overhead with containerd. I know there have been some talks recently on Slack, and a couple of folks have reached out to either Muzz, myself, or other folks at Microsoft to discuss this.
A: So I wanted to open up a discussion here, especially because we're very interested in seeing the specific scenarios that other folks may be encountering around containerd and Windows Defender performance, which we may or may not have been able to replicate here at Microsoft. So, Muzz, do you want to help drive this part of the conversation, or I can take over?
E: Sure. So I think basically there is the Defender overhead of 10% CPU, as you mentioned in the notes, Mark, which we are aware of, and the Defender team, as we speak, has been working on it; they're close to a resolution, I believe. But that only accounts for the 10 percent CPU overhead when running the containers, right? We have also heard about these full-image cases, where pulling images with containerd and unpacking them takes some percentage more CPU.
C: You know, in those network policy jobs that we run sometimes on containerd, what we see is that the job just kind of hangs after a while, and it sends a whole bunch of kubectl exec requests.
C: Now, I haven't done the math to figure out whether it always happens with Defender on versus off, but when I run those same network policy jobs on EKS or AKS with Docker, I don't tend to see that instability.
G: The last time I saw this, it was more to do with the extracting step, so it wasn't necessarily the pulling of the image; it was when it was extracting it. I've seen it with third-party antivirus in the past, you know, like McAfee, but I haven't seen it for a while.
E: You were saying it's during extraction. So for cases like Perry and Jay's: if you can get me that extraction performance overhead and how you're reproducing it, it would then be great if you can turn off Defender and see if it still happens. If it doesn't, then we know it's coming from Defender. The same goes for Jeremy: if you can give me a little bit more detail, we can start looking at it. If it's not Defender, then it's something else.
H: So yeah, I can speak a little bit here. Unfortunately, I'm not the engineer who actually did the analysis, so I'm kind of just going to paraphrase what they've said, but basically, we've tried a few things.
H: We have this tool that gives us metrics on pull performance, I believe for the servercore image, on various machine types on GCE, and we have observed a general regression. We've tested modes with and without Defender enabled, but even with Defender disabled, we're seeing that pull performance compared to Docker is still slower. I don't know the number off the top of my head, and we've also done this analysis with pigz enabled and disabled as well.
E: I see. So the thing you were talking about, Jeremy, doesn't seem like a Defender issue, right? That's a separate containerd issue. If you can provide me more details, our container team can also take a look; I can ask them to see if there's something known there, or they can look into it.
H: Yeah, that'd definitely be appreciated. Is there any sort of metric in particular that you want? Is it basically that you just want Windows Defender disabled and then pigz disabled? What kind of information do y'all need?
E: Yeah, that sounds good. Just to summarize, there seem to be three issues going on. One is the known Defender issue, the 10% CPU spike, which the Defender team is working on. The second one is what Perry and Jay are reporting, and the third one is what Jeremy is reporting. So yeah, let's treat them separately and gather the data on those two.
H: Would it help if we started a document? Just because there's a lot of data that will be involved in this investigation, and then we'll just put it on the Slack channel.
B: Yeah, sure. Sorry, James, I couldn't hear you. Can we open an issue on containerd? That might be a central place to collect all this.
H: Okay, can you link that in the Slack channel so that we can add the information there?
A: Yeah, it's just on containerd. Is it pretty recent?
C: Yes, I've created one, where I'm exploring the failure of the network policy tests on containerd-based clusters, and I feel like I need to work with Perry (we were just talking about this today) to figure out something that's a little bit more precise, in terms of actually creating a real bug-report-type thing. But for now, if anybody's interested in the issue that I'm seeing, I'll post the link to it in here; let me just look it up.
G: It was when I started to use Cluster API that I noticed it; it was just taking forever to pull some images down. I was trying to exclude the processes for containerd and ctr at the time, but I will go back and check; it wasn't to do with that particular issue.
G: It was more to do with the first initial pull of the source images being the problem. When you have a fresh, brand-new node and you do the first pull, it takes absolutely forever. But if you remove Windows Defender, it seemed to extract faster. That was why I noticed it, but it was a while ago, so I need to go back and gather all that.
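For anyone trying to reproduce this, one way to rule Defender in or out during pull and extraction is to add process exclusions for the containerd binaries using Defender's Add-MpPreference cmdlet; the install paths below are assumptions for a typical setup, and this is for testing only, not a production recommendation:

```powershell
# Exclude the containerd daemon and ctr client from real-time scanning,
# so image extraction is not scanned inline (testing only):
Add-MpPreference -ExclusionProcess "C:\Program Files\containerd\containerd.exe"
Add-MpPreference -ExclusionProcess "C:\Program Files\containerd\ctr.exe"

# Or temporarily disable real-time monitoring entirely while measuring:
Set-MpPreference -DisableRealtimeMonitoring $true
```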
B: Everybody, I guess I got nominated. Are we done with talking about containerd, or does anybody have anything else they want to say there?
F: Yeah, so I want to talk about the file and disk container issues that I created. As of now, we do not have any presubmits using containerd; I switched to Docker, and as we noted last week and the week before, there were issues with storage. I want to bring that up, but I think Jing is not here.